| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
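
The rows below are the first records of the dump, rendered one field at a time in the column order given above. For programmatic use, the following is a minimal sketch of loading and unpacking a record with the `datasets` library; the Hub repo id `rasdani/github-patches` is taken from the `source` column and the `train` split name is an assumption, so adjust both if the actual repo or split differ.

```python
# Minimal sketch: load the dump and unpack one record.
# Assumptions (not confirmed by this page): the data is hosted on the
# Hugging Face Hub under "rasdani/github-patches" (the `source` column value)
# and exposes a "train" split.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens_prompt"])

# `verification_info` is a JSON string bundling the golden diff, the issue
# text, and the pre-patch file contents ("before_files").
info = json.loads(row["verification_info"])
print(sorted(info))  # expected keys: before_files, golden_diff, issue
```
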
gh_patches_debug_5344 | rasdani/github-patches | git_diff | nilearn__nilearn-2822 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use plot_event in a example
The function `plot_event` has currently no example linked to its [doc](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_event.html#nilearn.plotting.plot_event).
It wouldn't be too costly to use it in one example somewhere.
</issue>
<code>
[start of examples/04_glm_first_level/write_events_file.py]
1 """Example of a events.tsv file generation: the neurospin/localizer events.
2 =============================================================================
3
4 The protocol described is the so-called "archi standard" localizer
5 event sequence. See Pinel et al., BMC neuroscience 2007 for reference.
6 """
7
8 print(__doc__)
9
10 #########################################################################
11 # Define the onset times in seconds. Those are typically extracted
12 # from the stimulation software used.
13 import numpy as np
14 onset = np.array([
15 0., 2.4, 8.7, 11.4, 15., 18., 20.7, 23.7, 26.7, 29.7, 33., 35.4, 39.,
16 41.7, 44.7, 48., 56.4, 59.7, 62.4, 69., 71.4, 75., 83.4, 87., 89.7,
17 96., 108., 116.7, 119.4, 122.7, 125.4, 131.4, 135., 137.7, 140.4,
18 143.4, 146.7, 149.4, 153., 156., 159., 162., 164.4, 167.7, 170.4,
19 173.7, 176.7, 188.4, 191.7, 195., 198., 201., 203.7, 207., 210.,
20 212.7, 215.7, 218.7, 221.4, 224.7, 227.7, 230.7, 234., 236.7, 246.,
21 248.4, 251.7, 254.7, 257.4, 260.4, 264., 266.7, 269.7, 275.4, 278.4,
22 284.4, 288., 291., 293.4, 296.7])
23
24 #########################################################################
25 # Associated trial types: these are numbered between 0 and 9, hence
26 # correspond to 10 different conditions.
27 trial_idx = np.array(
28 [7, 7, 0, 2, 9, 4, 9, 3, 5, 9, 1, 6, 8, 8, 6, 6, 8, 0, 3, 4, 5, 8, 6,
29 2, 9, 1, 6, 5, 9, 1, 7, 8, 6, 6, 1, 2, 9, 0, 7, 1, 8, 2, 7, 8, 3, 6,
30 0, 0, 6, 8, 7, 7, 1, 1, 1, 5, 5, 0, 7, 0, 4, 2, 7, 9, 8, 0, 6, 3, 3,
31 7, 1, 0, 0, 4, 1, 9, 8, 4, 9, 9])
32
33 #########################################################################
34 # We may want to map these indices to explicit condition names.
35 # For that, we define a list of 10 strings.
36 condition_ids = ['horizontal checkerboard',
37 'vertical checkerboard',
38 'right button press, auditory instructions',
39 'left button press, auditory instructions',
40 'right button press, visual instructions',
41 'left button press, visual instructions',
42 'mental computation, auditory instructions',
43 'mental computation, visual instructions',
44 'visual sentence',
45 'auditory sentence']
46
47 trial_type = np.array([condition_ids[i] for i in trial_idx])
48
49 #########################################################################
50 # We also define a duration (required by BIDS conventions).
51 duration = np.ones_like(onset)
52
53
54 #########################################################################
55 # Form an event dataframe from these information.
56 import pandas as pd
57 events = pd.DataFrame({'trial_type': trial_type,
58 'onset': onset,
59 'duration': duration})
60
61 #########################################################################
62 # Export them to a tsv file.
63 tsvfile = 'localizer_events.tsv'
64 events.to_csv(tsvfile, sep='\t', index=False)
65 print("Created the events file in %s " % tsvfile)
66
[end of examples/04_glm_first_level/write_events_file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/04_glm_first_level/write_events_file.py b/examples/04_glm_first_level/write_events_file.py
--- a/examples/04_glm_first_level/write_events_file.py
+++ b/examples/04_glm_first_level/write_events_file.py
@@ -63,3 +63,10 @@
tsvfile = 'localizer_events.tsv'
events.to_csv(tsvfile, sep='\t', index=False)
print("Created the events file in %s " % tsvfile)
+
+#########################################################################
+# Optionally, the events can be visualized using the plot_event function.
+from matplotlib import pyplot as plt
+from nilearn.plotting import plot_event
+plot_event(events, figsize=(15, 5))
+plt.show()
|
{"golden_diff": "diff --git a/examples/04_glm_first_level/write_events_file.py b/examples/04_glm_first_level/write_events_file.py\n--- a/examples/04_glm_first_level/write_events_file.py\n+++ b/examples/04_glm_first_level/write_events_file.py\n@@ -63,3 +63,10 @@\n tsvfile = 'localizer_events.tsv'\n events.to_csv(tsvfile, sep='\\t', index=False)\n print(\"Created the events file in %s \" % tsvfile)\n+\n+#########################################################################\n+# Optionally, the events can be visualized using the plot_event function.\n+from matplotlib import pyplot as plt\n+from nilearn.plotting import plot_event\n+plot_event(events, figsize=(15, 5))\n+plt.show()\n", "issue": "Use plot_event in a example\nThe function `plot_event` has currently no example linked to its [doc](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_event.html#nilearn.plotting.plot_event). \r\nIt wouldn't be too costly to use it in one example somewhere.\n", "before_files": [{"content": "\"\"\"Example of a events.tsv file generation: the neurospin/localizer events.\n=============================================================================\n\nThe protocol described is the so-called \"archi standard\" localizer\nevent sequence. See Pinel et al., BMC neuroscience 2007 for reference.\n\"\"\"\n\nprint(__doc__)\n\n#########################################################################\n# Define the onset times in seconds. Those are typically extracted\n# from the stimulation software used.\nimport numpy as np\nonset = np.array([\n 0., 2.4, 8.7, 11.4, 15., 18., 20.7, 23.7, 26.7, 29.7, 33., 35.4, 39.,\n 41.7, 44.7, 48., 56.4, 59.7, 62.4, 69., 71.4, 75., 83.4, 87., 89.7,\n 96., 108., 116.7, 119.4, 122.7, 125.4, 131.4, 135., 137.7, 140.4,\n 143.4, 146.7, 149.4, 153., 156., 159., 162., 164.4, 167.7, 170.4,\n 173.7, 176.7, 188.4, 191.7, 195., 198., 201., 203.7, 207., 210.,\n 212.7, 215.7, 218.7, 221.4, 224.7, 227.7, 230.7, 234., 236.7, 246.,\n 248.4, 251.7, 254.7, 257.4, 260.4, 264., 266.7, 269.7, 275.4, 278.4,\n 284.4, 288., 291., 293.4, 296.7])\n\n#########################################################################\n# Associated trial types: these are numbered between 0 and 9, hence\n# correspond to 10 different conditions.\ntrial_idx = np.array(\n [7, 7, 0, 2, 9, 4, 9, 3, 5, 9, 1, 6, 8, 8, 6, 6, 8, 0, 3, 4, 5, 8, 6,\n 2, 9, 1, 6, 5, 9, 1, 7, 8, 6, 6, 1, 2, 9, 0, 7, 1, 8, 2, 7, 8, 3, 6,\n 0, 0, 6, 8, 7, 7, 1, 1, 1, 5, 5, 0, 7, 0, 4, 2, 7, 9, 8, 0, 6, 3, 3,\n 7, 1, 0, 0, 4, 1, 9, 8, 4, 9, 9])\n\n#########################################################################\n# We may want to map these indices to explicit condition names.\n# For that, we define a list of 10 strings.\ncondition_ids = ['horizontal checkerboard',\n 'vertical checkerboard',\n 'right button press, auditory instructions',\n 'left button press, auditory instructions',\n 'right button press, visual instructions',\n 'left button press, visual instructions',\n 'mental computation, auditory instructions',\n 'mental computation, visual instructions',\n 'visual sentence',\n 'auditory sentence']\n\ntrial_type = np.array([condition_ids[i] for i in trial_idx])\n\n#########################################################################\n# We also define a duration (required by BIDS conventions).\nduration = np.ones_like(onset)\n\n\n#########################################################################\n# Form an event dataframe from these information.\nimport pandas as pd\nevents = pd.DataFrame({'trial_type': 
trial_type,\n 'onset': onset,\n 'duration': duration})\n\n#########################################################################\n# Export them to a tsv file.\ntsvfile = 'localizer_events.tsv'\nevents.to_csv(tsvfile, sep='\\t', index=False)\nprint(\"Created the events file in %s \" % tsvfile)\n", "path": "examples/04_glm_first_level/write_events_file.py"}]}
| 1,835 | 167 |
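
A record like the one above carries everything needed to check its golden diff offline. The sketch below is one hypothetical way to do that with `git apply --check`; the helper name, the scratch-repo layout, and the use of a temporary directory are choices made here for illustration, not part of the dataset.

```python
# Hypothetical helper: materialize a record's before_files in a scratch git
# repo and ask `git apply --check` whether its golden diff applies cleanly.
import json
import subprocess
import tempfile
from pathlib import Path


def golden_diff_applies(row: dict) -> bool:
    info = json.loads(row["verification_info"])
    with tempfile.TemporaryDirectory() as tmp:
        repo = Path(tmp)
        subprocess.run(["git", "init", "-q"], cwd=repo, check=True)
        for f in info["before_files"]:
            target = repo / f["path"]
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(f["content"])
        (repo / "golden.patch").write_text(info["golden_diff"])
        result = subprocess.run(
            ["git", "apply", "--check", "golden.patch"], cwd=repo
        )
        return result.returncode == 0
```
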
gh_patches_debug_13652 | rasdani/github-patches | git_diff | inventree__InvenTree-6287 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[PUI] Global login
Global Login (CUI logs in PUI and vice versa) is not working (anymore)
</issue>
<code>
[start of InvenTree/users/api.py]
1 """DRF API definition for the 'users' app."""
2
3 import datetime
4 import logging
5
6 from django.contrib.auth.models import Group, User
7 from django.urls import include, path, re_path
8
9 from rest_framework import exceptions, permissions
10 from rest_framework.response import Response
11 from rest_framework.views import APIView
12
13 import InvenTree.helpers
14 from InvenTree.filters import SEARCH_ORDER_FILTER
15 from InvenTree.mixins import (
16 ListAPI,
17 ListCreateAPI,
18 RetrieveAPI,
19 RetrieveUpdateAPI,
20 RetrieveUpdateDestroyAPI,
21 )
22 from InvenTree.serializers import ExendedUserSerializer, UserCreateSerializer
23 from users.models import ApiToken, Owner, RuleSet, check_user_role
24 from users.serializers import GroupSerializer, OwnerSerializer
25
26 logger = logging.getLogger('inventree')
27
28
29 class OwnerList(ListAPI):
30 """List API endpoint for Owner model.
31
32 Cannot create.
33 """
34
35 queryset = Owner.objects.all()
36 serializer_class = OwnerSerializer
37
38 def filter_queryset(self, queryset):
39 """Implement text search for the "owner" model.
40
41 Note that an "owner" can be either a group, or a user,
42 so we cannot do a direct text search.
43
44 A "hack" here is to post-process the queryset and simply
45 remove any values which do not match.
46
47 It is not necessarily "efficient" to do it this way,
48 but until we determine a better way, this is what we have...
49 """
50 search_term = str(self.request.query_params.get('search', '')).lower()
51 is_active = self.request.query_params.get('is_active', None)
52
53 queryset = super().filter_queryset(queryset)
54
55 results = []
56
57 # Get a list of all matching users, depending on the *is_active* flag
58 if is_active is not None:
59 is_active = InvenTree.helpers.str2bool(is_active)
60 matching_user_ids = User.objects.filter(is_active=is_active).values_list(
61 'pk', flat=True
62 )
63
64 for result in queryset.all():
65 name = str(result.name()).lower().strip()
66 search_match = True
67
68 # Extract search term f
69 if search_term:
70 for entry in search_term.strip().split(' '):
71 if entry not in name:
72 search_match = False
73 break
74
75 if not search_match:
76 continue
77
78 if is_active is not None:
79 # Skip any users which do not match the required *is_active* value
80 if (
81 result.owner_type.name == 'user'
82 and result.owner_id not in matching_user_ids
83 ):
84 continue
85
86 # If we get here, there is no reason *not* to include this result
87 results.append(result)
88
89 return results
90
91
92 class OwnerDetail(RetrieveAPI):
93 """Detail API endpoint for Owner model.
94
95 Cannot edit or delete
96 """
97
98 queryset = Owner.objects.all()
99 serializer_class = OwnerSerializer
100
101
102 class RoleDetails(APIView):
103 """API endpoint which lists the available role permissions for the current user.
104
105 (Requires authentication)
106 """
107
108 permission_classes = [permissions.IsAuthenticated]
109
110 def get(self, request, *args, **kwargs):
111 """Return the list of roles / permissions available to the current user."""
112 user = request.user
113
114 roles = {}
115
116 for ruleset in RuleSet.RULESET_CHOICES:
117 role, _text = ruleset
118
119 permissions = []
120
121 for permission in RuleSet.RULESET_PERMISSIONS:
122 if check_user_role(user, role, permission):
123 permissions.append(permission)
124
125 if len(permissions) > 0:
126 roles[role] = permissions
127 else:
128 roles[role] = None # pragma: no cover
129
130 data = {
131 'user': user.pk,
132 'username': user.username,
133 'roles': roles,
134 'is_staff': user.is_staff,
135 'is_superuser': user.is_superuser,
136 }
137
138 return Response(data)
139
140
141 class UserDetail(RetrieveUpdateDestroyAPI):
142 """Detail endpoint for a single user."""
143
144 queryset = User.objects.all()
145 serializer_class = ExendedUserSerializer
146 permission_classes = [permissions.IsAuthenticated]
147
148
149 class MeUserDetail(RetrieveUpdateAPI, UserDetail):
150 """Detail endpoint for current user."""
151
152 def get_object(self):
153 """Always return the current user object."""
154 return self.request.user
155
156
157 class UserList(ListCreateAPI):
158 """List endpoint for detail on all users."""
159
160 queryset = User.objects.all()
161 serializer_class = UserCreateSerializer
162 permission_classes = [permissions.IsAuthenticated]
163 filter_backends = SEARCH_ORDER_FILTER
164
165 search_fields = ['first_name', 'last_name', 'username']
166
167 ordering_fields = [
168 'email',
169 'username',
170 'first_name',
171 'last_name',
172 'is_staff',
173 'is_superuser',
174 'is_active',
175 ]
176
177 filterset_fields = ['is_staff', 'is_active', 'is_superuser']
178
179
180 class GroupDetail(RetrieveUpdateDestroyAPI):
181 """Detail endpoint for a particular auth group."""
182
183 queryset = Group.objects.all()
184 serializer_class = GroupSerializer
185 permission_classes = [permissions.IsAuthenticated]
186
187
188 class GroupList(ListCreateAPI):
189 """List endpoint for all auth groups."""
190
191 queryset = Group.objects.all()
192 serializer_class = GroupSerializer
193 permission_classes = [permissions.IsAuthenticated]
194
195 filter_backends = SEARCH_ORDER_FILTER
196
197 search_fields = ['name']
198
199 ordering_fields = ['name']
200
201
202 class GetAuthToken(APIView):
203 """Return authentication token for an authenticated user."""
204
205 permission_classes = [permissions.IsAuthenticated]
206
207 def get(self, request, *args, **kwargs):
208 """Return an API token if the user is authenticated.
209
210 - If the user already has a matching token, delete it and create a new one
211 - Existing tokens are *never* exposed again via the API
212 - Once the token is provided, it can be used for auth until it expires
213 """
214 if request.user.is_authenticated:
215 user = request.user
216 name = request.query_params.get('name', '')
217
218 name = ApiToken.sanitize_name(name)
219
220 today = datetime.date.today()
221
222 # Find existing token, which has not expired
223 token = ApiToken.objects.filter(
224 user=user, name=name, revoked=False, expiry__gte=today
225 ).first()
226
227 if not token:
228 # User is authenticated, and requesting a token against the provided name.
229 token = ApiToken.objects.create(user=request.user, name=name)
230
231 # Add some metadata about the request
232 token.set_metadata('user_agent', request.META.get('HTTP_USER_AGENT', ''))
233 token.set_metadata('remote_addr', request.META.get('REMOTE_ADDR', ''))
234 token.set_metadata('remote_host', request.META.get('REMOTE_HOST', ''))
235 token.set_metadata('remote_user', request.META.get('REMOTE_USER', ''))
236 token.set_metadata('server_name', request.META.get('SERVER_NAME', ''))
237 token.set_metadata('server_port', request.META.get('SERVER_PORT', ''))
238
239 data = {'token': token.key, 'name': token.name, 'expiry': token.expiry}
240
241 logger.info(
242 "Created new API token for user '%s' (name='%s')", user.username, name
243 )
244
245 return Response(data)
246
247 else:
248 raise exceptions.NotAuthenticated()
249
250
251 user_urls = [
252 path('roles/', RoleDetails.as_view(), name='api-user-roles'),
253 path('token/', GetAuthToken.as_view(), name='api-token'),
254 path('me/', MeUserDetail.as_view(), name='api-user-me'),
255 path(
256 'owner/',
257 include([
258 path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),
259 path('', OwnerList.as_view(), name='api-owner-list'),
260 ]),
261 ),
262 path(
263 'group/',
264 include([
265 re_path(
266 r'^(?P<pk>[0-9]+)/?$', GroupDetail.as_view(), name='api-group-detail'
267 ),
268 path('', GroupList.as_view(), name='api-group-list'),
269 ]),
270 ),
271 re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='api-user-detail'),
272 path('', UserList.as_view(), name='api-user-list'),
273 ]
274
[end of InvenTree/users/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/InvenTree/users/api.py b/InvenTree/users/api.py
--- a/InvenTree/users/api.py
+++ b/InvenTree/users/api.py
@@ -3,6 +3,7 @@
import datetime
import logging
+from django.contrib.auth import get_user, login
from django.contrib.auth.models import Group, User
from django.urls import include, path, re_path
@@ -242,6 +243,10 @@
"Created new API token for user '%s' (name='%s')", user.username, name
)
+ # Ensure that the users session is logged in (PUI -> CUI login)
+ if not get_user(request).is_authenticated:
+ login(request, user)
+
return Response(data)
else:
|
{"golden_diff": "diff --git a/InvenTree/users/api.py b/InvenTree/users/api.py\n--- a/InvenTree/users/api.py\n+++ b/InvenTree/users/api.py\n@@ -3,6 +3,7 @@\n import datetime\n import logging\n \n+from django.contrib.auth import get_user, login\n from django.contrib.auth.models import Group, User\n from django.urls import include, path, re_path\n \n@@ -242,6 +243,10 @@\n \"Created new API token for user '%s' (name='%s')\", user.username, name\n )\n \n+ # Ensure that the users session is logged in (PUI -> CUI login)\n+ if not get_user(request).is_authenticated:\n+ login(request, user)\n+\n return Response(data)\n \n else:\n", "issue": "[PUI] Global login\nGlobal Login (CUI logs in PUI and vice versa) is not working (anymore)\n", "before_files": [{"content": "\"\"\"DRF API definition for the 'users' app.\"\"\"\n\nimport datetime\nimport logging\n\nfrom django.contrib.auth.models import Group, User\nfrom django.urls import include, path, re_path\n\nfrom rest_framework import exceptions, permissions\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nimport InvenTree.helpers\nfrom InvenTree.filters import SEARCH_ORDER_FILTER\nfrom InvenTree.mixins import (\n ListAPI,\n ListCreateAPI,\n RetrieveAPI,\n RetrieveUpdateAPI,\n RetrieveUpdateDestroyAPI,\n)\nfrom InvenTree.serializers import ExendedUserSerializer, UserCreateSerializer\nfrom users.models import ApiToken, Owner, RuleSet, check_user_role\nfrom users.serializers import GroupSerializer, OwnerSerializer\n\nlogger = logging.getLogger('inventree')\n\n\nclass OwnerList(ListAPI):\n \"\"\"List API endpoint for Owner model.\n\n Cannot create.\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n def filter_queryset(self, queryset):\n \"\"\"Implement text search for the \"owner\" model.\n\n Note that an \"owner\" can be either a group, or a user,\n so we cannot do a direct text search.\n\n A \"hack\" here is to post-process the queryset and simply\n remove any values which do not match.\n\n It is not necessarily \"efficient\" to do it this way,\n but until we determine a better way, this is what we have...\n \"\"\"\n search_term = str(self.request.query_params.get('search', '')).lower()\n is_active = self.request.query_params.get('is_active', None)\n\n queryset = super().filter_queryset(queryset)\n\n results = []\n\n # Get a list of all matching users, depending on the *is_active* flag\n if is_active is not None:\n is_active = InvenTree.helpers.str2bool(is_active)\n matching_user_ids = User.objects.filter(is_active=is_active).values_list(\n 'pk', flat=True\n )\n\n for result in queryset.all():\n name = str(result.name()).lower().strip()\n search_match = True\n\n # Extract search term f\n if search_term:\n for entry in search_term.strip().split(' '):\n if entry not in name:\n search_match = False\n break\n\n if not search_match:\n continue\n\n if is_active is not None:\n # Skip any users which do not match the required *is_active* value\n if (\n result.owner_type.name == 'user'\n and result.owner_id not in matching_user_ids\n ):\n continue\n\n # If we get here, there is no reason *not* to include this result\n results.append(result)\n\n return results\n\n\nclass OwnerDetail(RetrieveAPI):\n \"\"\"Detail API endpoint for Owner model.\n\n Cannot edit or delete\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n\nclass RoleDetails(APIView):\n \"\"\"API endpoint which lists the available role permissions for the current user.\n\n (Requires 
authentication)\n \"\"\"\n\n permission_classes = [permissions.IsAuthenticated]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return the list of roles / permissions available to the current user.\"\"\"\n user = request.user\n\n roles = {}\n\n for ruleset in RuleSet.RULESET_CHOICES:\n role, _text = ruleset\n\n permissions = []\n\n for permission in RuleSet.RULESET_PERMISSIONS:\n if check_user_role(user, role, permission):\n permissions.append(permission)\n\n if len(permissions) > 0:\n roles[role] = permissions\n else:\n roles[role] = None # pragma: no cover\n\n data = {\n 'user': user.pk,\n 'username': user.username,\n 'roles': roles,\n 'is_staff': user.is_staff,\n 'is_superuser': user.is_superuser,\n }\n\n return Response(data)\n\n\nclass UserDetail(RetrieveUpdateDestroyAPI):\n \"\"\"Detail endpoint for a single user.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = ExendedUserSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n\nclass MeUserDetail(RetrieveUpdateAPI, UserDetail):\n \"\"\"Detail endpoint for current user.\"\"\"\n\n def get_object(self):\n \"\"\"Always return the current user object.\"\"\"\n return self.request.user\n\n\nclass UserList(ListCreateAPI):\n \"\"\"List endpoint for detail on all users.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserCreateSerializer\n permission_classes = [permissions.IsAuthenticated]\n filter_backends = SEARCH_ORDER_FILTER\n\n search_fields = ['first_name', 'last_name', 'username']\n\n ordering_fields = [\n 'email',\n 'username',\n 'first_name',\n 'last_name',\n 'is_staff',\n 'is_superuser',\n 'is_active',\n ]\n\n filterset_fields = ['is_staff', 'is_active', 'is_superuser']\n\n\nclass GroupDetail(RetrieveUpdateDestroyAPI):\n \"\"\"Detail endpoint for a particular auth group.\"\"\"\n\n queryset = Group.objects.all()\n serializer_class = GroupSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n\nclass GroupList(ListCreateAPI):\n \"\"\"List endpoint for all auth groups.\"\"\"\n\n queryset = Group.objects.all()\n serializer_class = GroupSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n filter_backends = SEARCH_ORDER_FILTER\n\n search_fields = ['name']\n\n ordering_fields = ['name']\n\n\nclass GetAuthToken(APIView):\n \"\"\"Return authentication token for an authenticated user.\"\"\"\n\n permission_classes = [permissions.IsAuthenticated]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return an API token if the user is authenticated.\n\n - If the user already has a matching token, delete it and create a new one\n - Existing tokens are *never* exposed again via the API\n - Once the token is provided, it can be used for auth until it expires\n \"\"\"\n if request.user.is_authenticated:\n user = request.user\n name = request.query_params.get('name', '')\n\n name = ApiToken.sanitize_name(name)\n\n today = datetime.date.today()\n\n # Find existing token, which has not expired\n token = ApiToken.objects.filter(\n user=user, name=name, revoked=False, expiry__gte=today\n ).first()\n\n if not token:\n # User is authenticated, and requesting a token against the provided name.\n token = ApiToken.objects.create(user=request.user, name=name)\n\n # Add some metadata about the request\n token.set_metadata('user_agent', request.META.get('HTTP_USER_AGENT', ''))\n token.set_metadata('remote_addr', request.META.get('REMOTE_ADDR', ''))\n token.set_metadata('remote_host', request.META.get('REMOTE_HOST', ''))\n token.set_metadata('remote_user', request.META.get('REMOTE_USER', ''))\n 
token.set_metadata('server_name', request.META.get('SERVER_NAME', ''))\n token.set_metadata('server_port', request.META.get('SERVER_PORT', ''))\n\n data = {'token': token.key, 'name': token.name, 'expiry': token.expiry}\n\n logger.info(\n \"Created new API token for user '%s' (name='%s')\", user.username, name\n )\n\n return Response(data)\n\n else:\n raise exceptions.NotAuthenticated()\n\n\nuser_urls = [\n path('roles/', RoleDetails.as_view(), name='api-user-roles'),\n path('token/', GetAuthToken.as_view(), name='api-token'),\n path('me/', MeUserDetail.as_view(), name='api-user-me'),\n path(\n 'owner/',\n include([\n path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),\n path('', OwnerList.as_view(), name='api-owner-list'),\n ]),\n ),\n path(\n 'group/',\n include([\n re_path(\n r'^(?P<pk>[0-9]+)/?$', GroupDetail.as_view(), name='api-group-detail'\n ),\n path('', GroupList.as_view(), name='api-group-list'),\n ]),\n ),\n re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='api-user-detail'),\n path('', UserList.as_view(), name='api-user-list'),\n]\n", "path": "InvenTree/users/api.py"}]}
| 3,083 | 175 |
gh_patches_debug_8407 | rasdani/github-patches | git_diff | bokeh__bokeh-6526 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bokeh 0.12.6 incompatible with Python 2.7.9?
Hi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.
I have a virtual environment with the following contents:
```
> pip freeze
backports-abc==0.5
bkcharts==0.2
bokeh==0.12.6 <--------------
boto3==1.4.4
botocore==1.5.71
certifi==2017.4.17
chardet==3.0.4
click==6.7
cloudpickle==0.3.1
dask==0.15.0 <--------------
distributed==1.17.1 <--------------
docutils==0.13.1
futures==3.1.1
graphviz==0.7.1
HeapDict==1.0.0
idna==2.5
Jinja2==2.9.6
jmespath==0.9.3
locket==0.2.0
MarkupSafe==1.0
msgpack-python==0.4.8
numpy==1.13.0
pandas==0.20.2
partd==0.3.8
psutil==5.2.2
python-dateutil==2.6.0
pytz==2017.2
PyYAML==3.12
requests==2.18.1
s3fs==0.1.1
s3transfer==0.1.10
singledispatch==3.4.0.3
six==1.10.0
sortedcontainers==1.5.7
tblib==1.3.2
toolz==0.8.2
tornado==4.5.1
urllib3==1.21.1
zict==0.1.2
```
When I try to start the dask scheduler, I get the following output:
```
> dask-scheduler
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)
Traceback (most recent call last):
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py", line 404, in start_services
service = v(self, io_loop=self.loop)
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py", line 995, in __init__
scheduler)))
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py", line 11, in __init__
_check_callback(func, ('doc',))
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py", line 12, in _check_callback
sig = signature(callback)
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py", line 85, in signature
for name in func.keywords.keys():
AttributeError: 'NoneType' object has no attribute 'keys'
distributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786
distributed.scheduler - INFO - http at: 0.0.0.0:9786
distributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf
distributed.scheduler - INFO - -----------------------------------------------
^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'
```
I can fix this problem by downgrading Bokeh to 0.12.5:
```
> pip install -U bokeh==0.12.5
...
Installing collected packages: bokeh
Found existing installation: bokeh 0.12.6
Uninstalling bokeh-0.12.6:
Successfully uninstalled bokeh-0.12.6
Running setup.py install for bokeh ... done
Successfully installed bokeh-0.12.5
> dask-scheduler
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786
distributed.scheduler - INFO - bokeh at: 0.0.0.0:8787
distributed.scheduler - INFO - http at: 0.0.0.0:9786
distributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k
distributed.scheduler - INFO - -----------------------------------------------
^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'
```
I was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.
Bokeh 0.12.6 incompatible with Python 2.7.9?
Hi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.
I have a virtual environment with the following contents:
```
> pip freeze
backports-abc==0.5
bkcharts==0.2
bokeh==0.12.6 <--------------
boto3==1.4.4
botocore==1.5.71
certifi==2017.4.17
chardet==3.0.4
click==6.7
cloudpickle==0.3.1
dask==0.15.0 <--------------
distributed==1.17.1 <--------------
docutils==0.13.1
futures==3.1.1
graphviz==0.7.1
HeapDict==1.0.0
idna==2.5
Jinja2==2.9.6
jmespath==0.9.3
locket==0.2.0
MarkupSafe==1.0
msgpack-python==0.4.8
numpy==1.13.0
pandas==0.20.2
partd==0.3.8
psutil==5.2.2
python-dateutil==2.6.0
pytz==2017.2
PyYAML==3.12
requests==2.18.1
s3fs==0.1.1
s3transfer==0.1.10
singledispatch==3.4.0.3
six==1.10.0
sortedcontainers==1.5.7
tblib==1.3.2
toolz==0.8.2
tornado==4.5.1
urllib3==1.21.1
zict==0.1.2
```
When I try to start the dask scheduler, I get the following output:
```
> dask-scheduler
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)
Traceback (most recent call last):
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py", line 404, in start_services
service = v(self, io_loop=self.loop)
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py", line 995, in __init__
scheduler)))
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py", line 11, in __init__
_check_callback(func, ('doc',))
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py", line 12, in _check_callback
sig = signature(callback)
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py", line 85, in signature
for name in func.keywords.keys():
AttributeError: 'NoneType' object has no attribute 'keys'
distributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786
distributed.scheduler - INFO - http at: 0.0.0.0:9786
distributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf
distributed.scheduler - INFO - -----------------------------------------------
^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'
```
I can fix this problem by downgrading Bokeh to 0.12.5:
```
> pip install -U bokeh==0.12.5
...
Installing collected packages: bokeh
Found existing installation: bokeh 0.12.6
Uninstalling bokeh-0.12.6:
Successfully uninstalled bokeh-0.12.6
Running setup.py install for bokeh ... done
Successfully installed bokeh-0.12.5
> dask-scheduler
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786
distributed.scheduler - INFO - bokeh at: 0.0.0.0:8787
distributed.scheduler - INFO - http at: 0.0.0.0:9786
distributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k
distributed.scheduler - INFO - -----------------------------------------------
^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'
```
I was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.
</issue>
<code>
[start of bokeh/util/future.py]
1 ''' Utilities for Py2/Py3 interop.
2
3 '''
4
5 import sys
6
7 def with_metaclass(meta, *bases):
8 """ Add metaclasses in both Python 2 and Python 3.
9
10 Function from jinja2/_compat.py. License: BSD.
11
12 Use it like this::
13
14 class BaseForm(object):
15 pass
16
17 class FormType(type):
18 pass
19
20 class Form(with_metaclass(FormType, BaseForm)):
21 pass
22
23 This requires a bit of explanation: the basic idea is to make a
24 dummy metaclass for one level of class instantiation that replaces
25 itself with the actual metaclass. Because of internal type checks
26 we also need to make sure that we downgrade the custom metaclass
27 for one level to something closer to type (that's why __call__ and
28 __init__ comes back from type etc.).
29
30 This has the advantage over six.with_metaclass of not introducing
31 dummy classes into the final MRO.
32 """
33 class metaclass(meta):
34 __call__ = type.__call__
35 __init__ = type.__init__
36 def __new__(cls, name, this_bases, d):
37 if this_bases is None:
38 return type.__new__(cls, name, (), d)
39 return meta(name, bases, d)
40 return metaclass('temporary_class', None, {})
41
42
43 # There is a problem with using @wraps decorator in combination with functools.partial.
44 # This issue is not present in Python 3.
45 # This redefinition will be triggered only if issue affects user,
46 # otherwise regular definition of @wraps will be used.
47 #
48 # this code snippet was originally posted in following stack overflow discussion:
49 # http://stackoverflow.com/a/28752007
50
51 from functools import wraps, partial, WRAPPER_ASSIGNMENTS
52
53 try:
54 wraps(partial(wraps))(wraps)
55 except AttributeError:
56 @wraps(wraps)
57 def wraps(obj, attr_names=WRAPPER_ASSIGNMENTS, wraps=wraps):
58 return wraps(obj, assigned=(name for name in attr_names if hasattr(obj, name)))
59
60 del partial, WRAPPER_ASSIGNMENTS
61
62
63 # inspect.getargspec and inspect.formatargspec were deprecated in Python 3.5
64 # in favor of the newer inspect.signature introspection
65
66 if sys.version_info[:2] < (3, 4):
67
68 def signature(func):
69 # The modifications in this function are to make results more in line
70 # with Python 3, i.e. self is not included in bound methods, supplied
71 # parameters are not reported in partial, etc. This simplifies the
72 # downstream code considerably.
73 from inspect import getargspec, isfunction, ismethod
74 from functools import partial
75
76 if isfunction(func) or ismethod(func):
77 sig = getargspec(func)
78 if ismethod(func):
79 sig.args.remove('self')
80 return sig
81
82 elif isinstance(func, partial):
83 sig = getargspec(func.func)
84 if 'self' in sig.args: sig.args.remove('self')
85 for name in func.keywords.keys():
86 sig.args.remove(name)
87 for val in func.args:
88 del sig.args[0]
89 return sig
90
91 else:
92 sig = getargspec(func.__call__)
93 sig.args.remove('self')
94 return sig
95
96 def format_signature(sig):
97 from inspect import formatargspec
98 return formatargspec(*sig)
99
100 def get_param_info(sig):
101 return (sig.args, sig.defaults or [])
102
103 else:
104 from inspect import signature; signature
105
106 def format_signature(sig):
107 return str(sig)
108
109 def get_param_info(sig):
110 defaults = []
111 for param in sig.parameters.values():
112 if param.default is not param.empty:
113 defaults.append(param.default)
114 return list(sig.parameters), defaults
115
[end of bokeh/util/future.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bokeh/util/future.py b/bokeh/util/future.py
--- a/bokeh/util/future.py
+++ b/bokeh/util/future.py
@@ -82,8 +82,9 @@
elif isinstance(func, partial):
sig = getargspec(func.func)
if 'self' in sig.args: sig.args.remove('self')
- for name in func.keywords.keys():
- sig.args.remove(name)
+ if func.keywords is not None:
+ for name in func.keywords.keys():
+ sig.args.remove(name)
for val in func.args:
del sig.args[0]
return sig
|
{"golden_diff": "diff --git a/bokeh/util/future.py b/bokeh/util/future.py\n--- a/bokeh/util/future.py\n+++ b/bokeh/util/future.py\n@@ -82,8 +82,9 @@\n elif isinstance(func, partial):\n sig = getargspec(func.func)\n if 'self' in sig.args: sig.args.remove('self')\n- for name in func.keywords.keys():\n- sig.args.remove(name)\n+ if func.keywords is not None:\n+ for name in func.keywords.keys():\n+ sig.args.remove(name)\n for val in func.args:\n del sig.args[0]\n return sig\n", "issue": "Bokeh 0.12.6 incompatible with Python 2.7.9?\nHi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.\r\n\r\nI have a virtual environment with the following contents:\r\n```\r\n> pip freeze\r\nbackports-abc==0.5\r\nbkcharts==0.2\r\nbokeh==0.12.6 <--------------\r\nboto3==1.4.4\r\nbotocore==1.5.71\r\ncertifi==2017.4.17\r\nchardet==3.0.4\r\nclick==6.7\r\ncloudpickle==0.3.1\r\ndask==0.15.0 <--------------\r\ndistributed==1.17.1 <--------------\r\ndocutils==0.13.1\r\nfutures==3.1.1\r\ngraphviz==0.7.1\r\nHeapDict==1.0.0\r\nidna==2.5\r\nJinja2==2.9.6\r\njmespath==0.9.3\r\nlocket==0.2.0\r\nMarkupSafe==1.0\r\nmsgpack-python==0.4.8\r\nnumpy==1.13.0\r\npandas==0.20.2\r\npartd==0.3.8\r\npsutil==5.2.2\r\npython-dateutil==2.6.0\r\npytz==2017.2\r\nPyYAML==3.12\r\nrequests==2.18.1\r\ns3fs==0.1.1\r\ns3transfer==0.1.10\r\nsingledispatch==3.4.0.3\r\nsix==1.10.0\r\nsortedcontainers==1.5.7\r\ntblib==1.3.2\r\ntoolz==0.8.2\r\ntornado==4.5.1\r\nurllib3==1.21.1\r\nzict==0.1.2\r\n```\r\nWhen I try to start the dask scheduler, I get the following output:\r\n```\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)\r\nTraceback (most recent call last):\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py\", line 404, in start_services\r\n service = v(self, io_loop=self.loop)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py\", line 995, in __init__\r\n scheduler)))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py\", line 11, in __init__\r\n _check_callback(func, ('doc',))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py\", line 12, in _check_callback\r\n sig = signature(callback)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py\", line 85, in signature\r\n for name in func.keywords.keys():\r\nAttributeError: 'NoneType' object has no attribute 'keys'\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\nI can fix this problem by downgrading Bokeh to 0.12.5:\r\n```\r\n> pip install -U bokeh==0.12.5\r\n...\r\nInstalling collected packages: bokeh\r\n Found existing installation: bokeh 0.12.6\r\n Uninstalling bokeh-0.12.6:\r\n Successfully uninstalled bokeh-0.12.6\r\n Running setup.py install for bokeh ... 
done\r\nSuccessfully installed bokeh-0.12.5\r\n\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - bokeh at: 0.0.0.0:8787\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\n\r\nI was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.\r\n\nBokeh 0.12.6 incompatible with Python 2.7.9?\nHi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.\r\n\r\nI have a virtual environment with the following contents:\r\n```\r\n> pip freeze\r\nbackports-abc==0.5\r\nbkcharts==0.2\r\nbokeh==0.12.6 <--------------\r\nboto3==1.4.4\r\nbotocore==1.5.71\r\ncertifi==2017.4.17\r\nchardet==3.0.4\r\nclick==6.7\r\ncloudpickle==0.3.1\r\ndask==0.15.0 <--------------\r\ndistributed==1.17.1 <--------------\r\ndocutils==0.13.1\r\nfutures==3.1.1\r\ngraphviz==0.7.1\r\nHeapDict==1.0.0\r\nidna==2.5\r\nJinja2==2.9.6\r\njmespath==0.9.3\r\nlocket==0.2.0\r\nMarkupSafe==1.0\r\nmsgpack-python==0.4.8\r\nnumpy==1.13.0\r\npandas==0.20.2\r\npartd==0.3.8\r\npsutil==5.2.2\r\npython-dateutil==2.6.0\r\npytz==2017.2\r\nPyYAML==3.12\r\nrequests==2.18.1\r\ns3fs==0.1.1\r\ns3transfer==0.1.10\r\nsingledispatch==3.4.0.3\r\nsix==1.10.0\r\nsortedcontainers==1.5.7\r\ntblib==1.3.2\r\ntoolz==0.8.2\r\ntornado==4.5.1\r\nurllib3==1.21.1\r\nzict==0.1.2\r\n```\r\nWhen I try to start the dask scheduler, I get the following output:\r\n```\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)\r\nTraceback (most recent call last):\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py\", line 404, in start_services\r\n service = v(self, io_loop=self.loop)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py\", line 995, in __init__\r\n scheduler)))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py\", line 11, in __init__\r\n _check_callback(func, ('doc',))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py\", line 12, in _check_callback\r\n sig = signature(callback)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py\", line 85, in signature\r\n for name in func.keywords.keys():\r\nAttributeError: 'NoneType' object has no attribute 'keys'\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\nI can fix this problem by downgrading Bokeh to 0.12.5:\r\n```\r\n> pip install -U bokeh==0.12.5\r\n...\r\nInstalling 
collected packages: bokeh\r\n Found existing installation: bokeh 0.12.6\r\n Uninstalling bokeh-0.12.6:\r\n Successfully uninstalled bokeh-0.12.6\r\n Running setup.py install for bokeh ... done\r\nSuccessfully installed bokeh-0.12.5\r\n\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - bokeh at: 0.0.0.0:8787\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\n\r\nI was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.\r\n\n", "before_files": [{"content": "''' Utilities for Py2/Py3 interop.\n\n'''\n\nimport sys\n\ndef with_metaclass(meta, *bases):\n \"\"\" Add metaclasses in both Python 2 and Python 3.\n\n Function from jinja2/_compat.py. License: BSD.\n\n Use it like this::\n\n class BaseForm(object):\n pass\n\n class FormType(type):\n pass\n\n class Form(with_metaclass(FormType, BaseForm)):\n pass\n\n This requires a bit of explanation: the basic idea is to make a\n dummy metaclass for one level of class instantiation that replaces\n itself with the actual metaclass. Because of internal type checks\n we also need to make sure that we downgrade the custom metaclass\n for one level to something closer to type (that's why __call__ and\n __init__ comes back from type etc.).\n\n This has the advantage over six.with_metaclass of not introducing\n dummy classes into the final MRO.\n \"\"\"\n class metaclass(meta):\n __call__ = type.__call__\n __init__ = type.__init__\n def __new__(cls, name, this_bases, d):\n if this_bases is None:\n return type.__new__(cls, name, (), d)\n return meta(name, bases, d)\n return metaclass('temporary_class', None, {})\n\n\n# There is a problem with using @wraps decorator in combination with functools.partial.\n# This issue is not present in Python 3.\n# This redefinition will be triggered only if issue affects user,\n# otherwise regular definition of @wraps will be used.\n#\n# this code snippet was originally posted in following stack overflow discussion:\n# http://stackoverflow.com/a/28752007\n\nfrom functools import wraps, partial, WRAPPER_ASSIGNMENTS\n\ntry:\n wraps(partial(wraps))(wraps)\nexcept AttributeError:\n @wraps(wraps)\n def wraps(obj, attr_names=WRAPPER_ASSIGNMENTS, wraps=wraps):\n return wraps(obj, assigned=(name for name in attr_names if hasattr(obj, name)))\n\ndel partial, WRAPPER_ASSIGNMENTS\n\n\n# inspect.getargspec and inspect.formatargspec were deprecated in Python 3.5\n# in favor of the newer inspect.signature introspection\n\nif sys.version_info[:2] < (3, 4):\n\n def signature(func):\n # The modifications in this function are to make results more in line\n # with Python 3, i.e. self is not included in bound methods, supplied\n # parameters are not reported in partial, etc. 
This simplifies the\n # downstream code considerably.\n from inspect import getargspec, isfunction, ismethod\n from functools import partial\n\n if isfunction(func) or ismethod(func):\n sig = getargspec(func)\n if ismethod(func):\n sig.args.remove('self')\n return sig\n\n elif isinstance(func, partial):\n sig = getargspec(func.func)\n if 'self' in sig.args: sig.args.remove('self')\n for name in func.keywords.keys():\n sig.args.remove(name)\n for val in func.args:\n del sig.args[0]\n return sig\n\n else:\n sig = getargspec(func.__call__)\n sig.args.remove('self')\n return sig\n\n def format_signature(sig):\n from inspect import formatargspec\n return formatargspec(*sig)\n\n def get_param_info(sig):\n return (sig.args, sig.defaults or [])\n\nelse:\n from inspect import signature; signature\n\n def format_signature(sig):\n return str(sig)\n\n def get_param_info(sig):\n defaults = []\n for param in sig.parameters.values():\n if param.default is not param.empty:\n defaults.append(param.default)\n return list(sig.parameters), defaults\n", "path": "bokeh/util/future.py"}]}
| 3,978 | 142 |
gh_patches_debug_34666 | rasdani/github-patches | git_diff | iterative__dvc-3891 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plots: replace --show-json with --show-vega
Requested by @dmpetrov for cml. `--show-vega` should require a target and return a filled vega template. `--show-json` is not needed, let's delete it.
</issue>
<code>
[start of dvc/command/plots.py]
1 import argparse
2 import logging
3 import os
4
5 from dvc.command.base import CmdBase, append_doc_link, fix_subparsers
6 from dvc.exceptions import DvcException
7 from dvc.utils import format_link
8
9 logger = logging.getLogger(__name__)
10
11 PAGE_HTML = """<!DOCTYPE html>
12 <html>
13 <head>
14 <title>DVC Plot</title>
15 <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
16 <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
17 <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
18 </head>
19 <body>
20 {divs}
21 </body>
22 </html>"""
23
24 DIV_HTML = """<div id = "{id}"></div>
25 <script type = "text/javascript">
26 var spec = {vega_json};
27 vegaEmbed('#{id}', spec);
28 </script>"""
29
30
31 class CmdPlots(CmdBase):
32 def _func(self, *args, **kwargs):
33 raise NotImplementedError
34
35 def run(self):
36 try:
37 plots = self._func(
38 targets=self.args.targets,
39 template=self.args.template,
40 x_field=self.args.x,
41 y_field=self.args.y,
42 csv_header=not self.args.no_csv_header,
43 title=self.args.title,
44 x_title=self.args.xlab,
45 y_title=self.args.ylab,
46 )
47
48 if self.args.show_json:
49 import json
50
51 logger.info(json.dumps(plots))
52 return 0
53
54 divs = [
55 DIV_HTML.format(id=f"plot{i}", vega_json=plot)
56 for i, plot in enumerate(plots.values())
57 ]
58 html = PAGE_HTML.format(divs="\n".join(divs))
59 path = self.args.out or "plots.html"
60
61 with open(path, "w") as fobj:
62 fobj.write(html)
63
64 logger.info(
65 "file://{}".format(os.path.join(self.repo.root_dir, path))
66 )
67
68 except DvcException:
69 logger.exception("")
70 return 1
71
72 return 0
73
74
75 class CmdPlotsShow(CmdPlots):
76 def _func(self, *args, **kwargs):
77 return self.repo.plots.show(*args, **kwargs)
78
79
80 class CmdPlotsDiff(CmdPlots):
81 def _func(self, *args, **kwargs):
82 return self.repo.plots.diff(*args, revs=self.args.revisions, **kwargs)
83
84
85 def add_parser(subparsers, parent_parser):
86 PLOTS_HELP = (
87 "Generating plots for metrics stored in structured files "
88 "(JSON, CSV, TSV)."
89 )
90
91 plots_parser = subparsers.add_parser(
92 "plots",
93 parents=[parent_parser],
94 description=append_doc_link(PLOTS_HELP, "plots"),
95 help=PLOTS_HELP,
96 formatter_class=argparse.RawDescriptionHelpFormatter,
97 )
98 plots_subparsers = plots_parser.add_subparsers(
99 dest="cmd",
100 help="Use `dvc plots CMD --help` to display command-specific help.",
101 )
102
103 fix_subparsers(plots_subparsers)
104
105 SHOW_HELP = "Generate a plots image file from a metrics file."
106 plots_show_parser = plots_subparsers.add_parser(
107 "show",
108 parents=[parent_parser],
109 description=append_doc_link(SHOW_HELP, "plots/show"),
110 help=SHOW_HELP,
111 formatter_class=argparse.RawDescriptionHelpFormatter,
112 )
113 plots_show_parser.add_argument(
114 "-t",
115 "--template",
116 nargs="?",
117 default=None,
118 help=(
119 "Special JSON or HTML schema file to inject with the data. "
120 "See {}".format(
121 format_link("https://man.dvc.org/plots#plot-templates")
122 )
123 ),
124 )
125 plots_show_parser.add_argument(
126 "-o", "--out", default=None, help="Destination path to save plots to.",
127 )
128 plots_show_parser.add_argument(
129 "-x", default=None, help="Field name for x axis."
130 )
131 plots_show_parser.add_argument(
132 "-y", default=None, help="Field name for y axis."
133 )
134 plots_show_parser.add_argument(
135 "--no-csv-header",
136 action="store_true",
137 default=False,
138 help="Required when CSV or TSV datafile does not have a header.",
139 )
140 plots_show_parser.add_argument(
141 "--show-json",
142 action="store_true",
143 default=False,
144 help="Show output in JSON format.",
145 )
146 plots_show_parser.add_argument("--title", default=None, help="Plot title.")
147 plots_show_parser.add_argument(
148 "--xlab", default=None, help="X axis title."
149 )
150 plots_show_parser.add_argument(
151 "--ylab", default=None, help="Y axis title."
152 )
153 plots_show_parser.add_argument(
154 "targets",
155 nargs="*",
156 help="Metrics files to visualize. Shows all plots by default.",
157 )
158 plots_show_parser.set_defaults(func=CmdPlotsShow)
159
160 PLOTS_DIFF_HELP = (
161 "Plot differences in metrics between commits in the DVC "
162 "repository, or between the last commit and the workspace."
163 )
164 plots_diff_parser = plots_subparsers.add_parser(
165 "diff",
166 parents=[parent_parser],
167 description=append_doc_link(PLOTS_DIFF_HELP, "plots/diff"),
168 help=PLOTS_DIFF_HELP,
169 formatter_class=argparse.RawDescriptionHelpFormatter,
170 )
171 plots_diff_parser.add_argument(
172 "-t",
173 "--template",
174 nargs="?",
175 default=None,
176 help=(
177 "Special JSON or HTML schema file to inject with the data. "
178 "See {}".format(
179 format_link("https://man.dvc.org/plots#plot-templates")
180 )
181 ),
182 )
183 plots_diff_parser.add_argument(
184 "--targets",
185 nargs="*",
186 help="Metrics file to visualize. Shows all plots by default.",
187 )
188 plots_diff_parser.add_argument(
189 "-o", "--out", default=None, help="Destination path to save plots to.",
190 )
191 plots_diff_parser.add_argument(
192 "-x", default=None, help="Field name for x axis."
193 )
194 plots_diff_parser.add_argument(
195 "-y", default=None, help="Field name for y axis."
196 )
197 plots_diff_parser.add_argument(
198 "--no-csv-header",
199 action="store_true",
200 default=False,
201 help="Provided CSV ot TSV datafile does not have a header.",
202 )
203 plots_diff_parser.add_argument(
204 "--show-json",
205 action="store_true",
206 default=False,
207 help="Show output in JSON format.",
208 )
209 plots_diff_parser.add_argument("--title", default=None, help="Plot title.")
210 plots_diff_parser.add_argument(
211 "--xlab", default=None, help="X axis title."
212 )
213 plots_diff_parser.add_argument(
214 "--ylab", default=None, help="Y axis title."
215 )
216 plots_diff_parser.add_argument(
217 "revisions", nargs="*", default=None, help="Git commits to plot from",
218 )
219 plots_diff_parser.set_defaults(func=CmdPlotsDiff)
220
[end of dvc/command/plots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/command/plots.py b/dvc/command/plots.py
--- a/dvc/command/plots.py
+++ b/dvc/command/plots.py
@@ -33,6 +33,16 @@
raise NotImplementedError
def run(self):
+ if self.args.show_vega:
+ if not self.args.targets:
+ logger.error("please specify a target for `--show-vega`")
+ return 1
+ if len(self.args.targets) > 1:
+ logger.error(
+ "you can only specify one target for `--show-vega`"
+ )
+ return 1
+
try:
plots = self._func(
targets=self.args.targets,
@@ -45,10 +55,9 @@
y_title=self.args.ylab,
)
- if self.args.show_json:
- import json
-
- logger.info(json.dumps(plots))
+ if self.args.show_vega:
+ target = self.args.targets[0]
+ logger.info(plots[target])
return 0
divs = [
@@ -138,10 +147,10 @@
help="Required when CSV or TSV datafile does not have a header.",
)
plots_show_parser.add_argument(
- "--show-json",
+ "--show-vega",
action="store_true",
default=False,
- help="Show output in JSON format.",
+ help="Show output in VEGA format.",
)
plots_show_parser.add_argument("--title", default=None, help="Plot title.")
plots_show_parser.add_argument(
@@ -201,10 +210,10 @@
help="Provided CSV ot TSV datafile does not have a header.",
)
plots_diff_parser.add_argument(
- "--show-json",
+ "--show-vega",
action="store_true",
default=False,
- help="Show output in JSON format.",
+ help="Show output in VEGA format.",
)
plots_diff_parser.add_argument("--title", default=None, help="Plot title.")
plots_diff_parser.add_argument(
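
A note on the behavioural change in the diff above: unlike the removed `--show-json`, the new `--show-vega` flag only makes sense with exactly one target, so `run()` now rejects zero or multiple targets before rendering anything. The standalone sketch below mirrors that check outside the dvc codebase — the function name, the `print` calls, and the 0/1 return convention are illustrative stand-ins for `CmdPlots.run()` and its logger, not dvc API.

```python
def check_show_vega(show_vega, targets):
    """Mirror of the target validation the patch adds to CmdPlots.run().

    Returns 0 when the arguments are acceptable, 1 otherwise (the same
    exit-code convention the command methods use).
    """
    if not show_vega:
        return 0  # nothing extra to validate for plain HTML output
    if not targets:
        print("please specify a target for `--show-vega`")
        return 1
    if len(targets) > 1:
        print("you can only specify one target for `--show-vega`")
        return 1
    return 0


# e.g. `dvc plots show --show-vega metrics.json` corresponds to:
assert check_show_vega(True, ["metrics.json"]) == 0
# ...while omitting the target or passing several is rejected:
assert check_show_vega(True, []) == 1
assert check_show_vega(True, ["a.json", "b.json"]) == 1
```

When the check passes, the patched command prints the filled Vega spec for that single target (`plots[target]`) instead of dumping the whole plots dict as JSON.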
|
{"golden_diff": "diff --git a/dvc/command/plots.py b/dvc/command/plots.py\n--- a/dvc/command/plots.py\n+++ b/dvc/command/plots.py\n@@ -33,6 +33,16 @@\n raise NotImplementedError\n \n def run(self):\n+ if self.args.show_vega:\n+ if not self.args.targets:\n+ logger.error(\"please specify a target for `--show-vega`\")\n+ return 1\n+ if len(self.args.targets) > 1:\n+ logger.error(\n+ \"you can only specify one target for `--show-vega`\"\n+ )\n+ return 1\n+\n try:\n plots = self._func(\n targets=self.args.targets,\n@@ -45,10 +55,9 @@\n y_title=self.args.ylab,\n )\n \n- if self.args.show_json:\n- import json\n-\n- logger.info(json.dumps(plots))\n+ if self.args.show_vega:\n+ target = self.args.targets[0]\n+ logger.info(plots[target])\n return 0\n \n divs = [\n@@ -138,10 +147,10 @@\n help=\"Required when CSV or TSV datafile does not have a header.\",\n )\n plots_show_parser.add_argument(\n- \"--show-json\",\n+ \"--show-vega\",\n action=\"store_true\",\n default=False,\n- help=\"Show output in JSON format.\",\n+ help=\"Show output in VEGA format.\",\n )\n plots_show_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_show_parser.add_argument(\n@@ -201,10 +210,10 @@\n help=\"Provided CSV ot TSV datafile does not have a header.\",\n )\n plots_diff_parser.add_argument(\n- \"--show-json\",\n+ \"--show-vega\",\n action=\"store_true\",\n default=False,\n- help=\"Show output in JSON format.\",\n+ help=\"Show output in VEGA format.\",\n )\n plots_diff_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_diff_parser.add_argument(\n", "issue": "plots: replace --show-json with --show-vega\nRequested by @dmpetrov for cml. `--show-vega` should require a target and return a filled vega template. `--show-json` is not needed, let's delete it.\n", "before_files": [{"content": "import argparse\nimport logging\nimport os\n\nfrom dvc.command.base import CmdBase, append_doc_link, fix_subparsers\nfrom dvc.exceptions import DvcException\nfrom dvc.utils import format_link\n\nlogger = logging.getLogger(__name__)\n\nPAGE_HTML = \"\"\"<!DOCTYPE html>\n<html>\n<head>\n <title>DVC Plot</title>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n</head>\n<body>\n {divs}\n</body>\n</html>\"\"\"\n\nDIV_HTML = \"\"\"<div id = \"{id}\"></div>\n<script type = \"text/javascript\">\n var spec = {vega_json};\n vegaEmbed('#{id}', spec);\n</script>\"\"\"\n\n\nclass CmdPlots(CmdBase):\n def _func(self, *args, **kwargs):\n raise NotImplementedError\n\n def run(self):\n try:\n plots = self._func(\n targets=self.args.targets,\n template=self.args.template,\n x_field=self.args.x,\n y_field=self.args.y,\n csv_header=not self.args.no_csv_header,\n title=self.args.title,\n x_title=self.args.xlab,\n y_title=self.args.ylab,\n )\n\n if self.args.show_json:\n import json\n\n logger.info(json.dumps(plots))\n return 0\n\n divs = [\n DIV_HTML.format(id=f\"plot{i}\", vega_json=plot)\n for i, plot in enumerate(plots.values())\n ]\n html = PAGE_HTML.format(divs=\"\\n\".join(divs))\n path = self.args.out or \"plots.html\"\n\n with open(path, \"w\") as fobj:\n fobj.write(html)\n\n logger.info(\n \"file://{}\".format(os.path.join(self.repo.root_dir, path))\n )\n\n except DvcException:\n logger.exception(\"\")\n return 1\n\n return 0\n\n\nclass CmdPlotsShow(CmdPlots):\n def _func(self, *args, **kwargs):\n return self.repo.plots.show(*args, 
**kwargs)\n\n\nclass CmdPlotsDiff(CmdPlots):\n def _func(self, *args, **kwargs):\n return self.repo.plots.diff(*args, revs=self.args.revisions, **kwargs)\n\n\ndef add_parser(subparsers, parent_parser):\n PLOTS_HELP = (\n \"Generating plots for metrics stored in structured files \"\n \"(JSON, CSV, TSV).\"\n )\n\n plots_parser = subparsers.add_parser(\n \"plots\",\n parents=[parent_parser],\n description=append_doc_link(PLOTS_HELP, \"plots\"),\n help=PLOTS_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_subparsers = plots_parser.add_subparsers(\n dest=\"cmd\",\n help=\"Use `dvc plots CMD --help` to display command-specific help.\",\n )\n\n fix_subparsers(plots_subparsers)\n\n SHOW_HELP = \"Generate a plots image file from a metrics file.\"\n plots_show_parser = plots_subparsers.add_parser(\n \"show\",\n parents=[parent_parser],\n description=append_doc_link(SHOW_HELP, \"plots/show\"),\n help=SHOW_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_show_parser.add_argument(\n \"-t\",\n \"--template\",\n nargs=\"?\",\n default=None,\n help=(\n \"Special JSON or HTML schema file to inject with the data. \"\n \"See {}\".format(\n format_link(\"https://man.dvc.org/plots#plot-templates\")\n )\n ),\n )\n plots_show_parser.add_argument(\n \"-o\", \"--out\", default=None, help=\"Destination path to save plots to.\",\n )\n plots_show_parser.add_argument(\n \"-x\", default=None, help=\"Field name for x axis.\"\n )\n plots_show_parser.add_argument(\n \"-y\", default=None, help=\"Field name for y axis.\"\n )\n plots_show_parser.add_argument(\n \"--no-csv-header\",\n action=\"store_true\",\n default=False,\n help=\"Required when CSV or TSV datafile does not have a header.\",\n )\n plots_show_parser.add_argument(\n \"--show-json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n plots_show_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_show_parser.add_argument(\n \"--xlab\", default=None, help=\"X axis title.\"\n )\n plots_show_parser.add_argument(\n \"--ylab\", default=None, help=\"Y axis title.\"\n )\n plots_show_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"Metrics files to visualize. Shows all plots by default.\",\n )\n plots_show_parser.set_defaults(func=CmdPlotsShow)\n\n PLOTS_DIFF_HELP = (\n \"Plot differences in metrics between commits in the DVC \"\n \"repository, or between the last commit and the workspace.\"\n )\n plots_diff_parser = plots_subparsers.add_parser(\n \"diff\",\n parents=[parent_parser],\n description=append_doc_link(PLOTS_DIFF_HELP, \"plots/diff\"),\n help=PLOTS_DIFF_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_diff_parser.add_argument(\n \"-t\",\n \"--template\",\n nargs=\"?\",\n default=None,\n help=(\n \"Special JSON or HTML schema file to inject with the data. \"\n \"See {}\".format(\n format_link(\"https://man.dvc.org/plots#plot-templates\")\n )\n ),\n )\n plots_diff_parser.add_argument(\n \"--targets\",\n nargs=\"*\",\n help=\"Metrics file to visualize. 
Shows all plots by default.\",\n )\n plots_diff_parser.add_argument(\n \"-o\", \"--out\", default=None, help=\"Destination path to save plots to.\",\n )\n plots_diff_parser.add_argument(\n \"-x\", default=None, help=\"Field name for x axis.\"\n )\n plots_diff_parser.add_argument(\n \"-y\", default=None, help=\"Field name for y axis.\"\n )\n plots_diff_parser.add_argument(\n \"--no-csv-header\",\n action=\"store_true\",\n default=False,\n help=\"Provided CSV ot TSV datafile does not have a header.\",\n )\n plots_diff_parser.add_argument(\n \"--show-json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n plots_diff_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_diff_parser.add_argument(\n \"--xlab\", default=None, help=\"X axis title.\"\n )\n plots_diff_parser.add_argument(\n \"--ylab\", default=None, help=\"Y axis title.\"\n )\n plots_diff_parser.add_argument(\n \"revisions\", nargs=\"*\", default=None, help=\"Git commits to plot from\",\n )\n plots_diff_parser.set_defaults(func=CmdPlotsDiff)\n", "path": "dvc/command/plots.py"}]}
| 2,683 | 470 |
gh_patches_debug_4117
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-6178
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MacOS: Clipboard nspaste make app crash when copying text
<!--
The issue tracker is a tool to address bugs.
Please use the #support Discord channel at https://chat.kivy.org/ or Stack Overflow for
support questions, more information at https://git.io/vM1yQ.
Before opening a new issue, make sure you do the following:
* check that your issue isn't already filed: https://git.io/vM1iE
* prepare a short, runnable example that reproduces the issue
* reproduce the problem with the latest development version of Kivy
* double-check that the issue is indeed a bug and not a support request
-->
### Versions
* Python: 3.7.1
* OS: MacOS 10.13.6
* Kivy: 1.10.1
* Kivy installation method: pypi
### Description
When I try to copy text in a TextInput, the app crashes. Pasting works fine.
### Code and Logs
```log
Traceback (most recent call last):
File "main.py", line 56, in <module>
app.run()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/app.py", line 826, in run
runTouchApp()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/base.py", line 502, in runTouchApp
EventLoop.window.mainloop()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py", line 727, in mainloop
self._mainloop()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py", line 662, in _mainloop
self.modifiers):
File "kivy/_event.pyx", line 703, in kivy._event.EventDispatcher.dispatch
File "kivy/_event.pyx", line 1214, in kivy._event.EventObservers.dispatch
File "kivy/_event.pyx", line 1138, in kivy._event.EventObservers._dispatch
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/__init__.py", line 162, in _on_window_key_down
return self.dispatch('on_key_down', keycode, text, modifiers)
File "kivy/_event.pyx", line 703, in kivy._event.EventDispatcher.dispatch
File "kivy/_event.pyx", line 1214, in kivy._event.EventObservers.dispatch
File "kivy/_event.pyx", line 1138, in kivy._event.EventObservers._dispatch
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py", line 2434, in keyboard_on_key_down
self.copy()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py", line 1727, in copy
return Clipboard.copy(self.selection_text)
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py", line 73, in copy
self._copy(data)
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py", line 87, in _copy
self.put(data, self._clip_mime_type)
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/clipboard_nspaste.py", line 40, in put
pb.writeObjects_([data])
File "pyobjus/pyobjus.pyx", line 393, in pyobjus.ObjcMethod.__call__
File "pyobjus/pyobjus_conversions.pxi", line 617, in pyobjus.convert_py_arg_to_cy
File "pyobjus/pyobjus_conversions.pxi", line 441, in pyobjus.convert_py_to_nsobject
File "pyobjus/pyobjus.pyx", line 393, in pyobjus.ObjcMethod.__call__
File "pyobjus/pyobjus_conversions.pxi", line 617, in pyobjus.convert_py_arg_to_cy
File "pyobjus/pyobjus_conversions.pxi", line 452, in pyobjus.convert_py_to_nsobject
File "pyobjus/pyobjus.pyx", line 974, in pyobjus.objc_create_delegate
pyobjus.ObjcException: You've passed b'kivyproject' as delegate, but there is no @protocol methods declared.
```
</issue>
<code>
[start of kivy/core/clipboard/clipboard_nspaste.py]
1 '''
2 Clipboard OsX: implementation of clipboard using Appkit
3 '''
4
5 __all__ = ('ClipboardNSPaste', )
6
7 from kivy.core.clipboard import ClipboardBase
8 from kivy.utils import platform
9
10 if platform != 'macosx':
11 raise SystemError('Unsupported platform for appkit clipboard.')
12 try:
13 from pyobjus import autoclass
14 from pyobjus.dylib_manager import load_framework, INCLUDE
15 load_framework(INCLUDE.AppKit)
16 except ImportError:
17 raise SystemError('Pyobjus not installed. Please run the following'
18 ' command to install it. `pip install --user pyobjus`')
19
20 NSPasteboard = autoclass('NSPasteboard')
21 NSString = autoclass('NSString')
22
23
24 class ClipboardNSPaste(ClipboardBase):
25
26 def __init__(self):
27 super(ClipboardNSPaste, self).__init__()
28 self._clipboard = NSPasteboard.generalPasteboard()
29
30 def get(self, mimetype='text/plain'):
31 pb = self._clipboard
32 data = pb.stringForType_('public.utf8-plain-text')
33 if not data:
34 return ""
35 return data.UTF8String()
36
37 def put(self, data, mimetype='text/plain'):
38 pb = self._clipboard
39 pb.clearContents()
40 pb.writeObjects_([data])
41
42 def get_types(self):
43 return list('text/plain',)
44
[end of kivy/core/clipboard/clipboard_nspaste.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kivy/core/clipboard/clipboard_nspaste.py b/kivy/core/clipboard/clipboard_nspaste.py
--- a/kivy/core/clipboard/clipboard_nspaste.py
+++ b/kivy/core/clipboard/clipboard_nspaste.py
@@ -37,7 +37,8 @@
def put(self, data, mimetype='text/plain'):
pb = self._clipboard
pb.clearContents()
- pb.writeObjects_([data])
+ utf8 = NSString.alloc().initWithUTF8String_(data)
+ pb.setString_forType_(utf8, 'public.utf8-plain-text')
def get_types(self):
return list('text/plain',)
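
Reading the fix together with the traceback in the issue: `pb.writeObjects_([data])` hands pyobjus a plain Python object to convert, and that conversion path ends in `objc_create_delegate`, which raises because the string has no `@protocol` methods. The patched `put()` sidesteps object conversion by building an `NSString` explicitly and using the pasteboard's string API. Below is a minimal sketch of that path, runnable only on macOS with pyobjus installed; the `copy_to_clipboard` helper name is ours, but every call it makes comes from the module above and the diff.

```python
from pyobjus import autoclass
from pyobjus.dylib_manager import load_framework, INCLUDE

load_framework(INCLUDE.AppKit)
NSPasteboard = autoclass('NSPasteboard')
NSString = autoclass('NSString')


def copy_to_clipboard(data):
    # Same sequence as the patched ClipboardNSPaste.put().
    pb = NSPasteboard.generalPasteboard()
    pb.clearContents()
    # Wrap the Python string in an NSString first; passing the raw value to
    # writeObjects_ is what triggered the delegate-creation error.
    utf8 = NSString.alloc().initWithUTF8String_(data)
    pb.setString_forType_(utf8, 'public.utf8-plain-text')


copy_to_clipboard('hello from kivy')
```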
|
{"golden_diff": "diff --git a/kivy/core/clipboard/clipboard_nspaste.py b/kivy/core/clipboard/clipboard_nspaste.py\n--- a/kivy/core/clipboard/clipboard_nspaste.py\n+++ b/kivy/core/clipboard/clipboard_nspaste.py\n@@ -37,7 +37,8 @@\n def put(self, data, mimetype='text/plain'):\n pb = self._clipboard\n pb.clearContents()\n- pb.writeObjects_([data])\n+ utf8 = NSString.alloc().initWithUTF8String_(data)\n+ pb.setString_forType_(utf8, 'public.utf8-plain-text')\n \n def get_types(self):\n return list('text/plain',)\n", "issue": "MacOS: Clipboard nspaste make app crash when copying text\n<!--\r\nThe issue tracker is a tool to address bugs.\r\nPlease use the #support Discord channel at https://chat.kivy.org/ or Stack Overflow for\r\nsupport questions, more information at https://git.io/vM1yQ.\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://git.io/vM1iE\r\n * prepare a short, runnable example that reproduces the issue\r\n * reproduce the problem with the latest development version of Kivy\r\n * double-check that the issue is indeed a bug and not a support request\r\n-->\r\n\r\n### Versions\r\n\r\n* Python: 3.7.1\r\n* OS: MacOS 10.13.6\r\n* Kivy: 1.10.1\r\n* Kivy installation method: pypi\r\n\r\n### Description\r\n\r\nWhen I try copy text in TextInput, this make app crash. But paste is OK.\r\n\r\n### Code and Logs\r\n\r\n```log\r\nTraceback (most recent call last):\r\n File \"main.py\", line 56, in <module>\r\n app.run()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/app.py\", line 826, in run\r\n runTouchApp()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/base.py\", line 502, in runTouchApp\r\n EventLoop.window.mainloop()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py\", line 727, in mainloop\r\n self._mainloop()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py\", line 662, in _mainloop\r\n self.modifiers):\r\n File \"kivy/_event.pyx\", line 703, in kivy._event.EventDispatcher.dispatch\r\n File \"kivy/_event.pyx\", line 1214, in kivy._event.EventObservers.dispatch\r\n File \"kivy/_event.pyx\", line 1138, in kivy._event.EventObservers._dispatch\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/__init__.py\", line 162, in _on_window_key_down\r\n return self.dispatch('on_key_down', keycode, text, modifiers)\r\n File \"kivy/_event.pyx\", line 703, in kivy._event.EventDispatcher.dispatch\r\n File \"kivy/_event.pyx\", line 1214, in kivy._event.EventObservers.dispatch\r\n File \"kivy/_event.pyx\", line 1138, in kivy._event.EventObservers._dispatch\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py\", line 2434, in keyboard_on_key_down\r\n self.copy()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py\", line 1727, in copy\r\n return Clipboard.copy(self.selection_text)\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py\", line 73, in copy\r\n self._copy(data)\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py\", line 87, in _copy\r\n self.put(data, self._clip_mime_type)\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/clipboard_nspaste.py\", line 40, in put\r\n pb.writeObjects_([data])\r\n File \"pyobjus/pyobjus.pyx\", line 393, in pyobjus.ObjcMethod.__call__\r\n File \"pyobjus/pyobjus_conversions.pxi\", 
line 617, in pyobjus.convert_py_arg_to_cy\r\n File \"pyobjus/pyobjus_conversions.pxi\", line 441, in pyobjus.convert_py_to_nsobject\r\n File \"pyobjus/pyobjus.pyx\", line 393, in pyobjus.ObjcMethod.__call__\r\n File \"pyobjus/pyobjus_conversions.pxi\", line 617, in pyobjus.convert_py_arg_to_cy\r\n File \"pyobjus/pyobjus_conversions.pxi\", line 452, in pyobjus.convert_py_to_nsobject\r\n File \"pyobjus/pyobjus.pyx\", line 974, in pyobjus.objc_create_delegate\r\n pyobjus.ObjcException: You've passed b'kivyproject' as delegate, but there is no @protocol methods declared.\r\n```\r\n\n", "before_files": [{"content": "'''\nClipboard OsX: implementation of clipboard using Appkit\n'''\n\n__all__ = ('ClipboardNSPaste', )\n\nfrom kivy.core.clipboard import ClipboardBase\nfrom kivy.utils import platform\n\nif platform != 'macosx':\n raise SystemError('Unsupported platform for appkit clipboard.')\ntry:\n from pyobjus import autoclass\n from pyobjus.dylib_manager import load_framework, INCLUDE\n load_framework(INCLUDE.AppKit)\nexcept ImportError:\n raise SystemError('Pyobjus not installed. Please run the following'\n ' command to install it. `pip install --user pyobjus`')\n\nNSPasteboard = autoclass('NSPasteboard')\nNSString = autoclass('NSString')\n\n\nclass ClipboardNSPaste(ClipboardBase):\n\n def __init__(self):\n super(ClipboardNSPaste, self).__init__()\n self._clipboard = NSPasteboard.generalPasteboard()\n\n def get(self, mimetype='text/plain'):\n pb = self._clipboard\n data = pb.stringForType_('public.utf8-plain-text')\n if not data:\n return \"\"\n return data.UTF8String()\n\n def put(self, data, mimetype='text/plain'):\n pb = self._clipboard\n pb.clearContents()\n pb.writeObjects_([data])\n\n def get_types(self):\n return list('text/plain',)\n", "path": "kivy/core/clipboard/clipboard_nspaste.py"}]}
| 2,005 | 148 |
gh_patches_debug_33844
|
rasdani/github-patches
|
git_diff
|
getredash__redash-4354
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make Cypress tests work with [email protected]
Running our tests with [email protected] doesn't work. We need to figure out what happened; until then, the version is pinned to 3.4.1 (#4284).
</issue>
<code>
[start of redash/app.py]
1 from flask import Flask
2 from werkzeug.contrib.fixers import ProxyFix
3
4 from . import settings
5
6
7 class Redash(Flask):
8 """A custom Flask app for Redash"""
9 def __init__(self, *args, **kwargs):
10 kwargs.update({
11 'template_folder': settings.STATIC_ASSETS_PATH,
12 'static_folder': settings.STATIC_ASSETS_PATH,
13 'static_url_path': '/static',
14 })
15 super(Redash, self).__init__(__name__, *args, **kwargs)
16 # Make sure we get the right referral address even behind proxies like nginx.
17 self.wsgi_app = ProxyFix(self.wsgi_app, settings.PROXIES_COUNT)
18 # Configure Redash using our settings
19 self.config.from_object('redash.settings')
20
21
22 def create_app():
23 from . import authentication, extensions, handlers, limiter, mail, migrate, security
24 from .handlers import chrome_logger
25 from .handlers.webpack import configure_webpack
26 from .metrics import request as request_metrics
27 from .models import db, users
28 from .utils import sentry
29 from .version_check import reset_new_version_status
30
31 sentry.init()
32 app = Redash()
33
34 # Check and update the cached version for use by the client
35 app.before_first_request(reset_new_version_status)
36
37 security.init_app(app)
38 request_metrics.init_app(app)
39 db.init_app(app)
40 migrate.init_app(app, db)
41 mail.init_app(app)
42 authentication.init_app(app)
43 limiter.init_app(app)
44 handlers.init_app(app)
45 configure_webpack(app)
46 extensions.init_app(app)
47 chrome_logger.init_app(app)
48 users.init_app(app)
49
50 return app
51
[end of redash/app.py]
[start of redash/handlers/chrome_logger.py]
1 import time
2 import chromelogger
3 from flask import g, request
4 from flask_sqlalchemy import get_debug_queries
5
6
7 def log_queries():
8 total_duration = 0.0
9 queries_count = 0
10
11 chromelogger.group("SQL Queries")
12
13 for q in get_debug_queries():
14 total_duration += q.duration
15 queries_count += 1
16 chromelogger.info(q.statement % q.parameters)
17 chromelogger.info("Runtime: {:.2f}ms".format(1000 * q.duration))
18
19 chromelogger.info("{} queries executed in {:.2f}ms.".format(queries_count, total_duration*1000))
20
21 chromelogger.group_end("SQL Queries")
22
23
24 def chrome_log(response):
25 request_duration = (time.time() - g.start_time) * 1000
26 queries_duration = g.get('queries_duration', 0.0)
27 queries_count = g.get('queries_count', 0)
28
29 group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(
30 request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)
31
32 chromelogger.group_collapsed(group_name)
33
34 endpoint = (request.endpoint or 'unknown').replace('.', '_')
35 chromelogger.info('Endpoint: {}'.format(endpoint))
36 chromelogger.info('Content Type: {}'.format(response.content_type))
37 chromelogger.info('Content Length: {}'.format(response.content_length or -1))
38
39 log_queries()
40
41 chromelogger.group_end(group_name)
42
43 header = chromelogger.get_header()
44 if header is not None:
45 response.headers.add(*header)
46
47 return response
48
49
50 def init_app(app):
51 if not app.debug:
52 return
53
54 app.after_request(chrome_log)
55
[end of redash/handlers/chrome_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redash/app.py b/redash/app.py
--- a/redash/app.py
+++ b/redash/app.py
@@ -21,7 +21,6 @@
def create_app():
from . import authentication, extensions, handlers, limiter, mail, migrate, security
- from .handlers import chrome_logger
from .handlers.webpack import configure_webpack
from .metrics import request as request_metrics
from .models import db, users
@@ -44,7 +43,6 @@
handlers.init_app(app)
configure_webpack(app)
extensions.init_app(app)
- chrome_logger.init_app(app)
users.init_app(app)
return app
diff --git a/redash/handlers/chrome_logger.py b/redash/handlers/chrome_logger.py
deleted file mode 100644
--- a/redash/handlers/chrome_logger.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import time
-import chromelogger
-from flask import g, request
-from flask_sqlalchemy import get_debug_queries
-
-
-def log_queries():
- total_duration = 0.0
- queries_count = 0
-
- chromelogger.group("SQL Queries")
-
- for q in get_debug_queries():
- total_duration += q.duration
- queries_count += 1
- chromelogger.info(q.statement % q.parameters)
- chromelogger.info("Runtime: {:.2f}ms".format(1000 * q.duration))
-
- chromelogger.info("{} queries executed in {:.2f}ms.".format(queries_count, total_duration*1000))
-
- chromelogger.group_end("SQL Queries")
-
-
-def chrome_log(response):
- request_duration = (time.time() - g.start_time) * 1000
- queries_duration = g.get('queries_duration', 0.0)
- queries_count = g.get('queries_count', 0)
-
- group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(
- request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)
-
- chromelogger.group_collapsed(group_name)
-
- endpoint = (request.endpoint or 'unknown').replace('.', '_')
- chromelogger.info('Endpoint: {}'.format(endpoint))
- chromelogger.info('Content Type: {}'.format(response.content_type))
- chromelogger.info('Content Length: {}'.format(response.content_length or -1))
-
- log_queries()
-
- chromelogger.group_end(group_name)
-
- header = chromelogger.get_header()
- if header is not None:
- response.headers.add(*header)
-
- return response
-
-
-def init_app(app):
- if not app.debug:
- return
-
- app.after_request(chrome_log)
|
{"golden_diff": "diff --git a/redash/app.py b/redash/app.py\n--- a/redash/app.py\n+++ b/redash/app.py\n@@ -21,7 +21,6 @@\n \n def create_app():\n from . import authentication, extensions, handlers, limiter, mail, migrate, security\n- from .handlers import chrome_logger\n from .handlers.webpack import configure_webpack\n from .metrics import request as request_metrics\n from .models import db, users\n@@ -44,7 +43,6 @@\n handlers.init_app(app)\n configure_webpack(app)\n extensions.init_app(app)\n- chrome_logger.init_app(app)\n users.init_app(app)\n \n return app\ndiff --git a/redash/handlers/chrome_logger.py b/redash/handlers/chrome_logger.py\ndeleted file mode 100644\n--- a/redash/handlers/chrome_logger.py\n+++ /dev/null\n@@ -1,54 +0,0 @@\n-import time\n-import chromelogger\n-from flask import g, request\n-from flask_sqlalchemy import get_debug_queries\n-\n-\n-def log_queries():\n- total_duration = 0.0\n- queries_count = 0\n-\n- chromelogger.group(\"SQL Queries\")\n-\n- for q in get_debug_queries():\n- total_duration += q.duration\n- queries_count += 1\n- chromelogger.info(q.statement % q.parameters)\n- chromelogger.info(\"Runtime: {:.2f}ms\".format(1000 * q.duration))\n-\n- chromelogger.info(\"{} queries executed in {:.2f}ms.\".format(queries_count, total_duration*1000))\n-\n- chromelogger.group_end(\"SQL Queries\")\n-\n-\n-def chrome_log(response):\n- request_duration = (time.time() - g.start_time) * 1000\n- queries_duration = g.get('queries_duration', 0.0)\n- queries_count = g.get('queries_count', 0)\n-\n- group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(\n- request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)\n-\n- chromelogger.group_collapsed(group_name)\n-\n- endpoint = (request.endpoint or 'unknown').replace('.', '_')\n- chromelogger.info('Endpoint: {}'.format(endpoint))\n- chromelogger.info('Content Type: {}'.format(response.content_type))\n- chromelogger.info('Content Length: {}'.format(response.content_length or -1))\n-\n- log_queries()\n-\n- chromelogger.group_end(group_name)\n-\n- header = chromelogger.get_header()\n- if header is not None:\n- response.headers.add(*header)\n-\n- return response\n-\n-\n-def init_app(app):\n- if not app.debug:\n- return\n-\n- app.after_request(chrome_log)\n", "issue": "Make Cypress tests work with [email protected]\nRunning our tests with [email protected] doesn't work. Need to figure out what happened, until then pinning the version to 3.4.1 (#4284).\n", "before_files": [{"content": "from flask import Flask\nfrom werkzeug.contrib.fixers import ProxyFix\n\nfrom . import settings\n\n\nclass Redash(Flask):\n \"\"\"A custom Flask app for Redash\"\"\"\n def __init__(self, *args, **kwargs):\n kwargs.update({\n 'template_folder': settings.STATIC_ASSETS_PATH,\n 'static_folder': settings.STATIC_ASSETS_PATH,\n 'static_url_path': '/static',\n })\n super(Redash, self).__init__(__name__, *args, **kwargs)\n # Make sure we get the right referral address even behind proxies like nginx.\n self.wsgi_app = ProxyFix(self.wsgi_app, settings.PROXIES_COUNT)\n # Configure Redash using our settings\n self.config.from_object('redash.settings')\n\n\ndef create_app():\n from . 
import authentication, extensions, handlers, limiter, mail, migrate, security\n from .handlers import chrome_logger\n from .handlers.webpack import configure_webpack\n from .metrics import request as request_metrics\n from .models import db, users\n from .utils import sentry\n from .version_check import reset_new_version_status\n\n sentry.init()\n app = Redash()\n\n # Check and update the cached version for use by the client\n app.before_first_request(reset_new_version_status)\n\n security.init_app(app)\n request_metrics.init_app(app)\n db.init_app(app)\n migrate.init_app(app, db)\n mail.init_app(app)\n authentication.init_app(app)\n limiter.init_app(app)\n handlers.init_app(app)\n configure_webpack(app)\n extensions.init_app(app)\n chrome_logger.init_app(app)\n users.init_app(app)\n\n return app\n", "path": "redash/app.py"}, {"content": "import time\nimport chromelogger\nfrom flask import g, request\nfrom flask_sqlalchemy import get_debug_queries\n\n\ndef log_queries():\n total_duration = 0.0\n queries_count = 0\n\n chromelogger.group(\"SQL Queries\")\n\n for q in get_debug_queries():\n total_duration += q.duration\n queries_count += 1\n chromelogger.info(q.statement % q.parameters)\n chromelogger.info(\"Runtime: {:.2f}ms\".format(1000 * q.duration))\n\n chromelogger.info(\"{} queries executed in {:.2f}ms.\".format(queries_count, total_duration*1000))\n\n chromelogger.group_end(\"SQL Queries\")\n\n\ndef chrome_log(response):\n request_duration = (time.time() - g.start_time) * 1000\n queries_duration = g.get('queries_duration', 0.0)\n queries_count = g.get('queries_count', 0)\n\n group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(\n request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)\n\n chromelogger.group_collapsed(group_name)\n\n endpoint = (request.endpoint or 'unknown').replace('.', '_')\n chromelogger.info('Endpoint: {}'.format(endpoint))\n chromelogger.info('Content Type: {}'.format(response.content_type))\n chromelogger.info('Content Length: {}'.format(response.content_length or -1))\n\n log_queries()\n\n chromelogger.group_end(group_name)\n\n header = chromelogger.get_header()\n if header is not None:\n response.headers.add(*header)\n\n return response\n\n\ndef init_app(app):\n if not app.debug:\n return\n\n app.after_request(chrome_log)\n", "path": "redash/handlers/chrome_logger.py"}]}
| 1,571 | 642 |
gh_patches_debug_39704
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-1667
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sitemap indications of alternate language pages
https://support.google.com/webmasters/answer/2620865?hl=en
I do not have a multi-lingual page myself at this time, so I have no interest in implementing this. Nikola should support it, though.
</issue>
<code>
[start of nikola/plugins/task/sitemap/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import print_function, absolute_import, unicode_literals
28 import io
29 import datetime
30 import os
31 try:
32 from urlparse import urljoin, urlparse
33 import robotparser as robotparser
34 except ImportError:
35 from urllib.parse import urljoin, urlparse # NOQA
36 import urllib.robotparser as robotparser # NOQA
37
38 from nikola.plugin_categories import LateTask
39 from nikola.utils import config_changed, apply_filters
40
41
42 urlset_header = """<?xml version="1.0" encoding="UTF-8"?>
43 <urlset
44 xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
45 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
46 xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9
47 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
48 """
49
50 loc_format = """ <url>
51 <loc>{0}</loc>
52 <lastmod>{1}</lastmod>
53 </url>
54 """
55
56 urlset_footer = "</urlset>"
57
58 sitemapindex_header = """<?xml version="1.0" encoding="UTF-8"?>
59 <sitemapindex
60 xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
61 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
62 xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9
63 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
64 """
65
66 sitemap_format = """ <sitemap>
67 <loc>{0}</loc>
68 <lastmod>{1}</lastmod>
69 </sitemap>
70 """
71
72 sitemapindex_footer = "</sitemapindex>"
73
74
75 def get_base_path(base):
76 """returns the path of a base URL if it contains one.
77
78 >>> get_base_path('http://some.site') == '/'
79 True
80 >>> get_base_path('http://some.site/') == '/'
81 True
82 >>> get_base_path('http://some.site/some/sub-path') == '/some/sub-path/'
83 True
84 >>> get_base_path('http://some.site/some/sub-path/') == '/some/sub-path/'
85 True
86 """
87 # first parse the base_url for some path
88 base_parsed = urlparse(base)
89
90 if not base_parsed.path:
91 sub_path = ''
92 else:
93 sub_path = base_parsed.path
94 if sub_path.endswith('/'):
95 return sub_path
96 else:
97 return sub_path + '/'
98
99
100 class Sitemap(LateTask):
101 """Generate a sitemap."""
102
103 name = "sitemap"
104
105 def gen_tasks(self):
106 """Generate a sitemap."""
107 kw = {
108 "base_url": self.site.config["BASE_URL"],
109 "site_url": self.site.config["SITE_URL"],
110 "output_folder": self.site.config["OUTPUT_FOLDER"],
111 "strip_indexes": self.site.config["STRIP_INDEXES"],
112 "index_file": self.site.config["INDEX_FILE"],
113 "sitemap_include_fileless_dirs": self.site.config["SITEMAP_INCLUDE_FILELESS_DIRS"],
114 "mapped_extensions": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),
115 "robots_exclusions": self.site.config["ROBOTS_EXCLUSIONS"],
116 "filters": self.site.config["FILTERS"],
117 }
118
119 output = kw['output_folder']
120 base_url = kw['base_url']
121 mapped_exts = kw['mapped_extensions']
122
123 output_path = kw['output_folder']
124 sitemapindex_path = os.path.join(output_path, "sitemapindex.xml")
125 sitemap_path = os.path.join(output_path, "sitemap.xml")
126 base_path = get_base_path(kw['base_url'])
127 sitemapindex = {}
128 urlset = {}
129
130 def scan_locs():
131 for root, dirs, files in os.walk(output, followlinks=True):
132 if not dirs and not files and not kw['sitemap_include_fileless_dirs']:
133 continue # Totally empty, not on sitemap
134 path = os.path.relpath(root, output)
135 # ignore the current directory.
136 path = (path.replace(os.sep, '/') + '/').replace('./', '')
137 lastmod = self.get_lastmod(root)
138 loc = urljoin(base_url, base_path + path)
139 if kw['index_file'] in files and kw['strip_indexes']: # ignore folders when not stripping urls
140 post = self.site.post_per_file.get(path + kw['index_file'])
141 if post and (post.is_draft or post.is_private or post.publish_later):
142 continue
143 urlset[loc] = loc_format.format(loc, lastmod)
144 for fname in files:
145 if kw['strip_indexes'] and fname == kw['index_file']:
146 continue # We already mapped the folder
147 if os.path.splitext(fname)[-1] in mapped_exts:
148 real_path = os.path.join(root, fname)
149 path = os.path.relpath(real_path, output)
150 if path.endswith(kw['index_file']) and kw['strip_indexes']:
151 # ignore index files when stripping urls
152 continue
153 if not robot_fetch(path):
154 continue
155 if path.endswith('.html') or path.endswith('.htm'):
156 try:
157 if u'<!doctype html' not in io.open(real_path, 'r', encoding='utf8').read(1024).lower():
158 # ignores "html" files without doctype
159 # alexa-verify, google-site-verification, etc.
160 continue
161 except UnicodeDecodeError:
162 # ignore ancient files
163 # most non-utf8 files are worthless anyways
164 continue
165 """ put RSS in sitemapindex[] instead of in urlset[], sitemap_path is included after it is generated """
166 if path.endswith('.xml') or path.endswith('.rss'):
167 filehead = io.open(real_path, 'r', encoding='utf8').read(512)
168 if u'<rss' in filehead or (u'<urlset' in filehead and path != sitemap_path):
169 path = path.replace(os.sep, '/')
170 lastmod = self.get_lastmod(real_path)
171 loc = urljoin(base_url, base_path + path)
172 sitemapindex[loc] = sitemap_format.format(loc, lastmod)
173 continue
174 else:
175 continue # ignores all XML files except those presumed to be RSS
176 post = self.site.post_per_file.get(path)
177 if post and (post.is_draft or post.is_private or post.publish_later):
178 continue
179 path = path.replace(os.sep, '/')
180 lastmod = self.get_lastmod(real_path)
181 loc = urljoin(base_url, base_path + path)
182 urlset[loc] = loc_format.format(loc, lastmod)
183
184 def robot_fetch(path):
185 for rule in kw["robots_exclusions"]:
186 robot = robotparser.RobotFileParser()
187 robot.parse(["User-Agent: *", "Disallow: {0}".format(rule)])
188 if not robot.can_fetch("*", '/' + path):
189 return False # not robot food
190 return True
191
192 def write_sitemap():
193 # Have to rescan, because files may have been added between
194 # task dep scanning and task execution
195 with io.open(sitemap_path, 'w+', encoding='utf8') as outf:
196 outf.write(urlset_header)
197 for k in sorted(urlset.keys()):
198 outf.write(urlset[k])
199 outf.write(urlset_footer)
200 sitemap_url = urljoin(base_url, base_path + "sitemap.xml")
201 sitemapindex[sitemap_url] = sitemap_format.format(sitemap_url, self.get_lastmod(sitemap_path))
202
203 def write_sitemapindex():
204 with io.open(sitemapindex_path, 'w+', encoding='utf8') as outf:
205 outf.write(sitemapindex_header)
206 for k in sorted(sitemapindex.keys()):
207 outf.write(sitemapindex[k])
208 outf.write(sitemapindex_footer)
209
210 # Yield a task to calculate the dependencies of the sitemap
211 # Other tasks can depend on this output, instead of having
212 # to scan locations.
213 def scan_locs_task():
214 scan_locs()
215
216 # Generate a list of file dependencies for the actual generation
217 # task, so rebuilds are triggered. (Issue #1032)
218 output = kw["output_folder"]
219 file_dep = []
220
221 for i in urlset.keys():
222 p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))
223 if not p.endswith('sitemap.xml') and not os.path.isdir(p):
224 file_dep.append(p)
225 if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):
226 file_dep.append(p + 'index.html')
227
228 for i in sitemapindex.keys():
229 p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))
230 if not p.endswith('sitemap.xml') and not os.path.isdir(p):
231 file_dep.append(p)
232 if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):
233 file_dep.append(p + 'index.html')
234
235 return {'file_dep': file_dep}
236
237 yield {
238 "basename": "_scan_locs",
239 "name": "sitemap",
240 "actions": [(scan_locs_task)]
241 }
242
243 yield self.group_task()
244 yield apply_filters({
245 "basename": "sitemap",
246 "name": sitemap_path,
247 "targets": [sitemap_path],
248 "actions": [(write_sitemap,)],
249 "uptodate": [config_changed(kw, 'nikola.plugins.task.sitemap:write')],
250 "clean": True,
251 "task_dep": ["render_site"],
252 "calc_dep": ["_scan_locs:sitemap"],
253 }, kw['filters'])
254 yield apply_filters({
255 "basename": "sitemap",
256 "name": sitemapindex_path,
257 "targets": [sitemapindex_path],
258 "actions": [(write_sitemapindex,)],
259 "uptodate": [config_changed(kw, 'nikola.plugins.task.sitemap:write_index')],
260 "clean": True,
261 "file_dep": [sitemap_path]
262 }, kw['filters'])
263
264 def get_lastmod(self, p):
265 if self.site.invariant:
266 return '2038-01-01'
267 else:
268 return datetime.datetime.fromtimestamp(os.stat(p).st_mtime).isoformat().split('T')[0]
269
270 if __name__ == '__main__':
271 import doctest
272 doctest.testmod()
273
[end of nikola/plugins/task/sitemap/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/plugins/task/sitemap/__init__.py b/nikola/plugins/task/sitemap/__init__.py
--- a/nikola/plugins/task/sitemap/__init__.py
+++ b/nikola/plugins/task/sitemap/__init__.py
@@ -50,6 +50,7 @@
loc_format = """ <url>
<loc>{0}</loc>
<lastmod>{1}</lastmod>
+ {2}
</url>
"""
@@ -69,6 +70,13 @@
</sitemap>
"""
+alternates_format = """<xhtml:link
+ rel="alternate"
+ hreflang="{0}"
+ href="{1}"
+ />"""
+
+
sitemapindex_footer = "</sitemapindex>"
@@ -114,6 +122,7 @@
"mapped_extensions": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),
"robots_exclusions": self.site.config["ROBOTS_EXCLUSIONS"],
"filters": self.site.config["FILTERS"],
+ "translations": self.site.config["TRANSLATIONS"],
}
output = kw['output_folder']
@@ -140,7 +149,14 @@
post = self.site.post_per_file.get(path + kw['index_file'])
if post and (post.is_draft or post.is_private or post.publish_later):
continue
- urlset[loc] = loc_format.format(loc, lastmod)
+ alternates = []
+ if post:
+ for lang in kw['translations']:
+ alt_url = post.permalink(lang=lang, absolute=True)
+ if loc == alt_url:
+ continue
+ alternates.append(alternates_format.format(lang, alt_url))
+ urlset[loc] = loc_format.format(loc, lastmod, '\n'.join(alternates))
for fname in files:
if kw['strip_indexes'] and fname == kw['index_file']:
continue # We already mapped the folder
@@ -179,7 +195,14 @@
path = path.replace(os.sep, '/')
lastmod = self.get_lastmod(real_path)
loc = urljoin(base_url, base_path + path)
- urlset[loc] = loc_format.format(loc, lastmod)
+ alternates = []
+ if post:
+ for lang in kw['translations']:
+ alt_url = post.permalink(lang=lang, absolute=True)
+ if loc == alt_url:
+ continue
+ alternates.append(alternates_format.format(lang, alt_url))
+ urlset[loc] = loc_format.format(loc, lastmod, '\n'.join(alternates))
def robot_fetch(path):
for rule in kw["robots_exclusions"]:
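
To make the intent of the new template concrete, here is a standalone sketch of how `loc_format` and `alternates_format` compose into a single `<url>` entry carrying `hreflang` alternates, as described in the Google help page linked in the issue. The URLs, languages, and date are made-up sample values, and the format strings are trimmed copies of the ones in the patched module.

```python
# Trimmed copies of the templates from the sitemap plugin after the patch.
loc_format = """    <url>
        <loc>{0}</loc>
        <lastmod>{1}</lastmod>
        {2}
    </url>
"""

alternates_format = """<xhtml:link
            rel="alternate"
            hreflang="{0}"
            href="{1}"
            />"""

# One alternate per extra translation of the post (sample values only).
alternates = [
    alternates_format.format("es", "https://example.com/es/posts/hello/"),
    alternates_format.format("fr", "https://example.com/fr/posts/hello/"),
]
print(loc_format.format("https://example.com/posts/hello/",
                        "2015-04-01",
                        "\n".join(alternates)))
```

Since the alternate links use the `xhtml` namespace prefix, a fully valid sitemap also needs `xmlns:xhtml="http://www.w3.org/1999/xhtml"` declared on the `<urlset>` element alongside the namespaces already present in `urlset_header`.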
|
{"golden_diff": "diff --git a/nikola/plugins/task/sitemap/__init__.py b/nikola/plugins/task/sitemap/__init__.py\n--- a/nikola/plugins/task/sitemap/__init__.py\n+++ b/nikola/plugins/task/sitemap/__init__.py\n@@ -50,6 +50,7 @@\n loc_format = \"\"\" <url>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n+ {2}\n </url>\n \"\"\"\n \n@@ -69,6 +70,13 @@\n </sitemap>\n \"\"\"\n \n+alternates_format = \"\"\"<xhtml:link\n+ rel=\"alternate\"\n+ hreflang=\"{0}\"\n+ href=\"{1}\"\n+ />\"\"\"\n+\n+\n sitemapindex_footer = \"</sitemapindex>\"\n \n \n@@ -114,6 +122,7 @@\n \"mapped_extensions\": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),\n \"robots_exclusions\": self.site.config[\"ROBOTS_EXCLUSIONS\"],\n \"filters\": self.site.config[\"FILTERS\"],\n+ \"translations\": self.site.config[\"TRANSLATIONS\"],\n }\n \n output = kw['output_folder']\n@@ -140,7 +149,14 @@\n post = self.site.post_per_file.get(path + kw['index_file'])\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n- urlset[loc] = loc_format.format(loc, lastmod)\n+ alternates = []\n+ if post:\n+ for lang in kw['translations']:\n+ alt_url = post.permalink(lang=lang, absolute=True)\n+ if loc == alt_url:\n+ continue\n+ alternates.append(alternates_format.format(lang, alt_url))\n+ urlset[loc] = loc_format.format(loc, lastmod, '\\n'.join(alternates))\n for fname in files:\n if kw['strip_indexes'] and fname == kw['index_file']:\n continue # We already mapped the folder\n@@ -179,7 +195,14 @@\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n- urlset[loc] = loc_format.format(loc, lastmod)\n+ alternates = []\n+ if post:\n+ for lang in kw['translations']:\n+ alt_url = post.permalink(lang=lang, absolute=True)\n+ if loc == alt_url:\n+ continue\n+ alternates.append(alternates_format.format(lang, alt_url))\n+ urlset[loc] = loc_format.format(loc, lastmod, '\\n'.join(alternates))\n \n def robot_fetch(path):\n for rule in kw[\"robots_exclusions\"]:\n", "issue": "Sitemap indications of alternate language pages\nhttps://support.google.com/webmasters/answer/2620865?hl=en\n\nI do not have a multi-lingual page myself at this time, so I have no interest in implementing this. Nikola should support it, though.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function, absolute_import, unicode_literals\nimport io\nimport datetime\nimport os\ntry:\n from urlparse import urljoin, urlparse\n import robotparser as robotparser\nexcept ImportError:\n from urllib.parse import urljoin, urlparse # NOQA\n import urllib.robotparser as robotparser # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola.utils import config_changed, apply_filters\n\n\nurlset_header = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<urlset\n xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9\n http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\">\n\"\"\"\n\nloc_format = \"\"\" <url>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n </url>\n\"\"\"\n\nurlset_footer = \"</urlset>\"\n\nsitemapindex_header = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<sitemapindex\n xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9\n http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\">\n\"\"\"\n\nsitemap_format = \"\"\" <sitemap>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n </sitemap>\n\"\"\"\n\nsitemapindex_footer = \"</sitemapindex>\"\n\n\ndef get_base_path(base):\n \"\"\"returns the path of a base URL if it contains one.\n\n >>> get_base_path('http://some.site') == '/'\n True\n >>> get_base_path('http://some.site/') == '/'\n True\n >>> get_base_path('http://some.site/some/sub-path') == '/some/sub-path/'\n True\n >>> get_base_path('http://some.site/some/sub-path/') == '/some/sub-path/'\n True\n \"\"\"\n # first parse the base_url for some path\n base_parsed = urlparse(base)\n\n if not base_parsed.path:\n sub_path = ''\n else:\n sub_path = base_parsed.path\n if sub_path.endswith('/'):\n return sub_path\n else:\n return sub_path + '/'\n\n\nclass Sitemap(LateTask):\n \"\"\"Generate a sitemap.\"\"\"\n\n name = \"sitemap\"\n\n def gen_tasks(self):\n \"\"\"Generate a sitemap.\"\"\"\n kw = {\n \"base_url\": self.site.config[\"BASE_URL\"],\n \"site_url\": self.site.config[\"SITE_URL\"],\n \"output_folder\": self.site.config[\"OUTPUT_FOLDER\"],\n \"strip_indexes\": self.site.config[\"STRIP_INDEXES\"],\n \"index_file\": self.site.config[\"INDEX_FILE\"],\n \"sitemap_include_fileless_dirs\": self.site.config[\"SITEMAP_INCLUDE_FILELESS_DIRS\"],\n \"mapped_extensions\": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),\n \"robots_exclusions\": self.site.config[\"ROBOTS_EXCLUSIONS\"],\n \"filters\": self.site.config[\"FILTERS\"],\n }\n\n output = kw['output_folder']\n base_url = kw['base_url']\n mapped_exts = kw['mapped_extensions']\n\n output_path = kw['output_folder']\n sitemapindex_path = os.path.join(output_path, \"sitemapindex.xml\")\n sitemap_path = os.path.join(output_path, \"sitemap.xml\")\n base_path = get_base_path(kw['base_url'])\n sitemapindex = {}\n urlset = {}\n\n def scan_locs():\n for root, dirs, files in os.walk(output, followlinks=True):\n if not dirs and not files and not kw['sitemap_include_fileless_dirs']:\n continue # Totally empty, not on sitemap\n path = os.path.relpath(root, 
output)\n # ignore the current directory.\n path = (path.replace(os.sep, '/') + '/').replace('./', '')\n lastmod = self.get_lastmod(root)\n loc = urljoin(base_url, base_path + path)\n if kw['index_file'] in files and kw['strip_indexes']: # ignore folders when not stripping urls\n post = self.site.post_per_file.get(path + kw['index_file'])\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n urlset[loc] = loc_format.format(loc, lastmod)\n for fname in files:\n if kw['strip_indexes'] and fname == kw['index_file']:\n continue # We already mapped the folder\n if os.path.splitext(fname)[-1] in mapped_exts:\n real_path = os.path.join(root, fname)\n path = os.path.relpath(real_path, output)\n if path.endswith(kw['index_file']) and kw['strip_indexes']:\n # ignore index files when stripping urls\n continue\n if not robot_fetch(path):\n continue\n if path.endswith('.html') or path.endswith('.htm'):\n try:\n if u'<!doctype html' not in io.open(real_path, 'r', encoding='utf8').read(1024).lower():\n # ignores \"html\" files without doctype\n # alexa-verify, google-site-verification, etc.\n continue\n except UnicodeDecodeError:\n # ignore ancient files\n # most non-utf8 files are worthless anyways\n continue\n \"\"\" put RSS in sitemapindex[] instead of in urlset[], sitemap_path is included after it is generated \"\"\"\n if path.endswith('.xml') or path.endswith('.rss'):\n filehead = io.open(real_path, 'r', encoding='utf8').read(512)\n if u'<rss' in filehead or (u'<urlset' in filehead and path != sitemap_path):\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n sitemapindex[loc] = sitemap_format.format(loc, lastmod)\n continue\n else:\n continue # ignores all XML files except those presumed to be RSS\n post = self.site.post_per_file.get(path)\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n urlset[loc] = loc_format.format(loc, lastmod)\n\n def robot_fetch(path):\n for rule in kw[\"robots_exclusions\"]:\n robot = robotparser.RobotFileParser()\n robot.parse([\"User-Agent: *\", \"Disallow: {0}\".format(rule)])\n if not robot.can_fetch(\"*\", '/' + path):\n return False # not robot food\n return True\n\n def write_sitemap():\n # Have to rescan, because files may have been added between\n # task dep scanning and task execution\n with io.open(sitemap_path, 'w+', encoding='utf8') as outf:\n outf.write(urlset_header)\n for k in sorted(urlset.keys()):\n outf.write(urlset[k])\n outf.write(urlset_footer)\n sitemap_url = urljoin(base_url, base_path + \"sitemap.xml\")\n sitemapindex[sitemap_url] = sitemap_format.format(sitemap_url, self.get_lastmod(sitemap_path))\n\n def write_sitemapindex():\n with io.open(sitemapindex_path, 'w+', encoding='utf8') as outf:\n outf.write(sitemapindex_header)\n for k in sorted(sitemapindex.keys()):\n outf.write(sitemapindex[k])\n outf.write(sitemapindex_footer)\n\n # Yield a task to calculate the dependencies of the sitemap\n # Other tasks can depend on this output, instead of having\n # to scan locations.\n def scan_locs_task():\n scan_locs()\n\n # Generate a list of file dependencies for the actual generation\n # task, so rebuilds are triggered. 
(Issue #1032)\n output = kw[\"output_folder\"]\n file_dep = []\n\n for i in urlset.keys():\n p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))\n if not p.endswith('sitemap.xml') and not os.path.isdir(p):\n file_dep.append(p)\n if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):\n file_dep.append(p + 'index.html')\n\n for i in sitemapindex.keys():\n p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))\n if not p.endswith('sitemap.xml') and not os.path.isdir(p):\n file_dep.append(p)\n if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):\n file_dep.append(p + 'index.html')\n\n return {'file_dep': file_dep}\n\n yield {\n \"basename\": \"_scan_locs\",\n \"name\": \"sitemap\",\n \"actions\": [(scan_locs_task)]\n }\n\n yield self.group_task()\n yield apply_filters({\n \"basename\": \"sitemap\",\n \"name\": sitemap_path,\n \"targets\": [sitemap_path],\n \"actions\": [(write_sitemap,)],\n \"uptodate\": [config_changed(kw, 'nikola.plugins.task.sitemap:write')],\n \"clean\": True,\n \"task_dep\": [\"render_site\"],\n \"calc_dep\": [\"_scan_locs:sitemap\"],\n }, kw['filters'])\n yield apply_filters({\n \"basename\": \"sitemap\",\n \"name\": sitemapindex_path,\n \"targets\": [sitemapindex_path],\n \"actions\": [(write_sitemapindex,)],\n \"uptodate\": [config_changed(kw, 'nikola.plugins.task.sitemap:write_index')],\n \"clean\": True,\n \"file_dep\": [sitemap_path]\n }, kw['filters'])\n\n def get_lastmod(self, p):\n if self.site.invariant:\n return '2038-01-01'\n else:\n return datetime.datetime.fromtimestamp(os.stat(p).st_mtime).isoformat().split('T')[0]\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n", "path": "nikola/plugins/task/sitemap/__init__.py"}]}
| 3,840 | 610 |
gh_patches_debug_35467 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2019 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typecasting column to JSON_List/Map doesn't work.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
1. Create an empty table.
2. Create a new column and add a JSON array to it.
3. Try to typecast the column to JSON List.
4. Notice there is no error/change to the data type of the column.
## Additional context
<!-- Add any other context about the problem or screenshots here. -->

</issue>
<code>
[start of mathesar/api/serializers/columns.py]
1 from rest_framework import serializers
2 from rest_framework.exceptions import ValidationError
3 from rest_framework.fields import empty, SerializerMethodField
4 from rest_framework.settings import api_settings
5
6 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
7 from mathesar.api.serializers.shared_serializers import (
8 DisplayOptionsMappingSerializer,
9 DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,
10 )
11 from mathesar.models.base import Column
12 from db.types.operations.convert import get_db_type_enum_from_id
13
14
15 class InputValueField(serializers.CharField):
16 """
17 Takes in an arbitrary value. Emulates the record creation endpoint,
18 which takes in arbitrary values (un-validated and un-processed request.data).
19 This field replicates that behavior in a serializer.
20 """
21
22 def to_internal_value(self, data):
23 return data
24
25 def to_representation(self, value):
26 return value
27
28
29 class TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):
30 length = serializers.IntegerField(required=False)
31 precision = serializers.IntegerField(required=False)
32 scale = serializers.IntegerField(required=False)
33 fields = serializers.CharField(required=False)
34
35 def validate(self, attrs):
36 if attrs.get('scale', None) is not None and attrs.get('precision', None) is None:
37 attrs['precision'] = 1000
38 return super().validate(attrs)
39
40 def run_validation(self, data=empty):
41 # Ensure that there are no unknown type options passed in.
42 if data is not empty and data is not None:
43 unknown = set(data) - set(self.fields)
44 if unknown:
45 errors = ['Unknown field: {}'.format(field) for field in unknown]
46 raise serializers.ValidationError({
47 api_settings.NON_FIELD_ERRORS_KEY: errors,
48 })
49
50 return super(TypeOptionSerializer, self).run_validation(data)
51
52
53 TYPE_KEY = 'type'
54 DISPLAY_OPTIONS_KEY = 'display_options'
55
56
57 class SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
58 class Meta:
59 model = Column
60 fields = ('id',
61 'name',
62 TYPE_KEY,
63 'type_options',
64 DISPLAY_OPTIONS_KEY,
65 )
66 id = serializers.IntegerField(required=False)
67 name = serializers.CharField()
68 # TODO consider renaming type and type_options to db_type and db_type_options
69 # The name of below attribute should match value of TYPE_KEY
70 type = serializers.CharField()
71 type_options = TypeOptionSerializer(required=False, allow_null=True)
72 # The name of below attribute should match value of DISPLAY_OPTIONS_KEY
73 display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)
74
75 def to_representation(self, instance):
76 if isinstance(instance, dict):
77 db_type_id = instance.get(TYPE_KEY)
78 db_type = get_db_type_enum_from_id(db_type_id)
79 else:
80 db_type = instance.db_type
81 # TODO replace or remove this assert before production
82 assert db_type is not None
83 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type
84 representation = super().to_representation(instance)
85 _force_canonical_type(representation, db_type)
86 return representation
87
88 def to_internal_value(self, data):
89 if self.partial and TYPE_KEY not in data:
90 db_type = getattr(self.instance, 'db_type', None)
91 else:
92 db_type_id = data.get(TYPE_KEY, None)
93 db_type = get_db_type_enum_from_id(db_type_id) if db_type_id else None
94 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type
95 return super().to_internal_value(data)
96
97
98 def _force_canonical_type(representation, db_type):
99 """
100 Sometimes the representation's TYPE_KEY attribute will also include type option information
101 (e.g. `numeric(3, 5)`). We override the attribute's value to a canonical type id.
102
103 This might be better solved upstream, but since our Column model subclasses SA's Column,
104 overriding its TYPE_KEY attribute, might interfere with SA's workings.
105 """
106 representation[TYPE_KEY] = db_type.id
107 return representation
108
109
110 class ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):
111 value = InputValueField()
112 is_dynamic = serializers.BooleanField(read_only=True)
113
114
115 class ColumnSerializer(SimpleColumnSerializer):
116 class Meta(SimpleColumnSerializer.Meta):
117 fields = SimpleColumnSerializer.Meta.fields + (
118 'nullable',
119 'primary_key',
120 'source_column',
121 'copy_source_data',
122 'copy_source_constraints',
123 'valid_target_types',
124 'default',
125 'has_dependents',
126 )
127 model_fields = (DISPLAY_OPTIONS_KEY,)
128
129 name = serializers.CharField(required=False, allow_blank=True)
130
131 # From scratch fields
132 type = serializers.CharField(required=False)
133 nullable = serializers.BooleanField(default=True)
134 primary_key = serializers.BooleanField(default=False)
135 default = ColumnDefaultSerializer(
136 source='column_default_dict', required=False, allow_null=True, default=None
137 )
138
139 # From duplication fields
140 source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)
141 copy_source_data = serializers.BooleanField(default=True, write_only=True)
142 copy_source_constraints = serializers.BooleanField(default=True, write_only=True)
143
144 # Read only fields
145 valid_target_types = SerializerMethodField(method_name='get_valid_target_types', read_only=True)
146
147 def validate(self, data):
148 data = super().validate(data)
149 # Reevaluate column display options based on the new column type.
150 if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:
151 if self.instance:
152 db_type = getattr(self.instance, 'db_type', None)
153 # Invalidate display_options if type has been changed
154 if db_type is not None:
155 if str(db_type.id) != data[TYPE_KEY]:
156 data[DISPLAY_OPTIONS_KEY] = None
157 else:
158 data[DISPLAY_OPTIONS_KEY] = None
159 if not self.partial:
160 from_scratch_required_fields = [TYPE_KEY]
161 from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']
162 from_dupe_required_fields = ['source_column']
163 from_dupe_specific_fields = ['source_column', 'copy_source_data',
164 'copy_source_constraints']
165
166 # Note that we run validation on self.initial_data, as `data` has defaults
167 # filled in for fields that weren't specified by the request
168 from_scratch_required_all = all([
169 f in self.initial_data for f in from_scratch_required_fields
170 ])
171 from_scratch_specific_in = [
172 f for f in from_scratch_specific_fields if f in self.initial_data
173 ]
174 from_dupe_required_all = all([
175 f in self.initial_data for f in from_dupe_required_fields
176 ])
177 from_dupe_specific_in = [
178 f for f in from_dupe_specific_fields if f in self.initial_data
179 ]
180
181 if len(from_dupe_specific_in) and len(from_scratch_specific_in):
182 raise ValidationError(
183 f'{from_scratch_specific_in} cannot be passed in if '
184 f'{from_dupe_specific_in} has also been passed in.'
185 )
186 elif not from_dupe_required_all and not from_scratch_required_all:
187 # We default to from scratch required fields if no fields are passed
188 if len(from_dupe_specific_in) and not len(from_scratch_specific_in):
189 required_fields = from_dupe_required_fields
190 else:
191 required_fields = from_scratch_required_fields
192 raise ValidationError({
193 f: ['This field is required.']
194 for f in required_fields
195 if f not in self.initial_data
196 })
197 return data
198
199 @property
200 def validated_model_fields(self):
201 return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}
202
203 def get_valid_target_types(self, column):
204 valid_target_types = column.valid_target_types
205 if valid_target_types:
206 valid_target_type_ids = tuple(
207 db_type.id for db_type in valid_target_types
208 )
209 return valid_target_type_ids
210
[end of mathesar/api/serializers/columns.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mathesar/api/serializers/columns.py b/mathesar/api/serializers/columns.py
--- a/mathesar/api/serializers/columns.py
+++ b/mathesar/api/serializers/columns.py
@@ -1,4 +1,4 @@
-from rest_framework import serializers
+from rest_framework import serializers, status
from rest_framework.exceptions import ValidationError
from rest_framework.fields import empty, SerializerMethodField
from rest_framework.settings import api_settings
@@ -8,6 +8,10 @@
DisplayOptionsMappingSerializer,
DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,
)
+from mathesar.api.exceptions.database_exceptions import (
+ exceptions as database_api_exceptions,
+)
+from db.columns.exceptions import InvalidTypeError
from mathesar.models.base import Column
from db.types.operations.convert import get_db_type_enum_from_id
@@ -147,15 +151,20 @@
def validate(self, data):
data = super().validate(data)
# Reevaluate column display options based on the new column type.
- if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:
- if self.instance:
+ if TYPE_KEY in data and self.instance:
+ db_type = get_db_type_enum_from_id(data[TYPE_KEY].lower())
+ target_types = self.instance.valid_target_types
+ if db_type not in target_types:
+ raise database_api_exceptions.InvalidTypeCastAPIException(
+ InvalidTypeError,
+ status_code=status.HTTP_400_BAD_REQUEST
+ )
+ if DISPLAY_OPTIONS_KEY not in data:
db_type = getattr(self.instance, 'db_type', None)
# Invalidate display_options if type has been changed
if db_type is not None:
if str(db_type.id) != data[TYPE_KEY]:
data[DISPLAY_OPTIONS_KEY] = None
- else:
- data[DISPLAY_OPTIONS_KEY] = None
if not self.partial:
from_scratch_required_fields = [TYPE_KEY]
from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']
|
{"golden_diff": "diff --git a/mathesar/api/serializers/columns.py b/mathesar/api/serializers/columns.py\n--- a/mathesar/api/serializers/columns.py\n+++ b/mathesar/api/serializers/columns.py\n@@ -1,4 +1,4 @@\n-from rest_framework import serializers\n+from rest_framework import serializers, status\n from rest_framework.exceptions import ValidationError\n from rest_framework.fields import empty, SerializerMethodField\n from rest_framework.settings import api_settings\n@@ -8,6 +8,10 @@\n DisplayOptionsMappingSerializer,\n DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,\n )\n+from mathesar.api.exceptions.database_exceptions import (\n+ exceptions as database_api_exceptions,\n+)\n+from db.columns.exceptions import InvalidTypeError\n from mathesar.models.base import Column\n from db.types.operations.convert import get_db_type_enum_from_id\n \n@@ -147,15 +151,20 @@\n def validate(self, data):\n data = super().validate(data)\n # Reevaluate column display options based on the new column type.\n- if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:\n- if self.instance:\n+ if TYPE_KEY in data and self.instance:\n+ db_type = get_db_type_enum_from_id(data[TYPE_KEY].lower())\n+ target_types = self.instance.valid_target_types\n+ if db_type not in target_types:\n+ raise database_api_exceptions.InvalidTypeCastAPIException(\n+ InvalidTypeError,\n+ status_code=status.HTTP_400_BAD_REQUEST\n+ )\n+ if DISPLAY_OPTIONS_KEY not in data:\n db_type = getattr(self.instance, 'db_type', None)\n # Invalidate display_options if type has been changed\n if db_type is not None:\n if str(db_type.id) != data[TYPE_KEY]:\n data[DISPLAY_OPTIONS_KEY] = None\n- else:\n- data[DISPLAY_OPTIONS_KEY] = None\n if not self.partial:\n from_scratch_required_fields = [TYPE_KEY]\n from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']\n", "issue": "Typecating column to JSON_List/Map doesn't work.\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n\r\n1. Create an empty table.\r\n2. Create a new column and add a JSON array to it.\r\n3. Try to typecast the column to JSON List.\r\n4. Notice there is no error/change to the data type of the column.\r\n\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. -->\r\n\r\n\n", "before_files": [{"content": "from rest_framework import serializers\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import empty, SerializerMethodField\nfrom rest_framework.settings import api_settings\n\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.serializers.shared_serializers import (\n DisplayOptionsMappingSerializer,\n DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,\n)\nfrom mathesar.models.base import Column\nfrom db.types.operations.convert import get_db_type_enum_from_id\n\n\nclass InputValueField(serializers.CharField):\n \"\"\"\n Takes in an arbitrary value. 
Emulates the record creation endpoint,\n which takes in arbitrary values (un-validated and un-processed request.data).\n This field replicates that behavior in a serializer.\n \"\"\"\n\n def to_internal_value(self, data):\n return data\n\n def to_representation(self, value):\n return value\n\n\nclass TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n length = serializers.IntegerField(required=False)\n precision = serializers.IntegerField(required=False)\n scale = serializers.IntegerField(required=False)\n fields = serializers.CharField(required=False)\n\n def validate(self, attrs):\n if attrs.get('scale', None) is not None and attrs.get('precision', None) is None:\n attrs['precision'] = 1000\n return super().validate(attrs)\n\n def run_validation(self, data=empty):\n # Ensure that there are no unknown type options passed in.\n if data is not empty and data is not None:\n unknown = set(data) - set(self.fields)\n if unknown:\n errors = ['Unknown field: {}'.format(field) for field in unknown]\n raise serializers.ValidationError({\n api_settings.NON_FIELD_ERRORS_KEY: errors,\n })\n\n return super(TypeOptionSerializer, self).run_validation(data)\n\n\nTYPE_KEY = 'type'\nDISPLAY_OPTIONS_KEY = 'display_options'\n\n\nclass SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = Column\n fields = ('id',\n 'name',\n TYPE_KEY,\n 'type_options',\n DISPLAY_OPTIONS_KEY,\n )\n id = serializers.IntegerField(required=False)\n name = serializers.CharField()\n # TODO consider renaming type and type_options to db_type and db_type_options\n # The name of below attribute should match value of TYPE_KEY\n type = serializers.CharField()\n type_options = TypeOptionSerializer(required=False, allow_null=True)\n # The name of below attribute should match value of DISPLAY_OPTIONS_KEY\n display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)\n\n def to_representation(self, instance):\n if isinstance(instance, dict):\n db_type_id = instance.get(TYPE_KEY)\n db_type = get_db_type_enum_from_id(db_type_id)\n else:\n db_type = instance.db_type\n # TODO replace or remove this assert before production\n assert db_type is not None\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type\n representation = super().to_representation(instance)\n _force_canonical_type(representation, db_type)\n return representation\n\n def to_internal_value(self, data):\n if self.partial and TYPE_KEY not in data:\n db_type = getattr(self.instance, 'db_type', None)\n else:\n db_type_id = data.get(TYPE_KEY, None)\n db_type = get_db_type_enum_from_id(db_type_id) if db_type_id else None\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type\n return super().to_internal_value(data)\n\n\ndef _force_canonical_type(representation, db_type):\n \"\"\"\n Sometimes the representation's TYPE_KEY attribute will also include type option information\n (e.g. `numeric(3, 5)`). 
We override the attribute's value to a canonical type id.\n\n This might be better solved upstream, but since our Column model subclasses SA's Column,\n overriding its TYPE_KEY attribute, might interfere with SA's workings.\n \"\"\"\n representation[TYPE_KEY] = db_type.id\n return representation\n\n\nclass ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n value = InputValueField()\n is_dynamic = serializers.BooleanField(read_only=True)\n\n\nclass ColumnSerializer(SimpleColumnSerializer):\n class Meta(SimpleColumnSerializer.Meta):\n fields = SimpleColumnSerializer.Meta.fields + (\n 'nullable',\n 'primary_key',\n 'source_column',\n 'copy_source_data',\n 'copy_source_constraints',\n 'valid_target_types',\n 'default',\n 'has_dependents',\n )\n model_fields = (DISPLAY_OPTIONS_KEY,)\n\n name = serializers.CharField(required=False, allow_blank=True)\n\n # From scratch fields\n type = serializers.CharField(required=False)\n nullable = serializers.BooleanField(default=True)\n primary_key = serializers.BooleanField(default=False)\n default = ColumnDefaultSerializer(\n source='column_default_dict', required=False, allow_null=True, default=None\n )\n\n # From duplication fields\n source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)\n copy_source_data = serializers.BooleanField(default=True, write_only=True)\n copy_source_constraints = serializers.BooleanField(default=True, write_only=True)\n\n # Read only fields\n valid_target_types = SerializerMethodField(method_name='get_valid_target_types', read_only=True)\n\n def validate(self, data):\n data = super().validate(data)\n # Reevaluate column display options based on the new column type.\n if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:\n if self.instance:\n db_type = getattr(self.instance, 'db_type', None)\n # Invalidate display_options if type has been changed\n if db_type is not None:\n if str(db_type.id) != data[TYPE_KEY]:\n data[DISPLAY_OPTIONS_KEY] = None\n else:\n data[DISPLAY_OPTIONS_KEY] = None\n if not self.partial:\n from_scratch_required_fields = [TYPE_KEY]\n from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']\n from_dupe_required_fields = ['source_column']\n from_dupe_specific_fields = ['source_column', 'copy_source_data',\n 'copy_source_constraints']\n\n # Note that we run validation on self.initial_data, as `data` has defaults\n # filled in for fields that weren't specified by the request\n from_scratch_required_all = all([\n f in self.initial_data for f in from_scratch_required_fields\n ])\n from_scratch_specific_in = [\n f for f in from_scratch_specific_fields if f in self.initial_data\n ]\n from_dupe_required_all = all([\n f in self.initial_data for f in from_dupe_required_fields\n ])\n from_dupe_specific_in = [\n f for f in from_dupe_specific_fields if f in self.initial_data\n ]\n\n if len(from_dupe_specific_in) and len(from_scratch_specific_in):\n raise ValidationError(\n f'{from_scratch_specific_in} cannot be passed in if '\n f'{from_dupe_specific_in} has also been passed in.'\n )\n elif not from_dupe_required_all and not from_scratch_required_all:\n # We default to from scratch required fields if no fields are passed\n if len(from_dupe_specific_in) and not len(from_scratch_specific_in):\n required_fields = from_dupe_required_fields\n else:\n required_fields = from_scratch_required_fields\n raise ValidationError({\n f: ['This field is required.']\n for f in required_fields\n if f not in self.initial_data\n 
})\n return data\n\n @property\n def validated_model_fields(self):\n return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}\n\n def get_valid_target_types(self, column):\n valid_target_types = column.valid_target_types\n if valid_target_types:\n valid_target_type_ids = tuple(\n db_type.id for db_type in valid_target_types\n )\n return valid_target_type_ids\n", "path": "mathesar/api/serializers/columns.py"}]}
| 2,967 | 441 |
gh_patches_debug_7863 | rasdani/github-patches | git_diff | facebookresearch__hydra-1363 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Nevergrad-Plugin] Add support for Python 3.9
Python 3.9 support is pending on the scikit-learn 0.24.0 release. Relevant comment: scikit-learn/scikit-learn#18621 (comment)
Related to #1062
</issue>
<code>
[start of plugins/hydra_nevergrad_sweeper/setup.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 from setuptools import find_namespace_packages, setup
4
5 with open("README.md", "r") as fh:
6 LONG_DESC = fh.read()
7 setup(
8 name="hydra-nevergrad-sweeper",
9 version="1.1.0rc1",
10 author="Jeremy Rapin, Omry Yadan, Jieru Hu",
11 author_email="[email protected], [email protected], [email protected]",
12 description="Hydra Nevergrad Sweeper plugin",
13 long_description=LONG_DESC,
14 long_description_content_type="text/markdown",
15 url="https://github.com/facebookresearch/hydra/",
16 packages=find_namespace_packages(include=["hydra_plugins.*"]),
17 classifiers=[
18 "License :: OSI Approved :: MIT License",
19 "Programming Language :: Python :: 3.6",
20 "Programming Language :: Python :: 3.7",
21 "Programming Language :: Python :: 3.8",
22 # "Programming Language :: Python :: 3.9",
23 "Operating System :: OS Independent",
24 "Development Status :: 4 - Beta",
25 ],
26 install_requires=["hydra-core>=1.0.0", "nevergrad>=0.4.1.post4"],
27 include_package_data=True,
28 )
29
[end of plugins/hydra_nevergrad_sweeper/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py
--- a/plugins/hydra_nevergrad_sweeper/setup.py
+++ b/plugins/hydra_nevergrad_sweeper/setup.py
@@ -19,7 +19,7 @@
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
- # "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.9",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
],
|
{"golden_diff": "diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py\n--- a/plugins/hydra_nevergrad_sweeper/setup.py\n+++ b/plugins/hydra_nevergrad_sweeper/setup.py\n@@ -19,7 +19,7 @@\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n- # \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n", "issue": "[Nevergrad-Plugin] Add support for Python 3.9\nPython 3.9 support pending on scikit 2.4.0 release. Relevant comment: scikit-learn/scikit-learn#18621 (comment)\r\n\r\nRelated to #1062\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom setuptools import find_namespace_packages, setup\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n name=\"hydra-nevergrad-sweeper\",\n version=\"1.1.0rc1\",\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n # \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\"hydra-core>=1.0.0\", \"nevergrad>=0.4.1.post4\"],\n include_package_data=True,\n )\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}]}
| 946 | 155 |
gh_patches_debug_13339 | rasdani/github-patches | git_diff | pypi__warehouse-1491 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[requires.io] dependency update on master branch
</issue>
<code>
[start of warehouse/celery.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import celery.backends
14
15 # We need to trick Celery into supporting rediss:// URLs which is how redis-py
16 # signals that you should use Redis with TLS.
17 celery.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
18
19 from celery import Celery, Task
20 from celery.backends.redis import RedisBackend as _RedisBackend
21 from celery.signals import celeryd_init
22 from pyramid import scripting
23 from pyramid.threadlocal import get_current_request
24 from raven.contrib.celery import register_signal, register_logger_signal
25
26 from warehouse.config import Environment, configure
27
28
29 @celeryd_init.connect
30 def _configure_celery(*args, **kwargs):
31 config = configure()
32 register_logger_signal(config.registry["raven.client"])
33 register_signal(config.registry["raven.client"])
34
35
36 class TLSRedisBackend(_RedisBackend):
37
38 def _params_from_url(self, url, defaults):
39 params = super()._params_from_url(url, defaults)
40 params.update({"connection_class": self.redis.SSLConnection})
41 return params
42
43
44 class WarehouseTask(Task):
45
46 abstract = True
47
48 def __call__(self, *args, **kwargs):
49 registry = self.app.pyramid_config.registry
50 pyramid_env = scripting.prepare(registry=registry)
51
52 try:
53 return super().__call__(pyramid_env["request"], *args, **kwargs)
54 finally:
55 pyramid_env["closer"]()
56
57 def apply_async(self, *args, **kwargs):
58 # The API design of Celery makes this threadlocal pretty impossible to
59 # avoid :(
60 request = get_current_request()
61
62 # If for whatever reason we were unable to get a request we'll just
63 # skip this and call the original method to send this immediately.
64 if request is None or not hasattr(request, "tm"):
65 return super().apply_async(*args, **kwargs)
66
67 # This will break things that expect to get an AsyncResult because
68 # we're no longer going to be returning an async result from this when
69 # called from within a request, response cycle. Ideally we shouldn't be
70 # waiting for responses in a request/response cycle anyways though.
71 request.tm.get().addAfterCommitHook(
72 self._after_commit_hook,
73 args=args,
74 kws=kwargs,
75 )
76
77 def _after_commit_hook(self, success, *args, **kwargs):
78 if success:
79 super().apply_async(*args, **kwargs)
80
81
82 app = Celery("warehouse")
83 app.Task = WarehouseTask
84
85
86 task = app.task
87
88
89 def includeme(config):
90 s = config.registry.settings
91 app.pyramid_config = config
92 app.conf.update(
93 BROKER_URL=s["celery.broker_url"],
94 BROKER_USE_SSL=s["warehouse.env"] == Environment.production,
95 CELERY_DISABLE_RATE_LIMITS=True,
96 CELERY_RESULT_BACKEND=s["celery.result_url"],
97 CELERY_RESULT_SERIALIZER="json",
98 CELERY_TASK_SERIALIZER="json",
99 CELERY_ACCEPT_CONTENT=["json", "msgpack"],
100 CELERY_MESSAGE_COMPRESSION="gzip",
101 CELERY_QUEUE_HA_POLICY="all",
102 )
103
[end of warehouse/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/celery.py b/warehouse/celery.py
--- a/warehouse/celery.py
+++ b/warehouse/celery.py
@@ -10,11 +10,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import celery.backends
+import celery.app.backends
# We need to trick Celery into supporting rediss:// URLs which is how redis-py
# signals that you should use Redis with TLS.
-celery.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
+celery.app.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
from celery import Celery, Task
from celery.backends.redis import RedisBackend as _RedisBackend
|
{"golden_diff": "diff --git a/warehouse/celery.py b/warehouse/celery.py\n--- a/warehouse/celery.py\n+++ b/warehouse/celery.py\n@@ -10,11 +10,11 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import celery.backends\n+import celery.app.backends\n \n # We need to trick Celery into supporting rediss:// URLs which is how redis-py\n # signals that you should use Redis with TLS.\n-celery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n+celery.app.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n \n from celery import Celery, Task\n from celery.backends.redis import RedisBackend as _RedisBackend\n", "issue": "[requires.io] dependency update on master branch\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport celery.backends\n\n# We need to trick Celery into supporting rediss:// URLs which is how redis-py\n# signals that you should use Redis with TLS.\ncelery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n\nfrom celery import Celery, Task\nfrom celery.backends.redis import RedisBackend as _RedisBackend\nfrom celery.signals import celeryd_init\nfrom pyramid import scripting\nfrom pyramid.threadlocal import get_current_request\nfrom raven.contrib.celery import register_signal, register_logger_signal\n\nfrom warehouse.config import Environment, configure\n\n\n@celeryd_init.connect\ndef _configure_celery(*args, **kwargs):\n config = configure()\n register_logger_signal(config.registry[\"raven.client\"])\n register_signal(config.registry[\"raven.client\"])\n\n\nclass TLSRedisBackend(_RedisBackend):\n\n def _params_from_url(self, url, defaults):\n params = super()._params_from_url(url, defaults)\n params.update({\"connection_class\": self.redis.SSLConnection})\n return params\n\n\nclass WarehouseTask(Task):\n\n abstract = True\n\n def __call__(self, *args, **kwargs):\n registry = self.app.pyramid_config.registry\n pyramid_env = scripting.prepare(registry=registry)\n\n try:\n return super().__call__(pyramid_env[\"request\"], *args, **kwargs)\n finally:\n pyramid_env[\"closer\"]()\n\n def apply_async(self, *args, **kwargs):\n # The API design of Celery makes this threadlocal pretty impossible to\n # avoid :(\n request = get_current_request()\n\n # If for whatever reason we were unable to get a request we'll just\n # skip this and call the original method to send this immediately.\n if request is None or not hasattr(request, \"tm\"):\n return super().apply_async(*args, **kwargs)\n\n # This will break things that expect to get an AsyncResult because\n # we're no longer going to be returning an async result from this when\n # called from within a request, response cycle. 
Ideally we shouldn't be\n # waiting for responses in a request/response cycle anyways though.\n request.tm.get().addAfterCommitHook(\n self._after_commit_hook,\n args=args,\n kws=kwargs,\n )\n\n def _after_commit_hook(self, success, *args, **kwargs):\n if success:\n super().apply_async(*args, **kwargs)\n\n\napp = Celery(\"warehouse\")\napp.Task = WarehouseTask\n\n\ntask = app.task\n\n\ndef includeme(config):\n s = config.registry.settings\n app.pyramid_config = config\n app.conf.update(\n BROKER_URL=s[\"celery.broker_url\"],\n BROKER_USE_SSL=s[\"warehouse.env\"] == Environment.production,\n CELERY_DISABLE_RATE_LIMITS=True,\n CELERY_RESULT_BACKEND=s[\"celery.result_url\"],\n CELERY_RESULT_SERIALIZER=\"json\",\n CELERY_TASK_SERIALIZER=\"json\",\n CELERY_ACCEPT_CONTENT=[\"json\", \"msgpack\"],\n CELERY_MESSAGE_COMPRESSION=\"gzip\",\n CELERY_QUEUE_HA_POLICY=\"all\",\n )\n", "path": "warehouse/celery.py"}]}
| 1,545 | 188 |
gh_patches_debug_21400 | rasdani/github-patches | git_diff | sktime__sktime-1461 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[ENH] Imputer for multivariate timeseries
**Is your feature request related to a problem? Please describe.**
The Imputer transformation (sktime.transformations.series.impute.Imputer) works only with univariate time series. To avoid having to laboriously reshape the data beforehand, a multivariate version of Imputer would help. sktime.transformations.series.compose -> ColumnwiseTransformer could work with the Imputer. Is it planned to provide a multivariate imputer, or should the ColumnwiseTransformer always be applied?
**Describe the solution you'd like**
A check of the input data's dimensionality could be added up front so that only one Imputer version is needed, for example:
```
from sktime.transformations.base import _SeriesToSeriesTransformer
from sktime.transformations.series.compose import ColumnwiseTransformer
from sktime.transformations.series.impute import Imputer
__author__ = ["Martin Walter"]
__all__ = ["ImputerMultivariate"]
class ImputerMultivariate(_SeriesToSeriesTransformer):
"""Missing value imputation of multivariate timeseries.
The Imputer transforms input series by replacing missing values according
to an imputation strategy specified by `method`.
Parameters
----------
method : str, default="drift"
Method to fill the missing values.
* "drift" : drift/trend values by sktime.PolynomialTrendForecaster()
* "linear" : linear interpolation, by pd.Series.interpolate()
* "nearest" : use nearest value, by pd.Series.interpolate()
* "constant" : same constant value (given in arg value) for all NaN
* "mean" : pd.Series.mean()
* "median" : pd.Series.median()
* "backfill" ot "bfill" : adapted from pd.Series.fillna()
* "pad" or "ffill" : adapted from pd.Series.fillna()
* "random" : random values between pd.Series.min() and .max()
* "forecaster" : use an sktime Forecaster, given in arg forecaster
missing_values : int/float/str, default=None
The placeholder for the missing values. All occurrences of
missing_values will be imputed. If None then np.nan is used.
value : int/float, default=None
Value to use to fill missing values when method="constant".
forecaster : Any Forecaster based on sktime.BaseForecaster, default=None
Use a given Forecaster to impute by insample predictions when
method="forecaster". Before fitting, missing data is imputed with
method="ffill" or "bfill" as heuristic.
random_state : int/float/str, optional
Value to set random.seed() if method="random", default None
Examples
--------
>>> from sktime.transformations.series.impute import Imputer
>>> from sktime.datasets import load_airline
>>> y = load_airline()
>>> transformer = Imputer(method="drift")
>>> y_hat = transformer.fit_transform(y)
"""
_tags = {
"fit-in-transform": True,
"handles-missing-data": True,
"skip-inverse-transform": True,
}
def __init__(
self,
method="drift",
random_state=None,
value=None,
forecaster=None,
missing_values=None):
self.transformer = ColumnwiseTransformer(
Imputer(
method=method,
random_state=random_state,
value=value,
forecaster=forecaster,
missing_values=missing_values,
)
)
super(ImputerMultivariate, self).__init__()
def fit(self, X, y=None):
self._is_fitted = True
self.transformer.fit(X, y)
return self
def transform(self, X, y=None):
X = self.transformer.transform(X, y)
return X
```
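For reference, here is a minimal usage sketch of the column-wise workaround mentioned above, without defining a new class. It assumes the `ColumnwiseTransformer` and `Imputer` APIs shown in this issue (including the usual `fit_transform` interface) and uses a small, hypothetical DataFrame:

```
import numpy as np
import pandas as pd

from sktime.transformations.series.compose import ColumnwiseTransformer
from sktime.transformations.series.impute import Imputer

# hypothetical multivariate series with missing values
X = pd.DataFrame(
    {
        "a": [1.0, np.nan, 3.0, 4.0],
        "b": [10.0, 20.0, np.nan, 40.0],
    }
)

# apply the univariate Imputer to every column separately
transformer = ColumnwiseTransformer(Imputer(method="mean"))
X_imputed = transformer.fit_transform(X)
```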
</issue>
<code>
[start of sktime/transformations/series/impute.py]
1 #!/usr/bin/env python3 -u
2 # -*- coding: utf-8 -*-
3 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
4 """Utilities to impute series with missing values."""
5
6 __author__ = ["Martin Walter"]
7 __all__ = ["Imputer"]
8
9 from sktime.transformations.base import _SeriesToSeriesTransformer
10 from sktime.utils.validation.series import check_series
11 from sktime.forecasting.trend import PolynomialTrendForecaster
12 from sklearn.utils import check_random_state
13 from sktime.forecasting.base import ForecastingHorizon
14 from sklearn.base import clone
15
16 import numpy as np
17 import pandas as pd
18
19
20 class Imputer(_SeriesToSeriesTransformer):
21 """Missing value imputation.
22
23 The Imputer transforms input series by replacing missing values according
24 to an imputation strategy specified by `method`.
25
26 Parameters
27 ----------
28 method : str, default="drift"
 29         Method to fill the missing values.
30
31 * "drift" : drift/trend values by sktime.PolynomialTrendForecaster()
32 * "linear" : linear interpolation, by pd.Series.interpolate()
33 * "nearest" : use nearest value, by pd.Series.interpolate()
34 * "constant" : same constant value (given in arg value) for all NaN
35 * "mean" : pd.Series.mean()
36 * "median" : pd.Series.median()
 37         * "backfill" or "bfill" : adapted from pd.Series.fillna()
38 * "pad" or "ffill" : adapted from pd.Series.fillna()
39 * "random" : random values between pd.Series.min() and .max()
40 * "forecaster" : use an sktime Forecaster, given in arg forecaster
41
42 missing_values : int/float/str, default=None
43 The placeholder for the missing values. All occurrences of
44 missing_values will be imputed. If None then np.nan is used.
45 value : int/float, default=None
46 Value to use to fill missing values when method="constant".
47 forecaster : Any Forecaster based on sktime.BaseForecaster, default=None
48 Use a given Forecaster to impute by insample predictions when
49 method="forecaster". Before fitting, missing data is imputed with
50 method="ffill" or "bfill" as heuristic.
51 random_state : int/float/str, optional
52 Value to set random.seed() if method="random", default None
53
54 Examples
55 --------
56 >>> from sktime.transformations.series.impute import Imputer
57 >>> from sktime.datasets import load_airline
58 >>> y = load_airline()
59 >>> transformer = Imputer(method="drift")
60 >>> y_hat = transformer.fit_transform(y)
61 """
62
63 _tags = {
64 "fit-in-transform": True,
65 "handles-missing-data": True,
66 "skip-inverse-transform": True,
67 }
68
69 def __init__(
70 self,
71 method="drift",
72 random_state=None,
73 value=None,
74 forecaster=None,
75 missing_values=None,
76 ):
77
78 self.method = method
79 self.missing_values = missing_values
80 self.value = value
81 self.forecaster = forecaster
82 self.random_state = random_state
83 super(Imputer, self).__init__()
84
85 def transform(self, Z, X=None):
86 """Transform data.
87
88 Returns a transformed version of Z.
89
90 Parameters
91 ----------
92 Z : pd.Series, pd.DataFrame
93
94 Returns
95 -------
96 Z : pd.Series, pd.DataFrame
97 Transformed time series(es).
98 """
99 self.check_is_fitted()
100 self._check_method()
101 Z = check_series(Z)
102 Z = Z.copy()
103
104 # replace missing_values with np.nan
105 if self.missing_values:
106 Z = Z.replace(to_replace=self.missing_values, value=np.nan)
107
108 if not _has_missing_values(Z):
109 return Z
110
111 elif self.method == "random":
112 if isinstance(Z, pd.DataFrame):
113 for col in Z:
114 Z[col] = Z[col].apply(
115 lambda i: self._get_random(Z[col]) if np.isnan(i) else i
116 )
117 else:
118 Z = Z.apply(lambda i: self._get_random(Z) if np.isnan(i) else i)
119 elif self.method == "constant":
120 Z = Z.fillna(value=self.value)
121 elif self.method in ["backfill", "bfill", "pad", "ffill"]:
122 Z = Z.fillna(method=self.method)
123 elif self.method == "drift":
124 forecaster = PolynomialTrendForecaster(degree=1)
125 Z = _impute_with_forecaster(forecaster, Z)
126 elif self.method == "forecaster":
127 forecaster = clone(self.forecaster)
128 Z = _impute_with_forecaster(forecaster, Z)
129 elif self.method == "mean":
130 Z = Z.fillna(value=Z.mean())
131 elif self.method == "median":
132 Z = Z.fillna(value=Z.median())
133 elif self.method in ["nearest", "linear"]:
134 Z = Z.interpolate(method=self.method)
135 else:
136 raise ValueError(f"`method`: {self.method} not available.")
137 # fill first/last elements of series,
138 # as some methods (e.g. "linear") cant impute those
139 Z = Z.fillna(method="ffill").fillna(method="backfill")
140 return Z
141
142 def _check_method(self):
143 if (
144 self.value is not None
145 and self.method != "constant"
146 or self.method == "constant"
147 and self.value is None
148 ):
149 raise ValueError(
150 """Imputing with a value can only be
151 used if method="constant" and if parameter "value" is not None"""
152 )
153 elif (
154 self.forecaster is not None
155 and self.method != "forecaster"
156 or self.method == "forecaster"
157 and self.forecaster is None
158 ):
159 raise ValueError(
160 """Imputing with a forecaster can only be used if
161 method=\"forecaster\" and if arg forecaster is not None"""
162 )
163 else:
164 pass
165
166 def _get_random(self, Z):
167 """Create a random int or float value.
168
169 :param Z: Series
170 :type Z: pd.Series
171 :return: Random int or float between min and max of Z
172 :rtype: int/float
173 """
174 rng = check_random_state(self.random_state)
175 # check if series contains only int or int-like values (e.g. 3.0)
176 if (Z.dropna() % 1 == 0).all():
177 return rng.randint(Z.min(), Z.max())
178 else:
179 return rng.uniform(Z.min(), Z.max())
180
181
182 def _impute_with_forecaster(forecaster, Z):
183 """Use a given forecaster for imputation by in-sample predictions.
184
185 Parameters
186 ----------
187 forecaster: Forecaster
188 Forecaster to use for imputation
189 Z : pd.Series or pd.DataFrame
190 Series to impute.
191
192 Returns
193 -------
194 zt : pd.Series or pd.DataFrame
195 Series with imputed values.
196 """
197 if isinstance(Z, pd.Series):
198 series = [Z]
199 elif isinstance(Z, pd.DataFrame):
200 series = [Z[column] for column in Z]
201
202 for z in series:
203 # define fh based on index of missing values
204 na_index = z.index[z.isna()]
205 fh = ForecastingHorizon(values=na_index, is_relative=False)
206
207 # fill NaN before fitting with ffill and backfill (heuristic)
208 forecaster.fit(y=z.fillna(method="ffill").fillna(method="backfill"), fh=fh)
209
210 # replace missing values with predicted values
211 z[na_index] = forecaster.predict()
212 return Z
213
214
215 def _has_missing_values(Z):
216 return Z.isnull().to_numpy().any()
217
[end of sktime/transformations/series/impute.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sktime/transformations/series/impute.py b/sktime/transformations/series/impute.py
--- a/sktime/transformations/series/impute.py
+++ b/sktime/transformations/series/impute.py
@@ -6,15 +6,17 @@
__author__ = ["Martin Walter"]
__all__ = ["Imputer"]
+
+import numpy as np
+import pandas as pd
+
+from sklearn.base import clone
+from sklearn.utils import check_random_state
+
from sktime.transformations.base import _SeriesToSeriesTransformer
from sktime.utils.validation.series import check_series
from sktime.forecasting.trend import PolynomialTrendForecaster
-from sklearn.utils import check_random_state
from sktime.forecasting.base import ForecastingHorizon
-from sklearn.base import clone
-
-import numpy as np
-import pandas as pd
class Imputer(_SeriesToSeriesTransformer):
@@ -64,6 +66,7 @@
"fit-in-transform": True,
"handles-missing-data": True,
"skip-inverse-transform": True,
+ "univariate-only": False,
}
def __init__(
|
{"golden_diff": "diff --git a/sktime/transformations/series/impute.py b/sktime/transformations/series/impute.py\n--- a/sktime/transformations/series/impute.py\n+++ b/sktime/transformations/series/impute.py\n@@ -6,15 +6,17 @@\n __author__ = [\"Martin Walter\"]\n __all__ = [\"Imputer\"]\n \n+\n+import numpy as np\n+import pandas as pd\n+\n+from sklearn.base import clone\n+from sklearn.utils import check_random_state\n+\n from sktime.transformations.base import _SeriesToSeriesTransformer\n from sktime.utils.validation.series import check_series\n from sktime.forecasting.trend import PolynomialTrendForecaster\n-from sklearn.utils import check_random_state\n from sktime.forecasting.base import ForecastingHorizon\n-from sklearn.base import clone\n-\n-import numpy as np\n-import pandas as pd\n \n \n class Imputer(_SeriesToSeriesTransformer):\n@@ -64,6 +66,7 @@\n \"fit-in-transform\": True,\n \"handles-missing-data\": True,\n \"skip-inverse-transform\": True,\n+ \"univariate-only\": False,\n }\n \n def __init__(\n", "issue": "[ENH] Imputer for multivariate timeseries\n**Is your feature request related to a problem? Please describe.**\r\n\r\nImputer Transformation (sktime.transformations.series.impute.Imputer) works only with univariate time series. So that one does not have to manipulate the data laboriously before, a multivariate version of Imputer would help. sktime.transformations.series.compose -> ColumnwiseTransformer could work with the Imputer. Is it planned to provide a multivariate imputer or should the ColumnwiseTransformer always be applied?\r\n\r\n**Describe the solution you'd like**\r\nA query of the dimension of the input data could be prefixed so that only one Imputer version is needed.\r\n\r\n```\r\nfrom sktime.transformations.base import _SeriesToSeriesTransformer\r\nfrom sktime.transformations.series.compose import ColumnwiseTransformer\r\nfrom sktime.transformations.series.impute import Imputer\r\n\r\n__author__ = [\"Martin Walter\"]\r\n__all__ = [\"ImputerMultivariate\"]\r\n\r\nclass ImputerMultivariate(_SeriesToSeriesTransformer):\r\n \"\"\"Missing value imputation of multivariate timeseries.\r\n\r\n The Imputer transforms input series by replacing missing values according\r\n to an imputation strategy specified by `method`.\r\n\r\n Parameters\r\n ----------\r\n method : str, default=\"drift\"\r\n Method to fill the missing values values.\r\n\r\n * \"drift\" : drift/trend values by sktime.PolynomialTrendForecaster()\r\n * \"linear\" : linear interpolation, by pd.Series.interpolate()\r\n * \"nearest\" : use nearest value, by pd.Series.interpolate()\r\n * \"constant\" : same constant value (given in arg value) for all NaN\r\n * \"mean\" : pd.Series.mean()\r\n * \"median\" : pd.Series.median()\r\n * \"backfill\" ot \"bfill\" : adapted from pd.Series.fillna()\r\n * \"pad\" or \"ffill\" : adapted from pd.Series.fillna()\r\n * \"random\" : random values between pd.Series.min() and .max()\r\n * \"forecaster\" : use an sktime Forecaster, given in arg forecaster\r\n\r\n missing_values : int/float/str, default=None\r\n The placeholder for the missing values. All occurrences of\r\n missing_values will be imputed. If None then np.nan is used.\r\n value : int/float, default=None\r\n Value to use to fill missing values when method=\"constant\".\r\n forecaster : Any Forecaster based on sktime.BaseForecaster, default=None\r\n Use a given Forecaster to impute by insample predictions when\r\n method=\"forecaster\". 
Before fitting, missing data is imputed with\r\n method=\"ffill\" or \"bfill\" as heuristic.\r\n random_state : int/float/str, optional\r\n Value to set random.seed() if method=\"random\", default None\r\n\r\n Examples\r\n --------\r\n >>> from sktime.transformations.series.impute import Imputer\r\n >>> from sktime.datasets import load_airline\r\n >>> y = load_airline()\r\n >>> transformer = Imputer(method=\"drift\")\r\n >>> y_hat = transformer.fit_transform(y)\r\n \"\"\"\r\n _tags = {\r\n \"fit-in-transform\": True,\r\n \"handles-missing-data\": True,\r\n \"skip-inverse-transform\": True,\r\n }\r\n def __init__(\r\n self, \r\n method=\"drift\", \r\n random_state=None, \r\n value=None,\r\n forecaster=None,\r\n missing_values=None):\r\n\r\n self.transformer = ColumnwiseTransformer(\r\n Imputer(\r\n method=method,\r\n random_state=random_state,\r\n value=value,\r\n forecaster=forecaster,\r\n missing_values=missing_values,\r\n )\r\n )\r\n super(ImputerMultivariate, self).__init__()\r\n \r\n def fit(self, X, y=None):\r\n self._is_fitted = True\r\n self.transformer.fit(X, y)\r\n return self\r\n\r\n def transform(self, X, y=None):\r\n X = self.transformer.transform(X, y)\r\n return X\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Utilities to impute series with missing values.\"\"\"\n\n__author__ = [\"Martin Walter\"]\n__all__ = [\"Imputer\"]\n\nfrom sktime.transformations.base import _SeriesToSeriesTransformer\nfrom sktime.utils.validation.series import check_series\nfrom sktime.forecasting.trend import PolynomialTrendForecaster\nfrom sklearn.utils import check_random_state\nfrom sktime.forecasting.base import ForecastingHorizon\nfrom sklearn.base import clone\n\nimport numpy as np\nimport pandas as pd\n\n\nclass Imputer(_SeriesToSeriesTransformer):\n \"\"\"Missing value imputation.\n\n The Imputer transforms input series by replacing missing values according\n to an imputation strategy specified by `method`.\n\n Parameters\n ----------\n method : str, default=\"drift\"\n Method to fill the missing values values.\n\n * \"drift\" : drift/trend values by sktime.PolynomialTrendForecaster()\n * \"linear\" : linear interpolation, by pd.Series.interpolate()\n * \"nearest\" : use nearest value, by pd.Series.interpolate()\n * \"constant\" : same constant value (given in arg value) for all NaN\n * \"mean\" : pd.Series.mean()\n * \"median\" : pd.Series.median()\n * \"backfill\" ot \"bfill\" : adapted from pd.Series.fillna()\n * \"pad\" or \"ffill\" : adapted from pd.Series.fillna()\n * \"random\" : random values between pd.Series.min() and .max()\n * \"forecaster\" : use an sktime Forecaster, given in arg forecaster\n\n missing_values : int/float/str, default=None\n The placeholder for the missing values. All occurrences of\n missing_values will be imputed. If None then np.nan is used.\n value : int/float, default=None\n Value to use to fill missing values when method=\"constant\".\n forecaster : Any Forecaster based on sktime.BaseForecaster, default=None\n Use a given Forecaster to impute by insample predictions when\n method=\"forecaster\". 
Before fitting, missing data is imputed with\n method=\"ffill\" or \"bfill\" as heuristic.\n random_state : int/float/str, optional\n Value to set random.seed() if method=\"random\", default None\n\n Examples\n --------\n >>> from sktime.transformations.series.impute import Imputer\n >>> from sktime.datasets import load_airline\n >>> y = load_airline()\n >>> transformer = Imputer(method=\"drift\")\n >>> y_hat = transformer.fit_transform(y)\n \"\"\"\n\n _tags = {\n \"fit-in-transform\": True,\n \"handles-missing-data\": True,\n \"skip-inverse-transform\": True,\n }\n\n def __init__(\n self,\n method=\"drift\",\n random_state=None,\n value=None,\n forecaster=None,\n missing_values=None,\n ):\n\n self.method = method\n self.missing_values = missing_values\n self.value = value\n self.forecaster = forecaster\n self.random_state = random_state\n super(Imputer, self).__init__()\n\n def transform(self, Z, X=None):\n \"\"\"Transform data.\n\n Returns a transformed version of Z.\n\n Parameters\n ----------\n Z : pd.Series, pd.DataFrame\n\n Returns\n -------\n Z : pd.Series, pd.DataFrame\n Transformed time series(es).\n \"\"\"\n self.check_is_fitted()\n self._check_method()\n Z = check_series(Z)\n Z = Z.copy()\n\n # replace missing_values with np.nan\n if self.missing_values:\n Z = Z.replace(to_replace=self.missing_values, value=np.nan)\n\n if not _has_missing_values(Z):\n return Z\n\n elif self.method == \"random\":\n if isinstance(Z, pd.DataFrame):\n for col in Z:\n Z[col] = Z[col].apply(\n lambda i: self._get_random(Z[col]) if np.isnan(i) else i\n )\n else:\n Z = Z.apply(lambda i: self._get_random(Z) if np.isnan(i) else i)\n elif self.method == \"constant\":\n Z = Z.fillna(value=self.value)\n elif self.method in [\"backfill\", \"bfill\", \"pad\", \"ffill\"]:\n Z = Z.fillna(method=self.method)\n elif self.method == \"drift\":\n forecaster = PolynomialTrendForecaster(degree=1)\n Z = _impute_with_forecaster(forecaster, Z)\n elif self.method == \"forecaster\":\n forecaster = clone(self.forecaster)\n Z = _impute_with_forecaster(forecaster, Z)\n elif self.method == \"mean\":\n Z = Z.fillna(value=Z.mean())\n elif self.method == \"median\":\n Z = Z.fillna(value=Z.median())\n elif self.method in [\"nearest\", \"linear\"]:\n Z = Z.interpolate(method=self.method)\n else:\n raise ValueError(f\"`method`: {self.method} not available.\")\n # fill first/last elements of series,\n # as some methods (e.g. \"linear\") cant impute those\n Z = Z.fillna(method=\"ffill\").fillna(method=\"backfill\")\n return Z\n\n def _check_method(self):\n if (\n self.value is not None\n and self.method != \"constant\"\n or self.method == \"constant\"\n and self.value is None\n ):\n raise ValueError(\n \"\"\"Imputing with a value can only be\n used if method=\"constant\" and if parameter \"value\" is not None\"\"\"\n )\n elif (\n self.forecaster is not None\n and self.method != \"forecaster\"\n or self.method == \"forecaster\"\n and self.forecaster is None\n ):\n raise ValueError(\n \"\"\"Imputing with a forecaster can only be used if\n method=\\\"forecaster\\\" and if arg forecaster is not None\"\"\"\n )\n else:\n pass\n\n def _get_random(self, Z):\n \"\"\"Create a random int or float value.\n\n :param Z: Series\n :type Z: pd.Series\n :return: Random int or float between min and max of Z\n :rtype: int/float\n \"\"\"\n rng = check_random_state(self.random_state)\n # check if series contains only int or int-like values (e.g. 
3.0)\n if (Z.dropna() % 1 == 0).all():\n return rng.randint(Z.min(), Z.max())\n else:\n return rng.uniform(Z.min(), Z.max())\n\n\ndef _impute_with_forecaster(forecaster, Z):\n \"\"\"Use a given forecaster for imputation by in-sample predictions.\n\n Parameters\n ----------\n forecaster: Forecaster\n Forecaster to use for imputation\n Z : pd.Series or pd.DataFrame\n Series to impute.\n\n Returns\n -------\n zt : pd.Series or pd.DataFrame\n Series with imputed values.\n \"\"\"\n if isinstance(Z, pd.Series):\n series = [Z]\n elif isinstance(Z, pd.DataFrame):\n series = [Z[column] for column in Z]\n\n for z in series:\n # define fh based on index of missing values\n na_index = z.index[z.isna()]\n fh = ForecastingHorizon(values=na_index, is_relative=False)\n\n # fill NaN before fitting with ffill and backfill (heuristic)\n forecaster.fit(y=z.fillna(method=\"ffill\").fillna(method=\"backfill\"), fh=fh)\n\n # replace missing values with predicted values\n z[na_index] = forecaster.predict()\n return Z\n\n\ndef _has_missing_values(Z):\n return Z.isnull().to_numpy().any()\n", "path": "sktime/transformations/series/impute.py"}]}
| 3,635 | 254 |
gh_patches_debug_3705
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-323
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make sure everything works with Django-Rest-Framework
We should use django-rest-framework's `request.data` instead of trying to extract a structured body ourselves
</issue>
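One way to read this: when django-rest-framework wraps a request, its parsers have already produced a structured body as `request.data`, so the integration can simply prefer that attribute and keep its own extraction as a fallback. A minimal sketch of that attribute-preference idea, using made-up stand-in request classes rather than the real Django or DRF objects:

```python
class FakePlainRequest:
    """Stand-in for a plain Django request (hypothetical)."""
    POST = {"username": "alice"}


class FakeDRFRequest(FakePlainRequest):
    """Stand-in for a DRF-wrapped request, which exposes the parsed body as `data`."""
    data = {"username": "alice", "scores": [1, 2, 3]}


def parsed_body(request):
    # Prefer what DRF already parsed; fall back to Django's form data.
    data = getattr(request, "data", None)
    if data is not None:
        return data
    return request.POST


print(parsed_body(FakePlainRequest()))  # {'username': 'alice'}
print(parsed_body(FakeDRFRequest()))    # {'username': 'alice', 'scores': [1, 2, 3]}
```

In the integration below, the natural place for this preference is the request extractor, which already knows how to fall back to form data and raw bodies.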
<code>
[start of sentry_sdk/integrations/django/__init__.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import
3
4 import sys
5 import weakref
6
7 from django import VERSION as DJANGO_VERSION # type: ignore
8 from django.db.models.query import QuerySet # type: ignore
9 from django.core import signals # type: ignore
10
11 if False:
12 from typing import Any
13 from typing import Dict
14 from typing import Tuple
15 from typing import Union
16 from sentry_sdk.integrations.wsgi import _ScopedResponse
17 from typing import Callable
18 from django.core.handlers.wsgi import WSGIRequest # type: ignore
19 from django.http.response import HttpResponse # type: ignore
20 from django.http.request import QueryDict # type: ignore
21 from django.utils.datastructures import MultiValueDict # type: ignore
22 from typing import List
23
24
25 try:
26 from django.urls import resolve # type: ignore
27 except ImportError:
28 from django.core.urlresolvers import resolve # type: ignore
29
30 from sentry_sdk import Hub
31 from sentry_sdk.hub import _should_send_default_pii
32 from sentry_sdk.scope import add_global_event_processor
33 from sentry_sdk.utils import (
34 add_global_repr_processor,
35 capture_internal_exceptions,
36 event_from_exception,
37 safe_repr,
38 format_and_strip,
39 transaction_from_function,
40 walk_exception_chain,
41 )
42 from sentry_sdk.integrations import Integration
43 from sentry_sdk.integrations.logging import ignore_logger
44 from sentry_sdk.integrations.wsgi import SentryWsgiMiddleware
45 from sentry_sdk.integrations._wsgi_common import RequestExtractor
46 from sentry_sdk.integrations.django.transactions import LEGACY_RESOLVER
47 from sentry_sdk.integrations.django.templates import get_template_frame_from_exception
48
49
50 if DJANGO_VERSION < (1, 10):
51
52 def is_authenticated(request_user):
53 # type: (Any) -> bool
54 return request_user.is_authenticated()
55
56
57 else:
58
59 def is_authenticated(request_user):
60 # type: (Any) -> bool
61 return request_user.is_authenticated
62
63
64 class DjangoIntegration(Integration):
65 identifier = "django"
66
67 transaction_style = None
68
69 def __init__(self, transaction_style="url"):
70 # type: (str) -> None
71 TRANSACTION_STYLE_VALUES = ("function_name", "url")
72 if transaction_style not in TRANSACTION_STYLE_VALUES:
73 raise ValueError(
74 "Invalid value for transaction_style: %s (must be in %s)"
75 % (transaction_style, TRANSACTION_STYLE_VALUES)
76 )
77 self.transaction_style = transaction_style
78
79 @staticmethod
80 def setup_once():
81 # type: () -> None
82 install_sql_hook()
83 # Patch in our custom middleware.
84
85 # logs an error for every 500
86 ignore_logger("django.server")
87 ignore_logger("django.request")
88
89 from django.core.handlers.wsgi import WSGIHandler
90
91 old_app = WSGIHandler.__call__
92
93 def sentry_patched_wsgi_handler(self, environ, start_response):
94 # type: (Any, Dict[str, str], Callable) -> _ScopedResponse
95 if Hub.current.get_integration(DjangoIntegration) is None:
96 return old_app(self, environ, start_response)
97
98 return SentryWsgiMiddleware(lambda *a, **kw: old_app(self, *a, **kw))(
99 environ, start_response
100 )
101
102 WSGIHandler.__call__ = sentry_patched_wsgi_handler
103
104 # patch get_response, because at that point we have the Django request
105 # object
106 from django.core.handlers.base import BaseHandler # type: ignore
107
108 old_get_response = BaseHandler.get_response
109
110 def sentry_patched_get_response(self, request):
111 # type: (Any, WSGIRequest) -> Union[HttpResponse, BaseException]
112 hub = Hub.current
113 integration = hub.get_integration(DjangoIntegration)
114 if integration is not None:
115 with hub.configure_scope() as scope:
116 scope.add_event_processor(
117 _make_event_processor(weakref.ref(request), integration)
118 )
119 return old_get_response(self, request)
120
121 BaseHandler.get_response = sentry_patched_get_response
122
123 signals.got_request_exception.connect(_got_request_exception)
124
125 @add_global_event_processor
126 def process_django_templates(event, hint):
127 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
128 exc_info = hint.get("exc_info", None)
129
130 if exc_info is None:
131 return event
132
133 exception = event.get("exception", None)
134
135 if exception is None:
136 return event
137
138 values = exception.get("values", None)
139
140 if values is None:
141 return event
142
143 for exception, (_, exc_value, _) in zip(
144 values, walk_exception_chain(exc_info)
145 ):
146 frame = get_template_frame_from_exception(exc_value)
147 if frame is not None:
148 frames = exception.get("stacktrace", {}).get("frames", [])
149
150 for i in reversed(range(len(frames))):
151 f = frames[i]
152 if (
153 f.get("function") in ("parse", "render")
154 and f.get("module") == "django.template.base"
155 ):
156 i += 1
157 break
158 else:
159 i = len(frames)
160
161 frames.insert(i, frame)
162
163 return event
164
165 @add_global_repr_processor
166 def _django_queryset_repr(value, hint):
167 if not isinstance(value, QuerySet) or value._result_cache:
168 return NotImplemented
169
170 # Do not call Hub.get_integration here. It is intentional that
171 # running under a new hub does not suddenly start executing
172 # querysets. This might be surprising to the user but it's likely
173 # less annoying.
174
175 return u"<%s from %s at 0x%x>" % (
176 value.__class__.__name__,
177 value.__module__,
178 id(value),
179 )
180
181
182 def _make_event_processor(weak_request, integration):
183 # type: (Callable[[], WSGIRequest], DjangoIntegration) -> Callable
184 def event_processor(event, hint):
185 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
186 # if the request is gone we are fine not logging the data from
187 # it. This might happen if the processor is pushed away to
188 # another thread.
189 request = weak_request()
190 if request is None:
191 return event
192
193 try:
194 if integration.transaction_style == "function_name":
195 event["transaction"] = transaction_from_function(
196 resolve(request.path).func
197 )
198 elif integration.transaction_style == "url":
199 event["transaction"] = LEGACY_RESOLVER.resolve(request.path)
200 except Exception:
201 pass
202
203 with capture_internal_exceptions():
204 DjangoRequestExtractor(request).extract_into_event(event)
205
206 if _should_send_default_pii():
207 with capture_internal_exceptions():
208 _set_user_info(request, event)
209
210 return event
211
212 return event_processor
213
214
215 def _got_request_exception(request=None, **kwargs):
216 # type: (WSGIRequest, **Any) -> None
217 hub = Hub.current
218 integration = hub.get_integration(DjangoIntegration)
219 if integration is not None:
220 event, hint = event_from_exception(
221 sys.exc_info(),
222 client_options=hub.client.options,
223 mechanism={"type": "django", "handled": False},
224 )
225 hub.capture_event(event, hint=hint)
226
227
228 class DjangoRequestExtractor(RequestExtractor):
229 def env(self):
230 # type: () -> Dict[str, str]
231 return self.request.META
232
233 def cookies(self):
234 # type: () -> Dict[str, str]
235 return self.request.COOKIES
236
237 def raw_data(self):
238 # type: () -> bytes
239 return self.request.body
240
241 def form(self):
242 # type: () -> QueryDict
243 return self.request.POST
244
245 def files(self):
246 # type: () -> MultiValueDict
247 return self.request.FILES
248
249 def size_of_file(self, file):
250 return file.size
251
252
253 def _set_user_info(request, event):
254 # type: (WSGIRequest, Dict[str, Any]) -> None
255 user_info = event.setdefault("user", {})
256
257 user = getattr(request, "user", None)
258
259 if user is None or not is_authenticated(user):
260 return
261
262 try:
263 user_info["id"] = str(user.pk)
264 except Exception:
265 pass
266
267 try:
268 user_info["email"] = user.email
269 except Exception:
270 pass
271
272 try:
273 user_info["username"] = user.get_username()
274 except Exception:
275 pass
276
277
278 class _FormatConverter(object):
279 def __init__(self, param_mapping):
280 # type: (Dict[str, int]) -> None
281
282 self.param_mapping = param_mapping
283 self.params = [] # type: List[Any]
284
285 def __getitem__(self, val):
286 # type: (str) -> str
287 self.params.append(self.param_mapping.get(val))
288 return "%s"
289
290
291 def format_sql(sql, params):
292 # type: (Any, Any) -> Tuple[str, List[str]]
293 rv = []
294
295 if isinstance(params, dict):
296 # convert sql with named parameters to sql with unnamed parameters
297 conv = _FormatConverter(params)
298 if params:
299 sql = sql % conv
300 params = conv.params
301 else:
302 params = ()
303
304 for param in params or ():
305 if param is None:
306 rv.append("NULL")
307 param = safe_repr(param)
308 rv.append(param)
309
310 return sql, rv
311
312
313 def record_sql(sql, params, cursor=None):
314 # type: (Any, Any, Any) -> None
315 hub = Hub.current
316 if hub.get_integration(DjangoIntegration) is None:
317 return
318
319 with capture_internal_exceptions():
320 if cursor and hasattr(cursor, "mogrify"): # psycopg2
321 real_sql = cursor.mogrify(sql, params)
322 with capture_internal_exceptions():
323 if isinstance(real_sql, bytes):
324 real_sql = real_sql.decode(cursor.connection.encoding)
325 else:
326 real_sql, real_params = format_sql(sql, params)
327
328 if real_params:
329 try:
330 real_sql = format_and_strip(real_sql, real_params)
331 except Exception:
332 pass
333 hub.add_breadcrumb(message=real_sql, category="query")
334
335
336 def install_sql_hook():
337 # type: () -> None
338 """If installed this causes Django's queries to be captured."""
339 try:
340 from django.db.backends.utils import CursorWrapper # type: ignore
341 except ImportError:
342 from django.db.backends.util import CursorWrapper # type: ignore
343
344 try:
345 real_execute = CursorWrapper.execute
346 real_executemany = CursorWrapper.executemany
347 except AttributeError:
348 # This won't work on Django versions < 1.6
349 return
350
351 def record_many_sql(sql, param_list, cursor):
352 for params in param_list:
353 record_sql(sql, params, cursor)
354
355 def execute(self, sql, params=None):
356 try:
357 return real_execute(self, sql, params)
358 finally:
359 record_sql(sql, params, self.cursor)
360
361 def executemany(self, sql, param_list):
362 try:
363 return real_executemany(self, sql, param_list)
364 finally:
365 record_many_sql(sql, param_list, self.cursor)
366
367 CursorWrapper.execute = execute
368 CursorWrapper.executemany = executemany
369 ignore_logger("django.db.backends")
370
[end of sentry_sdk/integrations/django/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/django/__init__.py b/sentry_sdk/integrations/django/__init__.py
--- a/sentry_sdk/integrations/django/__init__.py
+++ b/sentry_sdk/integrations/django/__init__.py
@@ -265,6 +265,12 @@
def size_of_file(self, file):
return file.size
+ def parsed_body(self):
+ try:
+ return self.request.data
+ except AttributeError:
+ return RequestExtractor.parsed_body(self)
+
def _set_user_info(request, event):
# type: (WSGIRequest, Dict[str, Any]) -> None
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/django/__init__.py b/sentry_sdk/integrations/django/__init__.py\n--- a/sentry_sdk/integrations/django/__init__.py\n+++ b/sentry_sdk/integrations/django/__init__.py\n@@ -265,6 +265,12 @@\n def size_of_file(self, file):\n return file.size\n \n+ def parsed_body(self):\n+ try:\n+ return self.request.data\n+ except AttributeError:\n+ return RequestExtractor.parsed_body(self)\n+\n \n def _set_user_info(request, event):\n # type: (WSGIRequest, Dict[str, Any]) -> None\n", "issue": "Make sure everything works with Django-Rest-Framework\nWe should django-rest-framework's `request.data` instead of trying to extract a structured body ourselves\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\n\nimport sys\nimport weakref\n\nfrom django import VERSION as DJANGO_VERSION # type: ignore\nfrom django.db.models.query import QuerySet # type: ignore\nfrom django.core import signals # type: ignore\n\nif False:\n from typing import Any\n from typing import Dict\n from typing import Tuple\n from typing import Union\n from sentry_sdk.integrations.wsgi import _ScopedResponse\n from typing import Callable\n from django.core.handlers.wsgi import WSGIRequest # type: ignore\n from django.http.response import HttpResponse # type: ignore\n from django.http.request import QueryDict # type: ignore\n from django.utils.datastructures import MultiValueDict # type: ignore\n from typing import List\n\n\ntry:\n from django.urls import resolve # type: ignore\nexcept ImportError:\n from django.core.urlresolvers import resolve # type: ignore\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.hub import _should_send_default_pii\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import (\n add_global_repr_processor,\n capture_internal_exceptions,\n event_from_exception,\n safe_repr,\n format_and_strip,\n transaction_from_function,\n walk_exception_chain,\n)\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\nfrom sentry_sdk.integrations.wsgi import SentryWsgiMiddleware\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor\nfrom sentry_sdk.integrations.django.transactions import LEGACY_RESOLVER\nfrom sentry_sdk.integrations.django.templates import get_template_frame_from_exception\n\n\nif DJANGO_VERSION < (1, 10):\n\n def is_authenticated(request_user):\n # type: (Any) -> bool\n return request_user.is_authenticated()\n\n\nelse:\n\n def is_authenticated(request_user):\n # type: (Any) -> bool\n return request_user.is_authenticated\n\n\nclass DjangoIntegration(Integration):\n identifier = \"django\"\n\n transaction_style = None\n\n def __init__(self, transaction_style=\"url\"):\n # type: (str) -> None\n TRANSACTION_STYLE_VALUES = (\"function_name\", \"url\")\n if transaction_style not in TRANSACTION_STYLE_VALUES:\n raise ValueError(\n \"Invalid value for transaction_style: %s (must be in %s)\"\n % (transaction_style, TRANSACTION_STYLE_VALUES)\n )\n self.transaction_style = transaction_style\n\n @staticmethod\n def setup_once():\n # type: () -> None\n install_sql_hook()\n # Patch in our custom middleware.\n\n # logs an error for every 500\n ignore_logger(\"django.server\")\n ignore_logger(\"django.request\")\n\n from django.core.handlers.wsgi import WSGIHandler\n\n old_app = WSGIHandler.__call__\n\n def sentry_patched_wsgi_handler(self, environ, start_response):\n # type: (Any, Dict[str, str], Callable) -> _ScopedResponse\n if 
Hub.current.get_integration(DjangoIntegration) is None:\n return old_app(self, environ, start_response)\n\n return SentryWsgiMiddleware(lambda *a, **kw: old_app(self, *a, **kw))(\n environ, start_response\n )\n\n WSGIHandler.__call__ = sentry_patched_wsgi_handler\n\n # patch get_response, because at that point we have the Django request\n # object\n from django.core.handlers.base import BaseHandler # type: ignore\n\n old_get_response = BaseHandler.get_response\n\n def sentry_patched_get_response(self, request):\n # type: (Any, WSGIRequest) -> Union[HttpResponse, BaseException]\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is not None:\n with hub.configure_scope() as scope:\n scope.add_event_processor(\n _make_event_processor(weakref.ref(request), integration)\n )\n return old_get_response(self, request)\n\n BaseHandler.get_response = sentry_patched_get_response\n\n signals.got_request_exception.connect(_got_request_exception)\n\n @add_global_event_processor\n def process_django_templates(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_, exc_value, _) in zip(\n values, walk_exception_chain(exc_info)\n ):\n frame = get_template_frame_from_exception(exc_value)\n if frame is not None:\n frames = exception.get(\"stacktrace\", {}).get(\"frames\", [])\n\n for i in reversed(range(len(frames))):\n f = frames[i]\n if (\n f.get(\"function\") in (\"parse\", \"render\")\n and f.get(\"module\") == \"django.template.base\"\n ):\n i += 1\n break\n else:\n i = len(frames)\n\n frames.insert(i, frame)\n\n return event\n\n @add_global_repr_processor\n def _django_queryset_repr(value, hint):\n if not isinstance(value, QuerySet) or value._result_cache:\n return NotImplemented\n\n # Do not call Hub.get_integration here. It is intentional that\n # running under a new hub does not suddenly start executing\n # querysets. This might be surprising to the user but it's likely\n # less annoying.\n\n return u\"<%s from %s at 0x%x>\" % (\n value.__class__.__name__,\n value.__module__,\n id(value),\n )\n\n\ndef _make_event_processor(weak_request, integration):\n # type: (Callable[[], WSGIRequest], DjangoIntegration) -> Callable\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n # if the request is gone we are fine not logging the data from\n # it. 
This might happen if the processor is pushed away to\n # another thread.\n request = weak_request()\n if request is None:\n return event\n\n try:\n if integration.transaction_style == \"function_name\":\n event[\"transaction\"] = transaction_from_function(\n resolve(request.path).func\n )\n elif integration.transaction_style == \"url\":\n event[\"transaction\"] = LEGACY_RESOLVER.resolve(request.path)\n except Exception:\n pass\n\n with capture_internal_exceptions():\n DjangoRequestExtractor(request).extract_into_event(event)\n\n if _should_send_default_pii():\n with capture_internal_exceptions():\n _set_user_info(request, event)\n\n return event\n\n return event_processor\n\n\ndef _got_request_exception(request=None, **kwargs):\n # type: (WSGIRequest, **Any) -> None\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is not None:\n event, hint = event_from_exception(\n sys.exc_info(),\n client_options=hub.client.options,\n mechanism={\"type\": \"django\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n\nclass DjangoRequestExtractor(RequestExtractor):\n def env(self):\n # type: () -> Dict[str, str]\n return self.request.META\n\n def cookies(self):\n # type: () -> Dict[str, str]\n return self.request.COOKIES\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> QueryDict\n return self.request.POST\n\n def files(self):\n # type: () -> MultiValueDict\n return self.request.FILES\n\n def size_of_file(self, file):\n return file.size\n\n\ndef _set_user_info(request, event):\n # type: (WSGIRequest, Dict[str, Any]) -> None\n user_info = event.setdefault(\"user\", {})\n\n user = getattr(request, \"user\", None)\n\n if user is None or not is_authenticated(user):\n return\n\n try:\n user_info[\"id\"] = str(user.pk)\n except Exception:\n pass\n\n try:\n user_info[\"email\"] = user.email\n except Exception:\n pass\n\n try:\n user_info[\"username\"] = user.get_username()\n except Exception:\n pass\n\n\nclass _FormatConverter(object):\n def __init__(self, param_mapping):\n # type: (Dict[str, int]) -> None\n\n self.param_mapping = param_mapping\n self.params = [] # type: List[Any]\n\n def __getitem__(self, val):\n # type: (str) -> str\n self.params.append(self.param_mapping.get(val))\n return \"%s\"\n\n\ndef format_sql(sql, params):\n # type: (Any, Any) -> Tuple[str, List[str]]\n rv = []\n\n if isinstance(params, dict):\n # convert sql with named parameters to sql with unnamed parameters\n conv = _FormatConverter(params)\n if params:\n sql = sql % conv\n params = conv.params\n else:\n params = ()\n\n for param in params or ():\n if param is None:\n rv.append(\"NULL\")\n param = safe_repr(param)\n rv.append(param)\n\n return sql, rv\n\n\ndef record_sql(sql, params, cursor=None):\n # type: (Any, Any, Any) -> None\n hub = Hub.current\n if hub.get_integration(DjangoIntegration) is None:\n return\n\n with capture_internal_exceptions():\n if cursor and hasattr(cursor, \"mogrify\"): # psycopg2\n real_sql = cursor.mogrify(sql, params)\n with capture_internal_exceptions():\n if isinstance(real_sql, bytes):\n real_sql = real_sql.decode(cursor.connection.encoding)\n else:\n real_sql, real_params = format_sql(sql, params)\n\n if real_params:\n try:\n real_sql = format_and_strip(real_sql, real_params)\n except Exception:\n pass\n hub.add_breadcrumb(message=real_sql, category=\"query\")\n\n\ndef install_sql_hook():\n # type: () -> None\n \"\"\"If installed this causes Django's queries to be captured.\"\"\"\n 
try:\n from django.db.backends.utils import CursorWrapper # type: ignore\n except ImportError:\n from django.db.backends.util import CursorWrapper # type: ignore\n\n try:\n real_execute = CursorWrapper.execute\n real_executemany = CursorWrapper.executemany\n except AttributeError:\n # This won't work on Django versions < 1.6\n return\n\n def record_many_sql(sql, param_list, cursor):\n for params in param_list:\n record_sql(sql, params, cursor)\n\n def execute(self, sql, params=None):\n try:\n return real_execute(self, sql, params)\n finally:\n record_sql(sql, params, self.cursor)\n\n def executemany(self, sql, param_list):\n try:\n return real_executemany(self, sql, param_list)\n finally:\n record_many_sql(sql, param_list, self.cursor)\n\n CursorWrapper.execute = execute\n CursorWrapper.executemany = executemany\n ignore_logger(\"django.db.backends\")\n", "path": "sentry_sdk/integrations/django/__init__.py"}]}
| 4,087 | 150 |
gh_patches_debug_4007
|
rasdani/github-patches
|
git_diff
|
translate__pootle-5706
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stats table shows no zero counts
This can be seen in the following screenshot:

</issue>
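The symptom matches how `make_display_stat` in the file below builds display values: it only sets `<key>_display` when the raw count is truthy, and a count of `0` is falsy, so zero cells never get a formatted value. A small sketch of the difference between a truthiness test and a membership test on a stats dict (the dict literal is made up):

```python
stats = {"total": 100, "critical": 0, "suggestions": 5}

# Truthiness check: the zero count is skipped, so it gets no display value.
truthy_keys = [k for k in ("total", "critical", "suggestions") if stats.get(k)]
print(truthy_keys)   # ['total', 'suggestions']

# Membership check: zero counts are formatted like any other number.
present_keys = [k for k in ("total", "critical", "suggestions") if k in stats]
print(present_keys)  # ['total', 'critical', 'suggestions']
```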
<code>
[start of pootle/core/views/display.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.utils.functional import cached_property
10 from django.utils.html import escape, format_html
11 from django.utils.safestring import mark_safe
12
13 from pootle.i18n import formatter
14 from pootle.i18n.gettext import ugettext as _
15 from pootle.local.dates import timesince
16 from pootle_misc.checks import get_qualitycheck_list
17
18
19 class ActionDisplay(object):
20
21 def __init__(self, action):
22 self.action = action
23
24 @property
25 def since(self):
26 return timesince(self.action["mtime"])
27
28 @property
29 def check_name(self):
30 return self.action.get("check_name")
31
32 @property
33 def checks_url(self):
34 return self.action.get("checks_url")
35
36 @property
37 def check_display_name(self):
38 return escape(self.action["check_display_name"])
39
40 @property
41 def display_name(self):
42 return escape(self.action["displayname"])
43
44 @property
45 def profile_url(self):
46 return self.action["profile_url"]
47
48 @property
49 def unit_url(self):
50 return self.action.get("unit_url")
51
52 @property
53 def unit_source(self):
54 return self.action.get("unit_source")
55
56 @property
57 def params(self):
58 params = dict(
59 user=self.formatted_user,
60 source=self.formatted_source)
61 if self.check_name:
62 params["check"] = format_html(
63 u"<a href='{}'>{}</a>",
64 self.checks_url,
65 self.check_display_name)
66 return params
67
68 @property
69 def formatted_user(self):
70 return format_html(
71 u"<a href='{}' class='user-name'>{}</a>",
72 self.profile_url,
73 self.display_name)
74
75 @property
76 def formatted_source(self):
77 return format_html(
78 u"<a href='{}'>{}</a>",
79 self.unit_url,
80 self.unit_source)
81
82 @property
83 def action_type(self):
84 return self.action["type"]
85
86 @property
87 def translation_action_type(self):
88 return self.action.get("translation_action_type")
89
90 @property
91 def message(self):
92 msg = ""
93 params = self.params
94 if (self.action_type == 2):
95 msg = _('%(user)s removed translation for %(source)s', params)
96 if (self.action_type == 3):
97 msg = _('%(user)s accepted suggestion for %(source)s', params)
98 if (self.action_type == 4):
99 msg = _('%(user)s uploaded file', params)
100 if (self.action_type == 6):
101 msg = _('%(user)s muted %(check)s for %(source)s', params)
102 if (self.action_type == 7):
103 msg = _('%(user)s unmuted %(check)s for %(source)s', params)
104 if (self.action_type == 8):
105 msg = _('%(user)s added suggestion for %(source)s', params)
106 if (self.action_type == 9):
107 msg = _('%(user)s rejected suggestion for %(source)s', params)
108 if (self.action_type in [1, 5]):
109 if self.translation_action_type == 0:
110 msg = _('%(user)s translated %(source)s', params)
111 if self.translation_action_type == 1:
112 msg = _('%(user)s edited %(source)s', params)
113 if self.translation_action_type == 2:
114 msg = _('%(user)s pre-translated %(source)s', params)
115 if self.translation_action_type == 3:
116 msg = _('%(user)s removed translation for %(source)s', params)
117 if self.translation_action_type == 4:
118 msg = _('%(user)s reviewed %(source)s', params)
119 if self.translation_action_type == 5:
120 msg = _('%(user)s marked as needs work %(source)s', params)
121 return mark_safe(msg)
122
123
124 class ChecksDisplay(object):
125
126 def __init__(self, context):
127 self.context = context
128
129 @property
130 def check_schema(self):
131 return get_qualitycheck_list(self.context)
132
133 @cached_property
134 def check_data(self):
135 return self.context.data_tool.get_checks()
136
137 @property
138 def checks_by_category(self):
139 _checks = []
140 for check in self.check_schema:
141 if check["code"] not in self.check_data:
142 continue
143 check["count"] = self.check_data[check["code"]]
144 check["count_display"] = formatter.number(check["count"])
145 _checks.append(check)
146 return _checks
147
148
149 class StatsDisplay(object):
150
151 def __init__(self, context, stats=None):
152 self.context = context
153 self._stats = stats
154
155 @staticmethod
156 def make_display_stat(d, keys=["total", "critical", "incomplete",
157 "suggestions", "fuzzy", "untranslated"]):
158 assert isinstance(d, dict)
159 for k in keys:
160 if d.get(k):
161 d[k + '_display'] = formatter.number(d[k])
162
163 @cached_property
164 def stat_data(self):
165 if self._stats is not None:
166 return self._stats
167 return self.context.data_tool.get_stats()
168
169 @cached_property
170 def stats(self):
171 stats = self.stat_data
172 self.add_children_info(stats)
173 self.make_display_stat(stats)
174 if stats.get("last_submission"):
175 stats["last_submission"]["msg"] = (
176 self.get_action_message(stats["last_submission"]))
177 return stats
178
179 def add_children_info(self, stats):
180 for k, child in stats["children"].items():
181 child["incomplete"] = child["total"] - child["translated"]
182 child["untranslated"] = child["total"] - child["translated"]
183 self.make_display_stat(child)
184
185 def get_action_message(self, action):
186 return ActionDisplay(action).message
187
[end of pootle/core/views/display.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/core/views/display.py b/pootle/core/views/display.py
--- a/pootle/core/views/display.py
+++ b/pootle/core/views/display.py
@@ -157,7 +157,7 @@
"suggestions", "fuzzy", "untranslated"]):
assert isinstance(d, dict)
for k in keys:
- if d.get(k):
+ if k in d:
d[k + '_display'] = formatter.number(d[k])
@cached_property
|
{"golden_diff": "diff --git a/pootle/core/views/display.py b/pootle/core/views/display.py\n--- a/pootle/core/views/display.py\n+++ b/pootle/core/views/display.py\n@@ -157,7 +157,7 @@\n \"suggestions\", \"fuzzy\", \"untranslated\"]):\n assert isinstance(d, dict)\n for k in keys:\n- if d.get(k):\n+ if k in d:\n d[k + '_display'] = formatter.number(d[k])\n \n @cached_property\n", "issue": "Stats table shows no zero counts\nThis can be seen in the following screenshot:\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.utils.functional import cached_property\nfrom django.utils.html import escape, format_html\nfrom django.utils.safestring import mark_safe\n\nfrom pootle.i18n import formatter\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle.local.dates import timesince\nfrom pootle_misc.checks import get_qualitycheck_list\n\n\nclass ActionDisplay(object):\n\n def __init__(self, action):\n self.action = action\n\n @property\n def since(self):\n return timesince(self.action[\"mtime\"])\n\n @property\n def check_name(self):\n return self.action.get(\"check_name\")\n\n @property\n def checks_url(self):\n return self.action.get(\"checks_url\")\n\n @property\n def check_display_name(self):\n return escape(self.action[\"check_display_name\"])\n\n @property\n def display_name(self):\n return escape(self.action[\"displayname\"])\n\n @property\n def profile_url(self):\n return self.action[\"profile_url\"]\n\n @property\n def unit_url(self):\n return self.action.get(\"unit_url\")\n\n @property\n def unit_source(self):\n return self.action.get(\"unit_source\")\n\n @property\n def params(self):\n params = dict(\n user=self.formatted_user,\n source=self.formatted_source)\n if self.check_name:\n params[\"check\"] = format_html(\n u\"<a href='{}'>{}</a>\",\n self.checks_url,\n self.check_display_name)\n return params\n\n @property\n def formatted_user(self):\n return format_html(\n u\"<a href='{}' class='user-name'>{}</a>\",\n self.profile_url,\n self.display_name)\n\n @property\n def formatted_source(self):\n return format_html(\n u\"<a href='{}'>{}</a>\",\n self.unit_url,\n self.unit_source)\n\n @property\n def action_type(self):\n return self.action[\"type\"]\n\n @property\n def translation_action_type(self):\n return self.action.get(\"translation_action_type\")\n\n @property\n def message(self):\n msg = \"\"\n params = self.params\n if (self.action_type == 2):\n msg = _('%(user)s removed translation for %(source)s', params)\n if (self.action_type == 3):\n msg = _('%(user)s accepted suggestion for %(source)s', params)\n if (self.action_type == 4):\n msg = _('%(user)s uploaded file', params)\n if (self.action_type == 6):\n msg = _('%(user)s muted %(check)s for %(source)s', params)\n if (self.action_type == 7):\n msg = _('%(user)s unmuted %(check)s for %(source)s', params)\n if (self.action_type == 8):\n msg = _('%(user)s added suggestion for %(source)s', params)\n if (self.action_type == 9):\n msg = _('%(user)s rejected suggestion for %(source)s', params)\n if (self.action_type in [1, 5]):\n if self.translation_action_type == 0:\n msg = _('%(user)s translated %(source)s', params)\n if self.translation_action_type == 1:\n msg = _('%(user)s edited %(source)s', params)\n if self.translation_action_type == 2:\n msg = 
_('%(user)s pre-translated %(source)s', params)\n if self.translation_action_type == 3:\n msg = _('%(user)s removed translation for %(source)s', params)\n if self.translation_action_type == 4:\n msg = _('%(user)s reviewed %(source)s', params)\n if self.translation_action_type == 5:\n msg = _('%(user)s marked as needs work %(source)s', params)\n return mark_safe(msg)\n\n\nclass ChecksDisplay(object):\n\n def __init__(self, context):\n self.context = context\n\n @property\n def check_schema(self):\n return get_qualitycheck_list(self.context)\n\n @cached_property\n def check_data(self):\n return self.context.data_tool.get_checks()\n\n @property\n def checks_by_category(self):\n _checks = []\n for check in self.check_schema:\n if check[\"code\"] not in self.check_data:\n continue\n check[\"count\"] = self.check_data[check[\"code\"]]\n check[\"count_display\"] = formatter.number(check[\"count\"])\n _checks.append(check)\n return _checks\n\n\nclass StatsDisplay(object):\n\n def __init__(self, context, stats=None):\n self.context = context\n self._stats = stats\n\n @staticmethod\n def make_display_stat(d, keys=[\"total\", \"critical\", \"incomplete\",\n \"suggestions\", \"fuzzy\", \"untranslated\"]):\n assert isinstance(d, dict)\n for k in keys:\n if d.get(k):\n d[k + '_display'] = formatter.number(d[k])\n\n @cached_property\n def stat_data(self):\n if self._stats is not None:\n return self._stats\n return self.context.data_tool.get_stats()\n\n @cached_property\n def stats(self):\n stats = self.stat_data\n self.add_children_info(stats)\n self.make_display_stat(stats)\n if stats.get(\"last_submission\"):\n stats[\"last_submission\"][\"msg\"] = (\n self.get_action_message(stats[\"last_submission\"]))\n return stats\n\n def add_children_info(self, stats):\n for k, child in stats[\"children\"].items():\n child[\"incomplete\"] = child[\"total\"] - child[\"translated\"]\n child[\"untranslated\"] = child[\"total\"] - child[\"translated\"]\n self.make_display_stat(child)\n\n def get_action_message(self, action):\n return ActionDisplay(action).message\n", "path": "pootle/core/views/display.py"}]}
| 2,403 | 114 |
gh_patches_debug_17543
|
rasdani/github-patches
|
git_diff
|
coreruleset__coreruleset-3002
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move data files from util/regexp-assemble directory to the top level
### Description
Data files used to generate regular expressions have so far been kept in a difficult-to-find place that depends on the tool.
Now with the new crs-toolchain, this is not needed anymore.
So let's move the data files to the top level directory.
### Requirements
- move all data files to the top level dir
- review dependencies and check that all references are updated
</issue>
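For the review step, the main reference in this repository is the path wiring in `util/regexp-assemble/lib/context.py`, shown below. A sketch of what those assignments might look like once the files live at the repository root; the root path here is a placeholder, not a real location:

```python
from pathlib import Path

root_directory = Path("/path/to/coreruleset")  # placeholder checkout location
data_files_directory = root_directory / "data"
include_files_directory = root_directory / "data" / "include"

print(data_files_directory)     # /path/to/coreruleset/data
print(include_files_directory)  # /path/to/coreruleset/data/include
```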
<code>
[start of util/regexp-assemble/lib/context.py]
1 import argparse
2 from pathlib import Path
3 import logging
4
5
6
7 class Context(object):
8 def __init__(self, root_directory: Path, namespace: argparse.Namespace=None):
9 self.root_directory = root_directory
10 self.rules_directory = self.root_directory / "rules"
11 self.util_directory = self.root_directory / "util"
12 self.regexp_assemble_directory = self.util_directory / "regexp-assemble"
13 self.data_files_directory = self.regexp_assemble_directory / "data"
14 self.include_files_directory = self.regexp_assemble_directory / "data" / "include"
15 self.regexp_assemble_pl_path = self.regexp_assemble_directory / "lib" / "regexp-assemble.pl"
16 self.single_rule_id = namespace.rule_id if namespace else None
17 self.single_chain_offset = None
18 if namespace and "chain_offset" in namespace:
19 self.single_chain_offset = namespace.chain_offset
20
21 self._dump_to_debug_log()
22
23 assert (
24 self.rules_directory.exists()
25 and self.util_directory.exists()
26 and self.regexp_assemble_directory.exists()
27 and self.data_files_directory.exists()
28 and self.include_files_directory.exists()
29 )
30
31
32 def _dump_to_debug_log(self):
33 logger = logging.getLogger()
34 logger.debug("Root directory: %s", self.root_directory)
35 logger.debug("Rules directory: %s", self.rules_directory)
36 logger.debug("Data files directory: %s", self.data_files_directory)
37 logger.debug("Include files directory: %s", self.include_files_directory)
38 logger.debug("Parsed rule ID: %s", self.single_rule_id)
39 logger.debug("Parsed chain offset: %s", self.single_chain_offset)
40
[end of util/regexp-assemble/lib/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/util/regexp-assemble/lib/context.py b/util/regexp-assemble/lib/context.py
--- a/util/regexp-assemble/lib/context.py
+++ b/util/regexp-assemble/lib/context.py
@@ -10,8 +10,8 @@
self.rules_directory = self.root_directory / "rules"
self.util_directory = self.root_directory / "util"
self.regexp_assemble_directory = self.util_directory / "regexp-assemble"
- self.data_files_directory = self.regexp_assemble_directory / "data"
- self.include_files_directory = self.regexp_assemble_directory / "data" / "include"
+ self.data_files_directory = self.root_directory / "data"
+ self.include_files_directory = self.root_directory / "data" / "include"
self.regexp_assemble_pl_path = self.regexp_assemble_directory / "lib" / "regexp-assemble.pl"
self.single_rule_id = namespace.rule_id if namespace else None
self.single_chain_offset = None
|
{"golden_diff": "diff --git a/util/regexp-assemble/lib/context.py b/util/regexp-assemble/lib/context.py\n--- a/util/regexp-assemble/lib/context.py\n+++ b/util/regexp-assemble/lib/context.py\n@@ -10,8 +10,8 @@\n self.rules_directory = self.root_directory / \"rules\"\n self.util_directory = self.root_directory / \"util\"\n self.regexp_assemble_directory = self.util_directory / \"regexp-assemble\"\n- self.data_files_directory = self.regexp_assemble_directory / \"data\"\n- self.include_files_directory = self.regexp_assemble_directory / \"data\" / \"include\"\n+ self.data_files_directory = self.root_directory / \"data\"\n+ self.include_files_directory = self.root_directory / \"data\" / \"include\"\n self.regexp_assemble_pl_path = self.regexp_assemble_directory / \"lib\" / \"regexp-assemble.pl\"\n self.single_rule_id = namespace.rule_id if namespace else None\n self.single_chain_offset = None\n", "issue": "Move data files from util/regexp-assemble directory to the top level\n### Description\r\n\r\nData files used to generate regular expressions have been somehow in a difficult-to-find place, dependent on the tool.\r\n\r\nNow with the new crs-toolchain, this is not needed anymore.\r\n\r\nSo let's move the data files to the top level directory.\r\n\r\n### Requirements\r\n\r\n- move all data files to the top level dir\r\n- review dependencies and check that all references are updated\n", "before_files": [{"content": "import argparse\nfrom pathlib import Path\nimport logging\n\n\n\nclass Context(object):\n def __init__(self, root_directory: Path, namespace: argparse.Namespace=None):\n self.root_directory = root_directory\n self.rules_directory = self.root_directory / \"rules\"\n self.util_directory = self.root_directory / \"util\"\n self.regexp_assemble_directory = self.util_directory / \"regexp-assemble\"\n self.data_files_directory = self.regexp_assemble_directory / \"data\"\n self.include_files_directory = self.regexp_assemble_directory / \"data\" / \"include\"\n self.regexp_assemble_pl_path = self.regexp_assemble_directory / \"lib\" / \"regexp-assemble.pl\"\n self.single_rule_id = namespace.rule_id if namespace else None\n self.single_chain_offset = None\n if namespace and \"chain_offset\" in namespace:\n self.single_chain_offset = namespace.chain_offset\n\n self._dump_to_debug_log()\n\n assert (\n self.rules_directory.exists()\n and self.util_directory.exists()\n and self.regexp_assemble_directory.exists()\n and self.data_files_directory.exists()\n and self.include_files_directory.exists()\n )\n\n\n def _dump_to_debug_log(self):\n logger = logging.getLogger()\n logger.debug(\"Root directory: %s\", self.root_directory)\n logger.debug(\"Rules directory: %s\", self.rules_directory)\n logger.debug(\"Data files directory: %s\", self.data_files_directory)\n logger.debug(\"Include files directory: %s\", self.include_files_directory)\n logger.debug(\"Parsed rule ID: %s\", self.single_rule_id)\n logger.debug(\"Parsed chain offset: %s\", self.single_chain_offset)\n", "path": "util/regexp-assemble/lib/context.py"}]}
| 1,058 | 215 |
gh_patches_debug_4420
|
rasdani/github-patches
|
git_diff
|
ephios-dev__ephios-220
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
List of own upcoming shifts
As a user, I want to see a list of shifts that I have been confirmed for on the main page.
</issue>
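The patch at the end of this record implements this as a `confirmed_shifts` template filter. A sketch of that idea as it would sit alongside the filters in `event_extras.py` below, assuming the user model exposes a `get_shifts()` helper that can filter by participation state (that helper is not shown in the file itself):

```python
from django import template

from ephios.event_management.models import AbstractParticipation

register = template.Library()


@register.filter(name="confirmed_shifts")
def confirmed_shifts(user):
    # Only shifts the user is confirmed for, soonest first.
    return user.get_shifts(
        with_participation_state_in=[AbstractParticipation.States.CONFIRMED]
    ).order_by("start_time")
```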
<code>
[start of ephios/event_management/templatetags/event_extras.py]
1 from django import template
2 from django.utils.safestring import mark_safe
3
4 from ephios.event_management.models import AbstractParticipation
5
6 register = template.Library()
7
8
9 @register.filter(name="shift_status")
10 def shift_status(shift, user):
11 participation = user.as_participant().participation_for(shift)
12 if participation is not None:
13 color = {
14 AbstractParticipation.States.USER_DECLINED: "text-danger",
15 AbstractParticipation.States.RESPONSIBLE_REJECTED: "text-danger",
16 AbstractParticipation.States.REQUESTED: "text-warning",
17 AbstractParticipation.States.CONFIRMED: "text-success",
18 }[participation.state]
19 return mark_safe(f'<span class="{color}">{participation.get_state_display()}</span><br>')
20 return ""
21
22
23 @register.filter(name="can_sign_up")
24 def can_sign_up(shift, user):
25 return shift.signup_method.can_sign_up(user.as_participant())
26
27
28 @register.filter(name="render_shift_state")
29 def render_shift_state(shift, request):
30 return shift.signup_method.render_shift_state(request)
31
32
33 @register.filter(name="signup_errors")
34 def signup_errors(shift, user):
35 return shift.signup_method.get_signup_errors(user.as_participant())
36
37
38 @register.filter(name="can_decline")
39 def can_decline(shift, user):
40 return shift.signup_method.can_decline(user.as_participant())
41
42
43 @register.filter(name="decline_errors")
44 def decline_errors(shift, user):
45 return shift.signup_method.get_decline_errors(user.as_participant())
46
[end of ephios/event_management/templatetags/event_extras.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ephios/event_management/templatetags/event_extras.py b/ephios/event_management/templatetags/event_extras.py
--- a/ephios/event_management/templatetags/event_extras.py
+++ b/ephios/event_management/templatetags/event_extras.py
@@ -43,3 +43,10 @@
@register.filter(name="decline_errors")
def decline_errors(shift, user):
return shift.signup_method.get_decline_errors(user.as_participant())
+
+
[email protected](name="confirmed_shifts")
+def confirmed_shifts(user):
+ return user.get_shifts(
+ with_participation_state_in=[AbstractParticipation.States.CONFIRMED]
+ ).order_by("start_time")
|
{"golden_diff": "diff --git a/ephios/event_management/templatetags/event_extras.py b/ephios/event_management/templatetags/event_extras.py\n--- a/ephios/event_management/templatetags/event_extras.py\n+++ b/ephios/event_management/templatetags/event_extras.py\n@@ -43,3 +43,10 @@\n @register.filter(name=\"decline_errors\")\n def decline_errors(shift, user):\n return shift.signup_method.get_decline_errors(user.as_participant())\n+\n+\[email protected](name=\"confirmed_shifts\")\n+def confirmed_shifts(user):\n+ return user.get_shifts(\n+ with_participation_state_in=[AbstractParticipation.States.CONFIRMED]\n+ ).order_by(\"start_time\")\n", "issue": "List of own upcoming shifts\nAs a user, I want to see a list of shifts that I have been confirmed for on the main page.\n", "before_files": [{"content": "from django import template\nfrom django.utils.safestring import mark_safe\n\nfrom ephios.event_management.models import AbstractParticipation\n\nregister = template.Library()\n\n\[email protected](name=\"shift_status\")\ndef shift_status(shift, user):\n participation = user.as_participant().participation_for(shift)\n if participation is not None:\n color = {\n AbstractParticipation.States.USER_DECLINED: \"text-danger\",\n AbstractParticipation.States.RESPONSIBLE_REJECTED: \"text-danger\",\n AbstractParticipation.States.REQUESTED: \"text-warning\",\n AbstractParticipation.States.CONFIRMED: \"text-success\",\n }[participation.state]\n return mark_safe(f'<span class=\"{color}\">{participation.get_state_display()}</span><br>')\n return \"\"\n\n\[email protected](name=\"can_sign_up\")\ndef can_sign_up(shift, user):\n return shift.signup_method.can_sign_up(user.as_participant())\n\n\[email protected](name=\"render_shift_state\")\ndef render_shift_state(shift, request):\n return shift.signup_method.render_shift_state(request)\n\n\[email protected](name=\"signup_errors\")\ndef signup_errors(shift, user):\n return shift.signup_method.get_signup_errors(user.as_participant())\n\n\[email protected](name=\"can_decline\")\ndef can_decline(shift, user):\n return shift.signup_method.can_decline(user.as_participant())\n\n\[email protected](name=\"decline_errors\")\ndef decline_errors(shift, user):\n return shift.signup_method.get_decline_errors(user.as_participant())\n", "path": "ephios/event_management/templatetags/event_extras.py"}]}
| 986 | 165 |
gh_patches_debug_38844
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-521
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New blurred database for development
Update the `db.sql` file.
</issue>
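The script below rewrites identifying fields in an SQLite copy of the database. Assuming the workflow is to run it against a fresh copy and then dump that copy back out, the final step that refreshes `db.sql` might look like this sketch using the standard-library `sqlite3` module; the file names are placeholders, not the paths AMY actually uses:

```python
import sqlite3

connection = sqlite3.connect("blurred-copy.sqlite3")  # anonymized copy
with open("db.sql", "w") as handle:
    for statement in connection.iterdump():
        handle.write(statement + "\n")
connection.close()
```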
<code>
[start of scripts/anonymizer.py]
1 import sys
2 from datetime import date, timedelta
3 import random
4 import shutil
5 import sqlite3
6
7 #------------------------------------------------------------
8
9 ALPHA = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_'
10
11 LOREM_IPSUM = [
12 '''Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a
13 diam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam
14 viverra nec consectetur ante hendrerit. Donec et mollis
15 dolor. Praesent et diam eget libero egestas mattis sit amet vitae
16 augue. Nam tincidunt congue enim, ut porta lorem lacinia
17 consectetur. Donec ut libero sed arcu vehicula ultricies a non
18 tortor. Lorem ipsum dolor sit amet, consectetur adipiscing
19 elit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed,
20 adipiscing id dolor. Pellentesque auctor nisi id magna consequat
21 sagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt
22 feugiat nisl imperdiet. Ut convallis libero in urna ultrices
23 accumsan. Donec sed odio eros. Donec viverra mi quis quam pulvinar at
24 malesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis
25 parturient montes, nascetur ridiculus mus. In rutrum accumsan
26 ultricies. Mauris vitae nisi at sem facilisis semper ac in est.'''
27 ,
28 '''Vivamus fermentum semper porta. Nunc diam velit, adipiscing ut
29 tristique vitae, sagittis vel odio. Maecenas convallis ullamcorper
30 ultricies. Curabitur ornare, ligula semper consectetur sagittis, nisi
31 diam iaculis velit, id fringilla sem nunc vel mi. Nam dictum, odio nec
32 pretium volutpat, arcu ante placerat erat, non tristique elit urna et
33 turpis. Quisque mi metus, ornare sit amet fermentum et, tincidunt et
34 orci. Fusce eget orci a orci congue vestibulum. Ut dolor diam,
35 elementum et vestibulum eu, porttitor vel elit. Curabitur venenatis
36 pulvinar tellus gravida ornare. Sed et erat faucibus nunc euismod
37 ultricies ut id justo. Nullam cursus suscipit nisi, et ultrices justo
38 sodales nec. Fusce venenatis facilisis lectus ac semper. Aliquam at
39 massa ipsum. Quisque bibendum purus convallis nulla ultrices
40 ultricies. Nullam aliquam, mi eu aliquam tincidunt, purus velit
41 laoreet tortor, viverra pretium nisi quam vitae mi. Fusce vel volutpat
42 elit. Nam sagittis nisi dui.'''
43 ,
44 '''Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis
45 neque. Etiam luctus porttitor lorem, sed suscipit est rutrum
46 non. Curabitur lobortis nisl a enim congue semper. Aenean commodo
47 ultrices imperdiet. Vestibulum ut justo vel sapien venenatis
48 tincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum
49 vitae quis lectus. Aliquam ut massa in turpis dapibus
50 convallis. Praesent elit lacus, vestibulum at malesuada et, ornare et
51 est. Ut augue nunc, sodales ut euismod non, adipiscing vitae
52 orci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec
53 est eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin
54 lectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem,
55 vel elementum mi. Phasellus non ullamcorper urna.'''
56 ,
57 '''Class aptent taciti sociosqu ad litora torquent per conubia nostra,
58 per inceptos himenaeos. In euismod ultrices facilisis. Vestibulum
59 porta sapien adipiscing augue congue id pretium lectus molestie. Proin
60 quis dictum nisl. Morbi id quam sapien, sed vestibulum sem. Duis
61 elementum rutrum mauris sed convallis. Proin vestibulum magna
62 mi. Aenean tristique hendrerit magna, ac facilisis nulla hendrerit
63 ut. Sed non tortor sodales quam auctor elementum. Donec hendrerit nunc
64 eget elit pharetra pulvinar. Suspendisse id tempus tortor. Aenean
65 luctus, elit commodo laoreet commodo, justo nisi consequat massa, sed
66 vulputate quam urna quis eros. Donec vel.''']
67
68 #------------------------------------------------------------
69
70 def get_one(cursor, statement, *params):
71 cursor.execute(statement, params)
72 results = cursor.fetchall()
73 if len(results) == 0:
74 return None
75 return results[0][0]
76
77 RANDWORD_SEEN = set()
78
79 def randword(low, high):
80 while True:
81 r = ''.join([random.choice(ALPHA) for i in range(random.randrange(low, high))])
82 if r not in RANDWORD_SEEN:
83 RANDWORD_SEEN.add(r)
84 return r
85
86 def change(cursor, table, field, func, *args):
87 lower = get_one(cursor, 'select min(id) from {0};'.format(table))
88 upper = get_one(cursor, 'select max(id) from {0};'.format(table))
89 assert (lower is not None) and (upper is not None), \
90 'No lower/upper bounds for {0}.{1}'.format(table, field)
91
92 if isinstance(field, str):
93 stmt = 'update {0} set {1}=? where id=?;'.format(table, field)
94 elif isinstance(field, tuple):
95 filler = ', '.join(['{0}=?'.format(f) for f in field])
96 stmt = 'update {0} set {1} where id=?;'.format(table, filler)
97 else:
98 assert False, 'Unknown field type "{0}" for "{1}"'.format(type(field), field)
99
100 for i in range(lower, upper+1):
101 vals = func(cursor, i, *args) + (i, )
102 try:
103 cursor.execute(stmt, vals)
104 except sqlite3.OperationalError as e:
105 print('FAILED (operational error):', stmt, vals, e)
106 except sqlite3.IntegrityError as e:
107 print('FAILED (integrity error):', stmt, vals, e)
108
109 def tuplify(func):
110 def f(*args, **kwargs):
111 result = func(*args, **kwargs)
112 return (result,)
113 return f
114
115 #------------------------------------------------------------
116
117 def dates(cursor, i):
118 '''Generate start and end dates for workshop.'''
119 start = date(2012, 1, 1) + timedelta(random.randrange(4 * 365))
120 end = start + timedelta(random.randrange(4))
121 if end == start:
122 end = None
123 return (start, end)
124
125 @tuplify
126 def event_reg_key(cursor, i):
127 '''Generate random event registration key.'''
128 return str(1000000 + i)
129
130 @tuplify
131 def event_slug(cursor, i):
132 '''Generate event slugs once start/end dates and site names are set.'''
133 start = get_one(cursor, 'select start from workshops_event where id=?;', i)
134 if start is None:
135 return
136 year, month, day = start.split('-')
137 return '{0}-{1}-{2}-{3}'.format(year, month, day, randword(3, 8))
138
139 @tuplify
140 def url(cursor, i):
141 '''Generate something that looks like a URL.'''
142 _url = get_one(cursor, 'select url from workshops_event where id=?;', i)
143 if not _url:
144 return
145 return 'http://{0}.{1}/{2}-{3}'.format(*[randword(2, 10) for x in range(4)])
146
147 @tuplify
148 def lorem_ipsum(cursor, i):
149 '''Fill in a large text field.'''
150 result = '\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])
151 return result
152
153 @tuplify
154 def monicker(cursor, i):
155 '''Generate a username-style field.'''
156 return randword(2, 10)
157
158 @tuplify
159 def multi_word(cursor, i, prob_multi, prob_null=0.0):
160 '''Fill in a multi-word field (e.g., site name or person's name).'''
161 if random.uniform(0.0, 1.0) < prob_null:
162 return None
163 elif random.uniform(0.0, 1.0) < prob_multi:
164 return '{0} {1}'.format(randword(2, 10), randword(2, 12))
165 else:
166 return randword(2, 10)
167
168 @tuplify
169 def domain(cursor, i):
170 '''Fill in site.domain.'''
171 fields = []
172 for x in range(2, random.randrange(4, 5)):
173 fields.append(randword(2, 10))
174 return '.'.join(fields)
175
176 @tuplify
177 def gender(cursor, i):
178 return random.choice('FMO')
179
180 @tuplify
181 def email(cursor, i):
182 if random.uniform(0.0, 1.0) < 0.05:
183 return None
184 return '{0}@{1}.{2}'.format(*[randword(2, 8) for x in range(3)])
185
186
187 @tuplify
188 def empty_string(cursor, i):
189 return ''
190
191 #------------------------------------------------------------
192
193 def main():
194 assert len(sys.argv) == 4, 'Usage: {0} seed /path/to/source/db /path/to/destination/db'.format(sys.argv[0])
195 assert sys.argv[2] != sys.argv[3], 'Source and destination must be different database'
196
197 seed = int(sys.argv[1])
198 if seed == 0:
199 seed = None
200 db_src = sys.argv[2]
201 db_dst = sys.argv[3]
202
203 random.seed(seed)
204 shutil.copyfile(db_src, db_dst)
205 cnx = sqlite3.connect(db_dst)
206 cur = cnx.cursor()
207
208 change(cur, 'workshops_site', 'domain', domain)
209 change(cur, 'workshops_site', 'fullname', multi_word, 1.0)
210 change(cur, 'workshops_site', 'notes', lorem_ipsum)
211
212 change(cur, 'workshops_person', 'personal', multi_word, 0.1)
213 change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)
214 change(cur, 'workshops_person', 'family', multi_word, 0.1)
215 change(cur, 'workshops_person', 'gender', gender)
216 change(cur, 'workshops_person', 'email', email)
217 change(cur, 'workshops_person', 'github', monicker)
218 change(cur, 'workshops_person', 'twitter', monicker)
219 change(cur, 'workshops_person', 'url', url)
220 change(cur, 'workshops_person', 'username', monicker)
221 change(cur, 'workshops_person', 'password', empty_string)
222
223 change(cur, 'workshops_event', ('start', 'end'), dates)
224 change(cur, 'workshops_event', 'slug', event_slug)
225 change(cur, 'workshops_event', 'url', url)
226 change(cur, 'workshops_event', 'reg_key', event_reg_key)
227 change(cur, 'workshops_event', 'notes', lorem_ipsum)
228
229 # we can't store historical changes!
230 cur.execute('delete from reversion_version;')
231 cur.execute('delete from reversion_revision;')
232
233 cnx.commit()
234 cur.close()
235 cnx.close()
236
237 if __name__ == '__main__':
238 main()
239 # we need to populate reversion_* tables so that no-one needs to do that
240 # upon every `make database` call
241 print("REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW")
242
[end of scripts/anonymizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/anonymizer.py b/scripts/anonymizer.py
--- a/scripts/anonymizer.py
+++ b/scripts/anonymizer.py
@@ -147,6 +147,8 @@
@tuplify
def lorem_ipsum(cursor, i):
'''Fill in a large text field.'''
+ if random.uniform(0.0, 1.0) < 0.05:
+ return ''
result = '\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])
return result
@@ -188,6 +190,17 @@
def empty_string(cursor, i):
return ''
+
+@tuplify
+def rand_latitude(cursor, i):
+ return random.uniform(-90, 90)
+
+
+@tuplify
+def rand_longitude(cursor, i):
+ return random.uniform(0, 180)
+
+
#------------------------------------------------------------
def main():
@@ -205,9 +218,9 @@
cnx = sqlite3.connect(db_dst)
cur = cnx.cursor()
- change(cur, 'workshops_site', 'domain', domain)
- change(cur, 'workshops_site', 'fullname', multi_word, 1.0)
- change(cur, 'workshops_site', 'notes', lorem_ipsum)
+ change(cur, 'workshops_host', 'domain', domain)
+ change(cur, 'workshops_host', 'fullname', multi_word, 1.0)
+ change(cur, 'workshops_host', 'notes', lorem_ipsum)
change(cur, 'workshops_person', 'personal', multi_word, 0.1)
change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)
@@ -219,10 +232,16 @@
change(cur, 'workshops_person', 'url', url)
change(cur, 'workshops_person', 'username', monicker)
change(cur, 'workshops_person', 'password', empty_string)
+ change(cur, 'workshops_person', 'affiliation', empty_string)
change(cur, 'workshops_event', ('start', 'end'), dates)
change(cur, 'workshops_event', 'slug', event_slug)
change(cur, 'workshops_event', 'url', url)
+ change(cur, 'workshops_event', 'contact', empty_string)
+ change(cur, 'workshops_event', 'venue', empty_string)
+ change(cur, 'workshops_event', 'address', empty_string)
+ change(cur, 'workshops_event', 'latitude', rand_latitude)
+ change(cur, 'workshops_event', 'longitude', rand_longitude)
change(cur, 'workshops_event', 'reg_key', event_reg_key)
change(cur, 'workshops_event', 'notes', lorem_ipsum)
@@ -238,4 +257,6 @@
main()
# we need to populate reversion_* tables so that no-one needs to do that
# upon every `make database` call
- print("REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW")
+ print("REMEMBER! to run `./manage.py createinitialrevisions` on the new "
+ "database NOW.")
+ print("Next, run `sqlite3 SRC -cmd '.dump' > DEST.sql`")
|
{"golden_diff": "diff --git a/scripts/anonymizer.py b/scripts/anonymizer.py\n--- a/scripts/anonymizer.py\n+++ b/scripts/anonymizer.py\n@@ -147,6 +147,8 @@\n @tuplify\n def lorem_ipsum(cursor, i):\n '''Fill in a large text field.'''\n+ if random.uniform(0.0, 1.0) < 0.05:\n+ return ''\n result = '\\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])\n return result\n \n@@ -188,6 +190,17 @@\n def empty_string(cursor, i):\n return ''\n \n+\n+@tuplify\n+def rand_latitude(cursor, i):\n+ return random.uniform(-90, 90)\n+\n+\n+@tuplify\n+def rand_longitude(cursor, i):\n+ return random.uniform(0, 180)\n+\n+\n #------------------------------------------------------------\n \n def main():\n@@ -205,9 +218,9 @@\n cnx = sqlite3.connect(db_dst)\n cur = cnx.cursor()\n \n- change(cur, 'workshops_site', 'domain', domain)\n- change(cur, 'workshops_site', 'fullname', multi_word, 1.0)\n- change(cur, 'workshops_site', 'notes', lorem_ipsum)\n+ change(cur, 'workshops_host', 'domain', domain)\n+ change(cur, 'workshops_host', 'fullname', multi_word, 1.0)\n+ change(cur, 'workshops_host', 'notes', lorem_ipsum)\n \n change(cur, 'workshops_person', 'personal', multi_word, 0.1)\n change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)\n@@ -219,10 +232,16 @@\n change(cur, 'workshops_person', 'url', url)\n change(cur, 'workshops_person', 'username', monicker)\n change(cur, 'workshops_person', 'password', empty_string)\n+ change(cur, 'workshops_person', 'affiliation', empty_string)\n \n change(cur, 'workshops_event', ('start', 'end'), dates)\n change(cur, 'workshops_event', 'slug', event_slug)\n change(cur, 'workshops_event', 'url', url)\n+ change(cur, 'workshops_event', 'contact', empty_string)\n+ change(cur, 'workshops_event', 'venue', empty_string)\n+ change(cur, 'workshops_event', 'address', empty_string)\n+ change(cur, 'workshops_event', 'latitude', rand_latitude)\n+ change(cur, 'workshops_event', 'longitude', rand_longitude)\n change(cur, 'workshops_event', 'reg_key', event_reg_key)\n change(cur, 'workshops_event', 'notes', lorem_ipsum)\n \n@@ -238,4 +257,6 @@\n main()\n # we need to populate reversion_* tables so that no-one needs to do that\n # upon every `make database` call\n- print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW\")\n+ print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new \"\n+ \"database NOW.\")\n+ print(\"Next, run `sqlite3 SRC -cmd '.dump' > DEST.sql`\")\n", "issue": "New blurred database for development\nUpdate the `db.sql` file.\n\n", "before_files": [{"content": "import sys\nfrom datetime import date, timedelta\nimport random\nimport shutil\nimport sqlite3\n\n#------------------------------------------------------------\n\nALPHA = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_'\n\nLOREM_IPSUM = [\n'''Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a\ndiam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam\nviverra nec consectetur ante hendrerit. Donec et mollis\ndolor. Praesent et diam eget libero egestas mattis sit amet vitae\naugue. Nam tincidunt congue enim, ut porta lorem lacinia\nconsectetur. Donec ut libero sed arcu vehicula ultricies a non\ntortor. Lorem ipsum dolor sit amet, consectetur adipiscing\nelit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed,\nadipiscing id dolor. Pellentesque auctor nisi id magna consequat\nsagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt\nfeugiat nisl imperdiet. Ut convallis libero in urna ultrices\naccumsan. 
Donec sed odio eros. Donec viverra mi quis quam pulvinar at\nmalesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis\nparturient montes, nascetur ridiculus mus. In rutrum accumsan\nultricies. Mauris vitae nisi at sem facilisis semper ac in est.'''\n,\n'''Vivamus fermentum semper porta. Nunc diam velit, adipiscing ut\ntristique vitae, sagittis vel odio. Maecenas convallis ullamcorper\nultricies. Curabitur ornare, ligula semper consectetur sagittis, nisi\ndiam iaculis velit, id fringilla sem nunc vel mi. Nam dictum, odio nec\npretium volutpat, arcu ante placerat erat, non tristique elit urna et\nturpis. Quisque mi metus, ornare sit amet fermentum et, tincidunt et\norci. Fusce eget orci a orci congue vestibulum. Ut dolor diam,\nelementum et vestibulum eu, porttitor vel elit. Curabitur venenatis\npulvinar tellus gravida ornare. Sed et erat faucibus nunc euismod\nultricies ut id justo. Nullam cursus suscipit nisi, et ultrices justo\nsodales nec. Fusce venenatis facilisis lectus ac semper. Aliquam at\nmassa ipsum. Quisque bibendum purus convallis nulla ultrices\nultricies. Nullam aliquam, mi eu aliquam tincidunt, purus velit\nlaoreet tortor, viverra pretium nisi quam vitae mi. Fusce vel volutpat\nelit. Nam sagittis nisi dui.'''\n,\n'''Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis\nneque. Etiam luctus porttitor lorem, sed suscipit est rutrum\nnon. Curabitur lobortis nisl a enim congue semper. Aenean commodo\nultrices imperdiet. Vestibulum ut justo vel sapien venenatis\ntincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum\nvitae quis lectus. Aliquam ut massa in turpis dapibus\nconvallis. Praesent elit lacus, vestibulum at malesuada et, ornare et\nest. Ut augue nunc, sodales ut euismod non, adipiscing vitae\norci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec\nest eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin\nlectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem,\nvel elementum mi. Phasellus non ullamcorper urna.'''\n,\n'''Class aptent taciti sociosqu ad litora torquent per conubia nostra,\nper inceptos himenaeos. In euismod ultrices facilisis. Vestibulum\nporta sapien adipiscing augue congue id pretium lectus molestie. Proin\nquis dictum nisl. Morbi id quam sapien, sed vestibulum sem. Duis\nelementum rutrum mauris sed convallis. Proin vestibulum magna\nmi. Aenean tristique hendrerit magna, ac facilisis nulla hendrerit\nut. Sed non tortor sodales quam auctor elementum. Donec hendrerit nunc\neget elit pharetra pulvinar. Suspendisse id tempus tortor. Aenean\nluctus, elit commodo laoreet commodo, justo nisi consequat massa, sed\nvulputate quam urna quis eros. Donec vel.''']\n\n#------------------------------------------------------------\n\ndef get_one(cursor, statement, *params):\n cursor.execute(statement, params)\n results = cursor.fetchall()\n if len(results) == 0:\n return None\n return results[0][0]\n\nRANDWORD_SEEN = set()\n\ndef randword(low, high):\n while True:\n r = ''.join([random.choice(ALPHA) for i in range(random.randrange(low, high))])\n if r not in RANDWORD_SEEN:\n RANDWORD_SEEN.add(r)\n return r\n\ndef change(cursor, table, field, func, *args):\n lower = get_one(cursor, 'select min(id) from {0};'.format(table))\n upper = get_one(cursor, 'select max(id) from {0};'.format(table))\n assert (lower is not None) and (upper is not None), \\\n 'No lower/upper bounds for {0}.{1}'.format(table, field)\n\n if isinstance(field, str):\n stmt = 'update {0} set {1}=? 
where id=?;'.format(table, field)\n elif isinstance(field, tuple):\n filler = ', '.join(['{0}=?'.format(f) for f in field])\n stmt = 'update {0} set {1} where id=?;'.format(table, filler)\n else:\n assert False, 'Unknown field type \"{0}\" for \"{1}\"'.format(type(field), field)\n\n for i in range(lower, upper+1):\n vals = func(cursor, i, *args) + (i, )\n try:\n cursor.execute(stmt, vals)\n except sqlite3.OperationalError as e:\n print('FAILED (operational error):', stmt, vals, e)\n except sqlite3.IntegrityError as e:\n print('FAILED (integrity error):', stmt, vals, e)\n\ndef tuplify(func):\n def f(*args, **kwargs):\n result = func(*args, **kwargs)\n return (result,)\n return f\n\n#------------------------------------------------------------\n\ndef dates(cursor, i):\n '''Generate start and end dates for workshop.'''\n start = date(2012, 1, 1) + timedelta(random.randrange(4 * 365))\n end = start + timedelta(random.randrange(4))\n if end == start:\n end = None\n return (start, end)\n\n@tuplify\ndef event_reg_key(cursor, i):\n '''Generate random event registration key.'''\n return str(1000000 + i)\n\n@tuplify\ndef event_slug(cursor, i):\n '''Generate event slugs once start/end dates and site names are set.'''\n start = get_one(cursor, 'select start from workshops_event where id=?;', i)\n if start is None:\n return\n year, month, day = start.split('-')\n return '{0}-{1}-{2}-{3}'.format(year, month, day, randword(3, 8))\n\n@tuplify\ndef url(cursor, i):\n '''Generate something that looks like a URL.'''\n _url = get_one(cursor, 'select url from workshops_event where id=?;', i)\n if not _url:\n return\n return 'http://{0}.{1}/{2}-{3}'.format(*[randword(2, 10) for x in range(4)])\n\n@tuplify\ndef lorem_ipsum(cursor, i):\n '''Fill in a large text field.'''\n result = '\\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])\n return result\n\n@tuplify\ndef monicker(cursor, i):\n '''Generate a username-style field.'''\n return randword(2, 10)\n\n@tuplify\ndef multi_word(cursor, i, prob_multi, prob_null=0.0):\n '''Fill in a multi-word field (e.g., site name or person's name).'''\n if random.uniform(0.0, 1.0) < prob_null:\n return None\n elif random.uniform(0.0, 1.0) < prob_multi:\n return '{0} {1}'.format(randword(2, 10), randword(2, 12))\n else:\n return randword(2, 10)\n\n@tuplify\ndef domain(cursor, i):\n '''Fill in site.domain.'''\n fields = []\n for x in range(2, random.randrange(4, 5)):\n fields.append(randword(2, 10))\n return '.'.join(fields)\n\n@tuplify\ndef gender(cursor, i):\n return random.choice('FMO')\n\n@tuplify\ndef email(cursor, i):\n if random.uniform(0.0, 1.0) < 0.05:\n return None\n return '{0}@{1}.{2}'.format(*[randword(2, 8) for x in range(3)])\n\n\n@tuplify\ndef empty_string(cursor, i):\n return ''\n\n#------------------------------------------------------------\n\ndef main():\n assert len(sys.argv) == 4, 'Usage: {0} seed /path/to/source/db /path/to/destination/db'.format(sys.argv[0])\n assert sys.argv[2] != sys.argv[3], 'Source and destination must be different database'\n\n seed = int(sys.argv[1])\n if seed == 0:\n seed = None\n db_src = sys.argv[2]\n db_dst = sys.argv[3]\n\n random.seed(seed)\n shutil.copyfile(db_src, db_dst)\n cnx = sqlite3.connect(db_dst)\n cur = cnx.cursor()\n\n change(cur, 'workshops_site', 'domain', domain)\n change(cur, 'workshops_site', 'fullname', multi_word, 1.0)\n change(cur, 'workshops_site', 'notes', lorem_ipsum)\n\n change(cur, 'workshops_person', 'personal', multi_word, 0.1)\n change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)\n 
change(cur, 'workshops_person', 'family', multi_word, 0.1)\n change(cur, 'workshops_person', 'gender', gender)\n change(cur, 'workshops_person', 'email', email)\n change(cur, 'workshops_person', 'github', monicker)\n change(cur, 'workshops_person', 'twitter', monicker)\n change(cur, 'workshops_person', 'url', url)\n change(cur, 'workshops_person', 'username', monicker)\n change(cur, 'workshops_person', 'password', empty_string)\n\n change(cur, 'workshops_event', ('start', 'end'), dates)\n change(cur, 'workshops_event', 'slug', event_slug)\n change(cur, 'workshops_event', 'url', url)\n change(cur, 'workshops_event', 'reg_key', event_reg_key)\n change(cur, 'workshops_event', 'notes', lorem_ipsum)\n\n # we can't store historical changes!\n cur.execute('delete from reversion_version;')\n cur.execute('delete from reversion_revision;')\n\n cnx.commit()\n cur.close()\n cnx.close()\n\nif __name__ == '__main__':\n main()\n # we need to populate reversion_* tables so that no-one needs to do that\n # upon every `make database` call\n print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW\")\n", "path": "scripts/anonymizer.py"}]}
| 3,852 | 758 |
gh_patches_debug_8348
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2999
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No shortcut for replacements editor
##### Steps to reproduce the problem:
1. run `mitmproxy`
2. type `O` to open the options editor
3. move the cursor down to `replacements` and press Enter
4. type `a` then `/~s/foo/bar` to add a replacement, and press Esc to commit
5. type `q` and again `q` to return to the flows list
6. the status bar now says “[Replacing]” with the ‘R’ highlighted, as if it were a shortcut
7. however, typing `R` doesn’t do anything
##### Any other comments? What have you tried so far?
It seems like `R` was intended to be a shortcut for the replacements editor (which would be very convenient), but left out. It’s not listed in the online help, either.
If it wasn’t intended to be a shortcut, it shouldn’t be highlighted in the status bar.
##### System information
Mitmproxy: 3.0.3 binary
Python: 3.5.2
OpenSSL: OpenSSL 1.1.0g 2 Nov 2017
Platform: Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid
</issue>
<code>
[start of mitmproxy/tools/console/statusbar.py]
1 import os.path
2
3 import urwid
4
5 from mitmproxy.tools.console import common
6 from mitmproxy.tools.console import signals
7 from mitmproxy.tools.console import commandexecutor
8 import mitmproxy.tools.console.master # noqa
9 from mitmproxy.tools.console.commander import commander
10
11
12 class PromptPath:
13 def __init__(self, callback, args):
14 self.callback, self.args = callback, args
15
16 def __call__(self, pth):
17 if not pth:
18 return
19 pth = os.path.expanduser(pth)
20 try:
21 return self.callback(pth, *self.args)
22 except IOError as v:
23 signals.status_message.send(message=v.strerror)
24
25
26 class PromptStub:
27 def __init__(self, callback, args):
28 self.callback, self.args = callback, args
29
30 def __call__(self, txt):
31 return self.callback(txt, *self.args)
32
33
34 class ActionBar(urwid.WidgetWrap):
35
36 def __init__(self, master):
37 self.master = master
38 urwid.WidgetWrap.__init__(self, None)
39 self.clear()
40 signals.status_message.connect(self.sig_message)
41 signals.status_prompt.connect(self.sig_prompt)
42 signals.status_prompt_onekey.connect(self.sig_prompt_onekey)
43 signals.status_prompt_command.connect(self.sig_prompt_command)
44
45 self.prompting = None
46
47 self.onekey = False
48
49 def sig_message(self, sender, message, expire=1):
50 if self.prompting:
51 return
52 cols, _ = self.master.ui.get_cols_rows()
53 w = urwid.Text(self.shorten_message(message, cols))
54 self._w = w
55 if expire:
56 def cb(*args):
57 if w == self._w:
58 self.clear()
59 signals.call_in.send(seconds=expire, callback=cb)
60
61 def prep_prompt(self, p):
62 return p.strip() + ": "
63
64 def shorten_message(self, msg, max_width):
65 """
66 Shorten message so that it fits into a single line in the statusbar.
67 """
68 if isinstance(msg, tuple):
69 disp_attr, msg_text = msg
70 elif isinstance(msg, str):
71 disp_attr, msg_text = None, msg
72 else:
73 return msg
74 msg_end = "\u2026" # unicode ellipsis for the end of shortened message
75 prompt = "(more in eventlog)"
76
77 msg_lines = msg_text.split("\n")
78 first_line = msg_lines[0]
79 if len(msg_lines) > 1:
80 # First line of messages with a few lines must end with prompt.
81 line_length = len(first_line) + len(prompt)
82 else:
83 line_length = len(first_line)
84
85 if line_length > max_width:
86 shortening_index = max(0, max_width - len(prompt) - len(msg_end))
87 first_line = first_line[:shortening_index] + msg_end
88 else:
89 if len(msg_lines) == 1:
90 prompt = ""
91
92 return [(disp_attr, first_line), ("warn", prompt)]
93
94 def sig_prompt(self, sender, prompt, text, callback, args=()):
95 signals.focus.send(self, section="footer")
96 self._w = urwid.Edit(self.prep_prompt(prompt), text or "")
97 self.prompting = PromptStub(callback, args)
98
99 def sig_prompt_command(self, sender, partial=""):
100 signals.focus.send(self, section="footer")
101 self._w = commander.CommandEdit(self.master, partial)
102 self.prompting = commandexecutor.CommandExecutor(self.master)
103
104 def sig_prompt_onekey(self, sender, prompt, keys, callback, args=()):
105 """
106 Keys are a set of (word, key) tuples. The appropriate key in the
107 word is highlighted.
108 """
109 signals.focus.send(self, section="footer")
110 prompt = [prompt, " ("]
111 mkup = []
112 for i, e in enumerate(keys):
113 mkup.extend(common.highlight_key(e[0], e[1]))
114 if i < len(keys) - 1:
115 mkup.append(",")
116 prompt.extend(mkup)
117 prompt.append(")? ")
118 self.onekey = set(i[1] for i in keys)
119 self._w = urwid.Edit(prompt, "")
120 self.prompting = PromptStub(callback, args)
121
122 def selectable(self):
123 return True
124
125 def keypress(self, size, k):
126 if self.prompting:
127 if k == "esc":
128 self.prompt_done()
129 elif self.onekey:
130 if k == "enter":
131 self.prompt_done()
132 elif k in self.onekey:
133 self.prompt_execute(k)
134 elif k == "enter":
135 self.prompt_execute(self._w.get_edit_text())
136 else:
137 if common.is_keypress(k):
138 self._w.keypress(size, k)
139 else:
140 return k
141
142 def clear(self):
143 self._w = urwid.Text("")
144 self.prompting = None
145
146 def prompt_done(self):
147 self.prompting = None
148 self.onekey = False
149 signals.status_message.send(message="")
150 signals.focus.send(self, section="body")
151
152 def prompt_execute(self, txt):
153 p = self.prompting
154 self.prompt_done()
155 msg = p(txt)
156 if msg:
157 signals.status_message.send(message=msg, expire=1)
158
159
160 class StatusBar(urwid.WidgetWrap):
161 keyctx = ""
162
163 def __init__(
164 self, master: "mitmproxy.tools.console.master.ConsoleMaster"
165 ) -> None:
166 self.master = master
167 self.ib = urwid.WidgetWrap(urwid.Text(""))
168 self.ab = ActionBar(self.master)
169 super().__init__(urwid.Pile([self.ib, self.ab]))
170 signals.update_settings.connect(self.sig_update)
171 signals.flowlist_change.connect(self.sig_update)
172 master.options.changed.connect(self.sig_update)
173 master.view.focus.sig_change.connect(self.sig_update)
174 master.view.sig_view_add.connect(self.sig_update)
175 self.redraw()
176
177 def sig_update(self, sender, flow=None, updated=None):
178 self.redraw()
179
180 def keypress(self, *args, **kwargs):
181 return self.ab.keypress(*args, **kwargs)
182
183 def get_status(self):
184 r = []
185
186 sreplay = self.master.addons.get("serverplayback")
187 creplay = self.master.addons.get("clientplayback")
188
189 if len(self.master.options.setheaders):
190 r.append("[")
191 r.append(("heading_key", "H"))
192 r.append("eaders]")
193 if len(self.master.options.replacements):
194 r.append("[")
195 r.append(("heading_key", "R"))
196 r.append("eplacing]")
197 if creplay.count():
198 r.append("[")
199 r.append(("heading_key", "cplayback"))
200 r.append(":%s]" % creplay.count())
201 if sreplay.count():
202 r.append("[")
203 r.append(("heading_key", "splayback"))
204 r.append(":%s]" % sreplay.count())
205 if self.master.options.ignore_hosts:
206 r.append("[")
207 r.append(("heading_key", "I"))
208 r.append("gnore:%d]" % len(self.master.options.ignore_hosts))
209 if self.master.options.tcp_hosts:
210 r.append("[")
211 r.append(("heading_key", "T"))
212 r.append("CP:%d]" % len(self.master.options.tcp_hosts))
213 if self.master.options.intercept:
214 r.append("[")
215 if not self.master.options.intercept_active:
216 r.append("X")
217 r.append(("heading_key", "i"))
218 r.append(":%s]" % self.master.options.intercept)
219 if self.master.options.view_filter:
220 r.append("[")
221 r.append(("heading_key", "f"))
222 r.append(":%s]" % self.master.options.view_filter)
223 if self.master.options.stickycookie:
224 r.append("[")
225 r.append(("heading_key", "t"))
226 r.append(":%s]" % self.master.options.stickycookie)
227 if self.master.options.stickyauth:
228 r.append("[")
229 r.append(("heading_key", "u"))
230 r.append(":%s]" % self.master.options.stickyauth)
231 if self.master.options.console_default_contentview != 'auto':
232 r.append("[contentview:%s]" % (self.master.options.console_default_contentview))
233 if self.master.options.has_changed("view_order"):
234 r.append("[")
235 r.append(("heading_key", "o"))
236 r.append(":%s]" % self.master.options.view_order)
237
238 opts = []
239 if self.master.options.anticache:
240 opts.append("anticache")
241 if self.master.options.anticomp:
242 opts.append("anticomp")
243 if self.master.options.showhost:
244 opts.append("showhost")
245 if not self.master.options.server_replay_refresh:
246 opts.append("norefresh")
247 if self.master.options.server_replay_kill_extra:
248 opts.append("killextra")
249 if not self.master.options.upstream_cert:
250 opts.append("no-upstream-cert")
251 if self.master.options.console_focus_follow:
252 opts.append("following")
253 if self.master.options.stream_large_bodies:
254 opts.append(self.master.options.stream_large_bodies)
255
256 if opts:
257 r.append("[%s]" % (":".join(opts)))
258
259 if self.master.options.mode != "regular":
260 r.append("[%s]" % self.master.options.mode)
261 if self.master.options.scripts:
262 r.append("[scripts:%s]" % len(self.master.options.scripts))
263
264 if self.master.options.save_stream_file:
265 r.append("[W:%s]" % self.master.options.save_stream_file)
266
267 return r
268
269 def redraw(self):
270 fc = len(self.master.view)
271 if self.master.view.focus.flow is None:
272 offset = 0
273 else:
274 offset = self.master.view.focus.index + 1
275
276 if self.master.options.view_order_reversed:
277 arrow = common.SYMBOL_UP
278 else:
279 arrow = common.SYMBOL_DOWN
280
281 marked = ""
282 if self.master.view.show_marked:
283 marked = "M"
284
285 t = [
286 ('heading', ("%s %s [%s/%s]" % (arrow, marked, offset, fc)).ljust(11)),
287 ]
288
289 if self.master.options.server:
290 host = self.master.options.listen_host
291 if host == "0.0.0.0" or host == "":
292 host = "*"
293 boundaddr = "[%s:%s]" % (host, self.master.options.listen_port)
294 else:
295 boundaddr = ""
296 t.extend(self.get_status())
297 status = urwid.AttrWrap(urwid.Columns([
298 urwid.Text(t),
299 urwid.Text(boundaddr, align="right"),
300 ]), "heading")
301 self.ib._w = status
302
303 def selectable(self):
304 return True
305
[end of mitmproxy/tools/console/statusbar.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mitmproxy/tools/console/statusbar.py b/mitmproxy/tools/console/statusbar.py
--- a/mitmproxy/tools/console/statusbar.py
+++ b/mitmproxy/tools/console/statusbar.py
@@ -191,9 +191,7 @@
r.append(("heading_key", "H"))
r.append("eaders]")
if len(self.master.options.replacements):
- r.append("[")
- r.append(("heading_key", "R"))
- r.append("eplacing]")
+ r.append("[%d replacements]" % len(self.master.options.replacements))
if creplay.count():
r.append("[")
r.append(("heading_key", "cplayback"))
|
{"golden_diff": "diff --git a/mitmproxy/tools/console/statusbar.py b/mitmproxy/tools/console/statusbar.py\n--- a/mitmproxy/tools/console/statusbar.py\n+++ b/mitmproxy/tools/console/statusbar.py\n@@ -191,9 +191,7 @@\n r.append((\"heading_key\", \"H\"))\n r.append(\"eaders]\")\n if len(self.master.options.replacements):\n- r.append(\"[\")\n- r.append((\"heading_key\", \"R\"))\n- r.append(\"eplacing]\")\n+ r.append(\"[%d replacements]\" % len(self.master.options.replacements))\n if creplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", \"cplayback\"))\n", "issue": "No shortcut for replacements editor\n##### Steps to reproduce the problem:\r\n\r\n1. run `mitmproxy`\r\n2. type `O` to open the options editor\r\n3. move the cursor down to `replacements` and press Enter\r\n4. type `a` then `/~s/foo/bar` to add a replacement, and press Esc to commit\r\n5. type `q` and again `q` to return to the flows list\r\n6. the status bar now says \u201c[Replacing]\u201d with the \u2018R\u2019 highlighted, as if it were a shortcut\r\n7. however, typing `R` doesn\u2019t do anything\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nIt seems like `R` was intended to be a shortcut for the replacements editor (which would be very convenient), but left out. It\u2019s not listed in the online help, either.\r\n\r\nIf it wasn\u2019t intended to be a shortcut, it shouldn\u2019t be highlighted in the status bar.\r\n\r\n##### System information\r\n\r\nMitmproxy: 3.0.3 binary\r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.1.0g 2 Nov 2017\r\nPlatform: Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid\n", "before_files": [{"content": "import os.path\n\nimport urwid\n\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import signals\nfrom mitmproxy.tools.console import commandexecutor\nimport mitmproxy.tools.console.master # noqa\nfrom mitmproxy.tools.console.commander import commander\n\n\nclass PromptPath:\n def __init__(self, callback, args):\n self.callback, self.args = callback, args\n\n def __call__(self, pth):\n if not pth:\n return\n pth = os.path.expanduser(pth)\n try:\n return self.callback(pth, *self.args)\n except IOError as v:\n signals.status_message.send(message=v.strerror)\n\n\nclass PromptStub:\n def __init__(self, callback, args):\n self.callback, self.args = callback, args\n\n def __call__(self, txt):\n return self.callback(txt, *self.args)\n\n\nclass ActionBar(urwid.WidgetWrap):\n\n def __init__(self, master):\n self.master = master\n urwid.WidgetWrap.__init__(self, None)\n self.clear()\n signals.status_message.connect(self.sig_message)\n signals.status_prompt.connect(self.sig_prompt)\n signals.status_prompt_onekey.connect(self.sig_prompt_onekey)\n signals.status_prompt_command.connect(self.sig_prompt_command)\n\n self.prompting = None\n\n self.onekey = False\n\n def sig_message(self, sender, message, expire=1):\n if self.prompting:\n return\n cols, _ = self.master.ui.get_cols_rows()\n w = urwid.Text(self.shorten_message(message, cols))\n self._w = w\n if expire:\n def cb(*args):\n if w == self._w:\n self.clear()\n signals.call_in.send(seconds=expire, callback=cb)\n\n def prep_prompt(self, p):\n return p.strip() + \": \"\n\n def shorten_message(self, msg, max_width):\n \"\"\"\n Shorten message so that it fits into a single line in the statusbar.\n \"\"\"\n if isinstance(msg, tuple):\n disp_attr, msg_text = msg\n elif isinstance(msg, str):\n disp_attr, msg_text = None, msg\n else:\n return msg\n msg_end = \"\\u2026\" # unicode ellipsis for the end of shortened 
message\n prompt = \"(more in eventlog)\"\n\n msg_lines = msg_text.split(\"\\n\")\n first_line = msg_lines[0]\n if len(msg_lines) > 1:\n # First line of messages with a few lines must end with prompt.\n line_length = len(first_line) + len(prompt)\n else:\n line_length = len(first_line)\n\n if line_length > max_width:\n shortening_index = max(0, max_width - len(prompt) - len(msg_end))\n first_line = first_line[:shortening_index] + msg_end\n else:\n if len(msg_lines) == 1:\n prompt = \"\"\n\n return [(disp_attr, first_line), (\"warn\", prompt)]\n\n def sig_prompt(self, sender, prompt, text, callback, args=()):\n signals.focus.send(self, section=\"footer\")\n self._w = urwid.Edit(self.prep_prompt(prompt), text or \"\")\n self.prompting = PromptStub(callback, args)\n\n def sig_prompt_command(self, sender, partial=\"\"):\n signals.focus.send(self, section=\"footer\")\n self._w = commander.CommandEdit(self.master, partial)\n self.prompting = commandexecutor.CommandExecutor(self.master)\n\n def sig_prompt_onekey(self, sender, prompt, keys, callback, args=()):\n \"\"\"\n Keys are a set of (word, key) tuples. The appropriate key in the\n word is highlighted.\n \"\"\"\n signals.focus.send(self, section=\"footer\")\n prompt = [prompt, \" (\"]\n mkup = []\n for i, e in enumerate(keys):\n mkup.extend(common.highlight_key(e[0], e[1]))\n if i < len(keys) - 1:\n mkup.append(\",\")\n prompt.extend(mkup)\n prompt.append(\")? \")\n self.onekey = set(i[1] for i in keys)\n self._w = urwid.Edit(prompt, \"\")\n self.prompting = PromptStub(callback, args)\n\n def selectable(self):\n return True\n\n def keypress(self, size, k):\n if self.prompting:\n if k == \"esc\":\n self.prompt_done()\n elif self.onekey:\n if k == \"enter\":\n self.prompt_done()\n elif k in self.onekey:\n self.prompt_execute(k)\n elif k == \"enter\":\n self.prompt_execute(self._w.get_edit_text())\n else:\n if common.is_keypress(k):\n self._w.keypress(size, k)\n else:\n return k\n\n def clear(self):\n self._w = urwid.Text(\"\")\n self.prompting = None\n\n def prompt_done(self):\n self.prompting = None\n self.onekey = False\n signals.status_message.send(message=\"\")\n signals.focus.send(self, section=\"body\")\n\n def prompt_execute(self, txt):\n p = self.prompting\n self.prompt_done()\n msg = p(txt)\n if msg:\n signals.status_message.send(message=msg, expire=1)\n\n\nclass StatusBar(urwid.WidgetWrap):\n keyctx = \"\"\n\n def __init__(\n self, master: \"mitmproxy.tools.console.master.ConsoleMaster\"\n ) -> None:\n self.master = master\n self.ib = urwid.WidgetWrap(urwid.Text(\"\"))\n self.ab = ActionBar(self.master)\n super().__init__(urwid.Pile([self.ib, self.ab]))\n signals.update_settings.connect(self.sig_update)\n signals.flowlist_change.connect(self.sig_update)\n master.options.changed.connect(self.sig_update)\n master.view.focus.sig_change.connect(self.sig_update)\n master.view.sig_view_add.connect(self.sig_update)\n self.redraw()\n\n def sig_update(self, sender, flow=None, updated=None):\n self.redraw()\n\n def keypress(self, *args, **kwargs):\n return self.ab.keypress(*args, **kwargs)\n\n def get_status(self):\n r = []\n\n sreplay = self.master.addons.get(\"serverplayback\")\n creplay = self.master.addons.get(\"clientplayback\")\n\n if len(self.master.options.setheaders):\n r.append(\"[\")\n r.append((\"heading_key\", \"H\"))\n r.append(\"eaders]\")\n if len(self.master.options.replacements):\n r.append(\"[\")\n r.append((\"heading_key\", \"R\"))\n r.append(\"eplacing]\")\n if creplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", 
\"cplayback\"))\n r.append(\":%s]\" % creplay.count())\n if sreplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", \"splayback\"))\n r.append(\":%s]\" % sreplay.count())\n if self.master.options.ignore_hosts:\n r.append(\"[\")\n r.append((\"heading_key\", \"I\"))\n r.append(\"gnore:%d]\" % len(self.master.options.ignore_hosts))\n if self.master.options.tcp_hosts:\n r.append(\"[\")\n r.append((\"heading_key\", \"T\"))\n r.append(\"CP:%d]\" % len(self.master.options.tcp_hosts))\n if self.master.options.intercept:\n r.append(\"[\")\n if not self.master.options.intercept_active:\n r.append(\"X\")\n r.append((\"heading_key\", \"i\"))\n r.append(\":%s]\" % self.master.options.intercept)\n if self.master.options.view_filter:\n r.append(\"[\")\n r.append((\"heading_key\", \"f\"))\n r.append(\":%s]\" % self.master.options.view_filter)\n if self.master.options.stickycookie:\n r.append(\"[\")\n r.append((\"heading_key\", \"t\"))\n r.append(\":%s]\" % self.master.options.stickycookie)\n if self.master.options.stickyauth:\n r.append(\"[\")\n r.append((\"heading_key\", \"u\"))\n r.append(\":%s]\" % self.master.options.stickyauth)\n if self.master.options.console_default_contentview != 'auto':\n r.append(\"[contentview:%s]\" % (self.master.options.console_default_contentview))\n if self.master.options.has_changed(\"view_order\"):\n r.append(\"[\")\n r.append((\"heading_key\", \"o\"))\n r.append(\":%s]\" % self.master.options.view_order)\n\n opts = []\n if self.master.options.anticache:\n opts.append(\"anticache\")\n if self.master.options.anticomp:\n opts.append(\"anticomp\")\n if self.master.options.showhost:\n opts.append(\"showhost\")\n if not self.master.options.server_replay_refresh:\n opts.append(\"norefresh\")\n if self.master.options.server_replay_kill_extra:\n opts.append(\"killextra\")\n if not self.master.options.upstream_cert:\n opts.append(\"no-upstream-cert\")\n if self.master.options.console_focus_follow:\n opts.append(\"following\")\n if self.master.options.stream_large_bodies:\n opts.append(self.master.options.stream_large_bodies)\n\n if opts:\n r.append(\"[%s]\" % (\":\".join(opts)))\n\n if self.master.options.mode != \"regular\":\n r.append(\"[%s]\" % self.master.options.mode)\n if self.master.options.scripts:\n r.append(\"[scripts:%s]\" % len(self.master.options.scripts))\n\n if self.master.options.save_stream_file:\n r.append(\"[W:%s]\" % self.master.options.save_stream_file)\n\n return r\n\n def redraw(self):\n fc = len(self.master.view)\n if self.master.view.focus.flow is None:\n offset = 0\n else:\n offset = self.master.view.focus.index + 1\n\n if self.master.options.view_order_reversed:\n arrow = common.SYMBOL_UP\n else:\n arrow = common.SYMBOL_DOWN\n\n marked = \"\"\n if self.master.view.show_marked:\n marked = \"M\"\n\n t = [\n ('heading', (\"%s %s [%s/%s]\" % (arrow, marked, offset, fc)).ljust(11)),\n ]\n\n if self.master.options.server:\n host = self.master.options.listen_host\n if host == \"0.0.0.0\" or host == \"\":\n host = \"*\"\n boundaddr = \"[%s:%s]\" % (host, self.master.options.listen_port)\n else:\n boundaddr = \"\"\n t.extend(self.get_status())\n status = urwid.AttrWrap(urwid.Columns([\n urwid.Text(t),\n urwid.Text(boundaddr, align=\"right\"),\n ]), \"heading\")\n self.ib._w = status\n\n def selectable(self):\n return True\n", "path": "mitmproxy/tools/console/statusbar.py"}]}
| 3,941 | 148 |
gh_patches_debug_37517
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-1138
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Some words confuse movie name parsing.
Entry title is "dan in real life 2007", parser stops at "real"
```
2016-05-01 09:32 DEBUG imdb_lookup library_movies_cleanup_test lookup for d
an in real life 2007
2016-05-01 09:32 VERBOSE imdb_lookup library_movies_cleanup_test Searching fr
om imdb `dan in real life 2007`
2016-05-01 09:32 DEBUG parser_internal library_movies_cleanup_test Parsing mo
vie: `dan in real life 2007` kwargs: {}
2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test parts: [u'da
n', u'in', u'real', u'life', u'2007'], cut is: real
2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test after parts
check, cut data would be: `dan in` abs_cut: 6
2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test data cut to
`dan in` - this will be the name
2016-05-01 09:32 DEBUG parser_internal library_movies_cleanup_test Parsing re
sult: <MovieParser(name=dan in,year=2007,quality=unknown)> (in 0.92 ms)
2016-05-01 09:32 DEBUG utils.imdb library_movies_cleanup_test smart_match
name=dan in year=2007
2016-05-01 09:32 DEBUG utils.imdb library_movies_cleanup_test Searching: d
an in
```
</issue>
<code>
[start of flexget/utils/titles/movie.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import re
6
7 from flexget.utils.titles.parser import TitleParser
8 from flexget.utils import qualities
9 from flexget.utils.tools import str_to_int
10
11 log = logging.getLogger('movieparser')
12
13
14 def diff_pos(string1, string2):
15 """Returns first position where string1 and string2 differ."""
16 for (count, c) in enumerate(string1):
17 if len(string2) <= count:
18 return count
19 if string2[count] != c:
20 return count
21
22
23 class MovieParser(TitleParser):
24
25 def __init__(self):
26 self.data = None
27 self.reset()
28 TitleParser.__init__(self)
29
30 @property
31 def fields(self):
32 """
33 Return a dict of all parser fields
34 """
35 return {
36 'movie_parser': self,
37 'movie_name': self.name,
38 'movie_year': self.year,
39 'proper': self.proper,
40 'proper_count': self.proper_count
41 }
42
43 @property
44 def valid(self):
45 return True
46
47 @property
48 def proper(self):
49 return self.proper_count > 0
50
51 @property
52 def is_series(self):
53 return False
54
55 @property
56 def is_movie(self):
57 return True
58
59 def reset(self):
60 # parsing results
61 self.name = None
62 self.year = None
63 self.quality = qualities.Quality()
64 self.proper_count = 0
65
66 def __str__(self):
67 return "<MovieParser(name=%s,year=%s,quality=%s)>" % (self.name, self.year, self.quality)
68
69 def parse(self, data=None):
70 """Parse movie name. Populates name, year, quality and proper_count attributes"""
71
72 # Reset before parsing, so the parser can be reused.
73 self.reset()
74
75 if data is None:
76 data = self.data
77
78 # Move anything in leading brackets to the end
79 data = re.sub(r'^\[(.*?)\](.*)', r'\2 \1', data)
80
81 for char in '[]()_,.':
82 data = data.replace(char, ' ')
83
84 # if there are no spaces
85 if data.find(' ') == -1:
86 data = data.replace('-', ' ')
87
88 # remove unwanted words (imax, ..)
89 self.remove_words(data, self.remove)
90
91 data = self.strip_spaces(data)
92
93 # split to parts
94 parts = data.split(' ')
95 cut_part = 256
96 all_caps = True
97 for part_pos, part in enumerate(parts):
98 cut = False
99 # Don't let the first word be cutoff word
100 if part_pos < 1:
101 continue
102 # check for year
103 num = str_to_int(part)
104 if num is not None:
105 if 1930 < num < 2050:
106 self.year = num
107 cut = True
108 # Don't consider all caps words cut words if the whole title has been all caps
109 if not part.isupper():
110 all_caps = False
111 # if length > 3 and whole word in uppers, consider as cut word (most likely a group name)
112 if len(part) > 3 and part.isupper() and part.isalpha() and not all_caps:
113 cut = True
114 # check for cutoff words
115 if part.lower() in self.cutoffs:
116 cut = True
117 # check for propers
118 if part.lower() in self.propers:
119 self.proper_count += 1
120 cut = True
121 # update cut position
122 if cut and parts.index(part) < cut_part:
123 cut_part = part_pos
124
125 if cut_part != 256:
126 log.debug('parts: %s, cut is: %s', parts, parts[cut_part])
127
128 # calculate cut positon from cut_part
129 abs_cut = len(' '.join(parts[:cut_part]))
130
131 log.debug('after parts check, cut data would be: `%s` abs_cut: %i', data[:abs_cut], abs_cut)
132
133 # parse quality
134 quality = qualities.Quality(data)
135 if quality:
136 self.quality = quality
137 # remaining string is same as data but quality information removed
138 # find out position where there is first difference, this is earliest
139 # quality bit, anything after that has no relevance to the movie name
140 dp = diff_pos(data, quality.clean_text)
141 if dp is not None:
142 log.debug('quality start: %s', dp)
143 if dp < abs_cut:
144 log.debug('quality cut is even shorter')
145 abs_cut = dp
146
147 # make cut
148 data = data[:abs_cut].strip()
149 log.debug('data cut to `%s` - this will be the name', data)
150
151 # save results
152 self.name = data
153
[end of flexget/utils/titles/movie.py]
[start of flexget/utils/titles/parser.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # pylint: disable=unused-import, redefined-builtin
3
4 import re
5
6
7 class TitleParser(object):
8 propers = ['proper', 'repack', 'rerip', 'real', 'final']
9
10 specials = ['special', 'bonus', 'extra', 'omake', 'ova']
11
12 editions = ['dc', 'extended', 'uncut', 'remastered', 'unrated', 'theatrical', 'chrono', 'se']
13
14 # TODO: All of the quality related keywords can probably be removed from here, as the quality module handles them
15 codecs = ['x264', 'x.264', 'h264', 'h.264', 'XViD']
16
17 # lowercase required
18 cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',
19 'directors', 'director\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions
20
21 remove = ['imax']
22
23 sounds = ['AC3', 'DD5.1', 'DTS']
24
25 @staticmethod
26 def re_not_in_word(regexp):
27 return r'(?<![^\W_])' + regexp + r'(?![^\W_])'
28
29 @staticmethod
30 def strip_spaces(text):
31 """Removes all unnecessary duplicate spaces from a text"""
32 return ' '.join(text.split())
33
34 @staticmethod
35 def remove_words(text, words, not_in_word=False):
36 """Clean all given :words: from :text: case insensitively"""
37 for word in words:
38 text = TitleParser.ireplace(text, word, '', not_in_word=not_in_word)
39 # remove duplicate spaces
40 text = ' '.join(text.split())
41 return text
42
43 @staticmethod
44 def ireplace(data, old, new, count=0, not_in_word=False):
45 """Case insensitive string replace"""
46 old = re.escape(old)
47 if not_in_word:
48 old = TitleParser.re_not_in_word(old)
49 pattern = re.compile(old, re.I)
50 return re.sub(pattern, new, data, count)
51
[end of flexget/utils/titles/parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flexget/utils/titles/movie.py b/flexget/utils/titles/movie.py
--- a/flexget/utils/titles/movie.py
+++ b/flexget/utils/titles/movie.py
@@ -7,6 +7,7 @@
from flexget.utils.titles.parser import TitleParser
from flexget.utils import qualities
from flexget.utils.tools import str_to_int
+from datetime import datetime
log = logging.getLogger('movieparser')
@@ -60,6 +61,7 @@
# parsing results
self.name = None
self.year = None
+ self.year_pos = None
self.quality = qualities.Quality()
self.proper_count = 0
@@ -102,8 +104,13 @@
# check for year
num = str_to_int(part)
if num is not None:
- if 1930 < num < 2050:
+ if 1930 < num <= datetime.now().year:
+ if self.year_pos == cut_part:
+ # Looks like a year, but we already set the cutpoint to a year, let's move it forward
+ cut_part = part_pos
+
self.year = num
+ self.year_pos = part_pos
cut = True
# Don't consider all caps words cut words if the whole title has been all caps
if not part.isupper():
@@ -116,8 +123,10 @@
cut = True
# check for propers
if part.lower() in self.propers:
- self.proper_count += 1
- cut = True
+ # 'real' and 'final' are too common in movie titles, only cut if it comes after year
+ if part.lower() not in ['real', 'final'] or self.year:
+ self.proper_count += 1
+ cut = True
# update cut position
if cut and parts.index(part) < cut_part:
cut_part = part_pos
diff --git a/flexget/utils/titles/parser.py b/flexget/utils/titles/parser.py
--- a/flexget/utils/titles/parser.py
+++ b/flexget/utils/titles/parser.py
@@ -16,7 +16,7 @@
# lowercase required
cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',
- 'directors', 'director\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions
+ 'directors', 'director\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + specials + editions
remove = ['imax']
|
{"golden_diff": "diff --git a/flexget/utils/titles/movie.py b/flexget/utils/titles/movie.py\n--- a/flexget/utils/titles/movie.py\n+++ b/flexget/utils/titles/movie.py\n@@ -7,6 +7,7 @@\n from flexget.utils.titles.parser import TitleParser\n from flexget.utils import qualities\n from flexget.utils.tools import str_to_int\n+from datetime import datetime\n \n log = logging.getLogger('movieparser')\n \n@@ -60,6 +61,7 @@\n # parsing results\n self.name = None\n self.year = None\n+ self.year_pos = None\n self.quality = qualities.Quality()\n self.proper_count = 0\n \n@@ -102,8 +104,13 @@\n # check for year\n num = str_to_int(part)\n if num is not None:\n- if 1930 < num < 2050:\n+ if 1930 < num <= datetime.now().year:\n+ if self.year_pos == cut_part:\n+ # Looks like a year, but we already set the cutpoint to a year, let's move it forward\n+ cut_part = part_pos\n+ \n self.year = num\n+ self.year_pos = part_pos\n cut = True\n # Don't consider all caps words cut words if the whole title has been all caps\n if not part.isupper():\n@@ -116,8 +123,10 @@\n cut = True\n # check for propers\n if part.lower() in self.propers:\n- self.proper_count += 1\n- cut = True\n+ # 'real' and 'final' are too common in movie titles, only cut if it comes after year\n+ if part.lower() not in ['real', 'final'] or self.year:\n+ self.proper_count += 1\n+ cut = True\n # update cut position\n if cut and parts.index(part) < cut_part:\n cut_part = part_pos\ndiff --git a/flexget/utils/titles/parser.py b/flexget/utils/titles/parser.py\n--- a/flexget/utils/titles/parser.py\n+++ b/flexget/utils/titles/parser.py\n@@ -16,7 +16,7 @@\n \n # lowercase required\n cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',\n- 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions\n+ 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + specials + editions\n \n remove = ['imax']\n", "issue": "Some words confuse movie name parsing.\nEntry title is \"dan in real life 2007\", parser stops at \"real\"\n\n```\n2016-05-01 09:32 DEBUG imdb_lookup library_movies_cleanup_test lookup for d\nan in real life 2007\n2016-05-01 09:32 VERBOSE imdb_lookup library_movies_cleanup_test Searching fr\nom imdb `dan in real life 2007`\n2016-05-01 09:32 DEBUG parser_internal library_movies_cleanup_test Parsing mo\nvie: `dan in real life 2007` kwargs: {}\n2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test parts: [u'da\nn', u'in', u'real', u'life', u'2007'], cut is: real\n2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test after parts \ncheck, cut data would be: `dan in` abs_cut: 6\n2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test data cut to \n`dan in` - this will be the name\n2016-05-01 09:32 DEBUG parser_internal library_movies_cleanup_test Parsing re\nsult: <MovieParser(name=dan in,year=2007,quality=unknown)> (in 0.92 ms)\n2016-05-01 09:32 DEBUG utils.imdb library_movies_cleanup_test smart_match \nname=dan in year=2007\n2016-05-01 09:32 DEBUG utils.imdb library_movies_cleanup_test Searching: d\nan in\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport re\n\nfrom flexget.utils.titles.parser import TitleParser\nfrom flexget.utils import qualities\nfrom flexget.utils.tools import str_to_int\n\nlog = logging.getLogger('movieparser')\n\n\ndef diff_pos(string1, 
string2):\n \"\"\"Returns first position where string1 and string2 differ.\"\"\"\n for (count, c) in enumerate(string1):\n if len(string2) <= count:\n return count\n if string2[count] != c:\n return count\n\n\nclass MovieParser(TitleParser):\n\n def __init__(self):\n self.data = None\n self.reset()\n TitleParser.__init__(self)\n\n @property\n def fields(self):\n \"\"\"\n Return a dict of all parser fields\n \"\"\"\n return {\n 'movie_parser': self,\n 'movie_name': self.name,\n 'movie_year': self.year,\n 'proper': self.proper,\n 'proper_count': self.proper_count\n }\n\n @property\n def valid(self):\n return True\n\n @property\n def proper(self):\n return self.proper_count > 0\n\n @property\n def is_series(self):\n return False\n\n @property\n def is_movie(self):\n return True\n\n def reset(self):\n # parsing results\n self.name = None\n self.year = None\n self.quality = qualities.Quality()\n self.proper_count = 0\n\n def __str__(self):\n return \"<MovieParser(name=%s,year=%s,quality=%s)>\" % (self.name, self.year, self.quality)\n\n def parse(self, data=None):\n \"\"\"Parse movie name. Populates name, year, quality and proper_count attributes\"\"\"\n\n # Reset before parsing, so the parser can be reused.\n self.reset()\n\n if data is None:\n data = self.data\n\n # Move anything in leading brackets to the end\n data = re.sub(r'^\\[(.*?)\\](.*)', r'\\2 \\1', data)\n\n for char in '[]()_,.':\n data = data.replace(char, ' ')\n\n # if there are no spaces\n if data.find(' ') == -1:\n data = data.replace('-', ' ')\n\n # remove unwanted words (imax, ..)\n self.remove_words(data, self.remove)\n\n data = self.strip_spaces(data)\n\n # split to parts\n parts = data.split(' ')\n cut_part = 256\n all_caps = True\n for part_pos, part in enumerate(parts):\n cut = False\n # Don't let the first word be cutoff word\n if part_pos < 1:\n continue\n # check for year\n num = str_to_int(part)\n if num is not None:\n if 1930 < num < 2050:\n self.year = num\n cut = True\n # Don't consider all caps words cut words if the whole title has been all caps\n if not part.isupper():\n all_caps = False\n # if length > 3 and whole word in uppers, consider as cut word (most likely a group name)\n if len(part) > 3 and part.isupper() and part.isalpha() and not all_caps:\n cut = True\n # check for cutoff words\n if part.lower() in self.cutoffs:\n cut = True\n # check for propers\n if part.lower() in self.propers:\n self.proper_count += 1\n cut = True\n # update cut position\n if cut and parts.index(part) < cut_part:\n cut_part = part_pos\n\n if cut_part != 256:\n log.debug('parts: %s, cut is: %s', parts, parts[cut_part])\n\n # calculate cut positon from cut_part\n abs_cut = len(' '.join(parts[:cut_part]))\n\n log.debug('after parts check, cut data would be: `%s` abs_cut: %i', data[:abs_cut], abs_cut)\n\n # parse quality\n quality = qualities.Quality(data)\n if quality:\n self.quality = quality\n # remaining string is same as data but quality information removed\n # find out position where there is first difference, this is earliest\n # quality bit, anything after that has no relevance to the movie name\n dp = diff_pos(data, quality.clean_text)\n if dp is not None:\n log.debug('quality start: %s', dp)\n if dp < abs_cut:\n log.debug('quality cut is even shorter')\n abs_cut = dp\n\n # make cut\n data = data[:abs_cut].strip()\n log.debug('data cut to `%s` - this will be the name', data)\n\n # save results\n self.name = data\n", "path": "flexget/utils/titles/movie.py"}, {"content": "from __future__ import unicode_literals, division, 
absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport re\n\n\nclass TitleParser(object):\n propers = ['proper', 'repack', 'rerip', 'real', 'final']\n\n specials = ['special', 'bonus', 'extra', 'omake', 'ova']\n\n editions = ['dc', 'extended', 'uncut', 'remastered', 'unrated', 'theatrical', 'chrono', 'se']\n\n # TODO: All of the quality related keywords can probably be removed from here, as the quality module handles them\n codecs = ['x264', 'x.264', 'h264', 'h.264', 'XViD']\n\n # lowercase required\n cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',\n 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions\n\n remove = ['imax']\n\n sounds = ['AC3', 'DD5.1', 'DTS']\n\n @staticmethod\n def re_not_in_word(regexp):\n return r'(?<![^\\W_])' + regexp + r'(?![^\\W_])'\n\n @staticmethod\n def strip_spaces(text):\n \"\"\"Removes all unnecessary duplicate spaces from a text\"\"\"\n return ' '.join(text.split())\n\n @staticmethod\n def remove_words(text, words, not_in_word=False):\n \"\"\"Clean all given :words: from :text: case insensitively\"\"\"\n for word in words:\n text = TitleParser.ireplace(text, word, '', not_in_word=not_in_word)\n # remove duplicate spaces\n text = ' '.join(text.split())\n return text\n\n @staticmethod\n def ireplace(data, old, new, count=0, not_in_word=False):\n \"\"\"Case insensitive string replace\"\"\"\n old = re.escape(old)\n if not_in_word:\n old = TitleParser.re_not_in_word(old)\n pattern = re.compile(old, re.I)\n return re.sub(pattern, new, data, count)\n", "path": "flexget/utils/titles/parser.py"}]}
| 3,040 | 638 |
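The FlexGet record above patches `MovieParser` so that a token only counts as a release year when it parses as an integer in the range (1930, current year], and so that `real`/`final` only count as proper tags once a year has been seen. A minimal standalone sketch of just the year rule — the helper name below is illustrative and not part of FlexGet — behaves like this:

```python
from datetime import datetime

def looks_like_year(token):
    """Accept a token as a release year only when it is an integer strictly
    greater than 1930 and no later than the current year, mirroring the
    patched bound in MovieParser (illustrative helper, not FlexGet code)."""
    try:
        num = int(token)
    except ValueError:
        return False
    return 1930 < num <= datetime.now().year

print(looks_like_year("2007"))  # True  -> "dan in real life 2007" keeps its year
print(looks_like_year("2050"))  # False -> future years no longer cut the title
print(looks_like_year("real"))  # False -> plain words never match
```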
gh_patches_debug_15929
|
rasdani/github-patches
|
git_diff
|
microsoft__DeepSpeed-4405
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[REQUEST] Add timeout as entry-point option or environment variable
**Is your feature request related to a problem? Please describe.**
I am using Hugging Face `transformers` for my deep learning, and it has a nice option to restrict specific processing to the main process only. This is useful if a function caches the result: the main process does the processing while the other processes wait, and when main is done, the other processes can just load from the cache. That's pretty neat.
The problem arises when these are long running processes. In distributed environment (torch or deepspeed, for instance), the communication between processes has a default timeout. If no communication has occurred for `timeout` seconds, the whole program will exit.
**Describe the solution you'd like**
Both [`torch`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) and [`deepspeed`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) provide options in the Python init methods to set the timeout parameter to a higher value than the default 30 minutes, but this option is not available from the command-line or through an environment variable, which is what I would like.
**Describe alternatives you've considered**
I could make a custom fork but I think that this is something that more people might need as soon as they scale to larger projects.
**Additional context**
I can work on this, depending on what you suggest as a solution (CLI argument for the `deepspeed` command or as environment variable).
</issue>
<code>
[start of deepspeed/constants.py]
1 # Copyright (c) Microsoft Corporation.
2 # SPDX-License-Identifier: Apache-2.0
3
4 # DeepSpeed Team
5
6 from datetime import timedelta
7
8 #############################################
9 # Torch distributed constants
10 #############################################
11 TORCH_DISTRIBUTED_DEFAULT_PORT = 29500
12
13 # Default process group wide timeout, if applicable.
14 # This only applies to the gloo and nccl backends
15 # (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).
16 # To make an attempt at backwards compatibility with THD, we use an
17 # extraordinarily high default timeout, given that THD did not have timeouts.
18 default_pg_timeout = timedelta(minutes=30)
19 INFERENCE_GENERIC_MODE = 'generic'
20 INFERENCE_SPECIALIZED_MODE = 'specialized'
21
[end of deepspeed/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deepspeed/constants.py b/deepspeed/constants.py
--- a/deepspeed/constants.py
+++ b/deepspeed/constants.py
@@ -3,6 +3,7 @@
# DeepSpeed Team
+import os
from datetime import timedelta
#############################################
@@ -15,6 +16,6 @@
# (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).
# To make an attempt at backwards compatibility with THD, we use an
# extraordinarily high default timeout, given that THD did not have timeouts.
-default_pg_timeout = timedelta(minutes=30)
+default_pg_timeout = timedelta(minutes=int(os.getenv("DEEPSPEED_TIMEOUT", default=30)))
INFERENCE_GENERIC_MODE = 'generic'
INFERENCE_SPECIALIZED_MODE = 'specialized'
|
{"golden_diff": "diff --git a/deepspeed/constants.py b/deepspeed/constants.py\n--- a/deepspeed/constants.py\n+++ b/deepspeed/constants.py\n@@ -3,6 +3,7 @@\n \n # DeepSpeed Team\n \n+import os\n from datetime import timedelta\n \n #############################################\n@@ -15,6 +16,6 @@\n # (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).\n # To make an attempt at backwards compatibility with THD, we use an\n # extraordinarily high default timeout, given that THD did not have timeouts.\n-default_pg_timeout = timedelta(minutes=30)\n+default_pg_timeout = timedelta(minutes=int(os.getenv(\"DEEPSPEED_TIMEOUT\", default=30)))\n INFERENCE_GENERIC_MODE = 'generic'\n INFERENCE_SPECIALIZED_MODE = 'specialized'\n", "issue": "[REQUEST] Add timeout as entry-point option or environment variable\n**Is your feature request related to a problem? Please describe.**\r\nI am using Hugging Face `transformers` for my deep learning, and it has a nice option to restrict specific processing to the main process only. This is useful if a function caches the result: the main process does the processing while the other processes wait, and when main is done, the other processes can just load from the cache. That's pretty neat.\r\n\r\nThe problem arises when these are long running processes. In distributed environment (torch or deepspeed, for instance), the communication between processes has a default timeout. If no communication has occurred for `timeout` seconds, the whole program will exit. \r\n\r\n**Describe the solution you'd like**\r\n\r\nBoth [`torch`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) and [`deepspeed`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) provide options in the Python init methods to set the timeout parameter to a higher value than the default 30 minutes, but this option is not available from the command-line or through an environment, which is what I would like.\r\n\r\n**Describe alternatives you've considered**\r\nI could make a custom fork but I think that this is something that more people might need as soon as they scale to larger projects.\r\n\r\n**Additional context**\r\n\r\nI can work on this, depending on what you suggest as a solution (CLI argument for the `deepspeed` command or as environment variable).\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# SPDX-License-Identifier: Apache-2.0\n\n# DeepSpeed Team\n\nfrom datetime import timedelta\n\n#############################################\n# Torch distributed constants\n#############################################\nTORCH_DISTRIBUTED_DEFAULT_PORT = 29500\n\n# Default process group wide timeout, if applicable.\n# This only applies to the gloo and nccl backends\n# (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).\n# To make an attempt at backwards compatibility with THD, we use an\n# extraordinarily high default timeout, given that THD did not have timeouts.\ndefault_pg_timeout = timedelta(minutes=30)\nINFERENCE_GENERIC_MODE = 'generic'\nINFERENCE_SPECIALIZED_MODE = 'specialized'\n", "path": "deepspeed/constants.py"}]}
| 1,052 | 174 |
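The golden diff above makes the default process-group timeout configurable through a `DEEPSPEED_TIMEOUT` environment variable expressed in minutes. A self-contained sketch of the same pattern, without importing DeepSpeed and assuming the variable holds an integer number of minutes:

```python
import os
from datetime import timedelta

# Mirrors the patched constant: fall back to the old 30-minute default when
# DEEPSPEED_TIMEOUT is unset, otherwise honour the user-supplied minutes.
default_pg_timeout = timedelta(minutes=int(os.getenv("DEEPSPEED_TIMEOUT", default=30)))

print(default_pg_timeout)  # 0:30:00 by default, 2:00:00 if DEEPSPEED_TIMEOUT=120 is exported
```

In practice the variable would be exported before launching, e.g. `DEEPSPEED_TIMEOUT=120 deepspeed train.py`; the launcher invocation here is only an example.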
gh_patches_debug_6053
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-3123
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Double import
I noticed that in `networkx/algorithms/__init__.py` the statement `from networkx.algorithms.triads import *` occurs twice. Is there any reason for this or is this just a blunder?
</issue>
<code>
[start of networkx/algorithms/__init__.py]
1 from networkx.algorithms.assortativity import *
2 from networkx.algorithms.boundary import *
3 from networkx.algorithms.bridges import *
4 from networkx.algorithms.chains import *
5 from networkx.algorithms.centrality import *
6 from networkx.algorithms.chordal import *
7 from networkx.algorithms.cluster import *
8 from networkx.algorithms.clique import *
9 from networkx.algorithms.communicability_alg import *
10 from networkx.algorithms.components import *
11 from networkx.algorithms.coloring import *
12 from networkx.algorithms.core import *
13 from networkx.algorithms.covering import *
14 from networkx.algorithms.cycles import *
15 from networkx.algorithms.cuts import *
16 from networkx.algorithms.dag import *
17 from networkx.algorithms.distance_measures import *
18 from networkx.algorithms.distance_regular import *
19 from networkx.algorithms.dominance import *
20 from networkx.algorithms.dominating import *
21 from networkx.algorithms.efficiency import *
22 from networkx.algorithms.euler import *
23 from networkx.algorithms.graphical import *
24 from networkx.algorithms.hierarchy import *
25 from networkx.algorithms.hybrid import *
26 from networkx.algorithms.link_analysis import *
27 from networkx.algorithms.link_prediction import *
28 from networkx.algorithms.lowest_common_ancestors import *
29 from networkx.algorithms.isolate import *
30 from networkx.algorithms.matching import *
31 from networkx.algorithms.minors import *
32 from networkx.algorithms.mis import *
33 from networkx.algorithms.operators import *
34 from networkx.algorithms.planarity import *
35 from networkx.algorithms.reciprocity import *
36 from networkx.algorithms.richclub import *
37 from networkx.algorithms.shortest_paths import *
38 from networkx.algorithms.similarity import *
39 from networkx.algorithms.simple_paths import *
40 from networkx.algorithms.smallworld import *
41 from networkx.algorithms.smetric import *
42 from networkx.algorithms.structuralholes import *
43 from networkx.algorithms.triads import *
44 from networkx.algorithms.sparsifiers import *
45 from networkx.algorithms.swap import *
46 from networkx.algorithms.traversal import *
47 from networkx.algorithms.triads import *
48 from networkx.algorithms.vitality import *
49 from networkx.algorithms.voronoi import *
50 from networkx.algorithms.wiener import *
51
52 # Make certain subpackages available to the user as direct imports from
53 # the `networkx` namespace.
54 import networkx.algorithms.assortativity
55 import networkx.algorithms.bipartite
56 import networkx.algorithms.node_classification
57 import networkx.algorithms.centrality
58 import networkx.algorithms.chordal
59 import networkx.algorithms.cluster
60 import networkx.algorithms.clique
61 import networkx.algorithms.components
62 import networkx.algorithms.connectivity
63 import networkx.algorithms.community
64 import networkx.algorithms.coloring
65 import networkx.algorithms.flow
66 import networkx.algorithms.isomorphism
67 import networkx.algorithms.link_analysis
68 import networkx.algorithms.lowest_common_ancestors
69 import networkx.algorithms.operators
70 import networkx.algorithms.shortest_paths
71 import networkx.algorithms.tournament
72 import networkx.algorithms.traversal
73 import networkx.algorithms.tree
74
75 # Make certain functions from some of the previous subpackages available
76 # to the user as direct imports from the `networkx` namespace.
77 from networkx.algorithms.bipartite import complete_bipartite_graph
78 from networkx.algorithms.bipartite import is_bipartite
79 from networkx.algorithms.bipartite import project
80 from networkx.algorithms.bipartite import projected_graph
81 from networkx.algorithms.connectivity import all_pairs_node_connectivity
82 from networkx.algorithms.connectivity import all_node_cuts
83 from networkx.algorithms.connectivity import average_node_connectivity
84 from networkx.algorithms.connectivity import edge_connectivity
85 from networkx.algorithms.connectivity import edge_disjoint_paths
86 from networkx.algorithms.connectivity import k_components
87 from networkx.algorithms.connectivity import k_edge_components
88 from networkx.algorithms.connectivity import k_edge_subgraphs
89 from networkx.algorithms.connectivity import k_edge_augmentation
90 from networkx.algorithms.connectivity import is_k_edge_connected
91 from networkx.algorithms.connectivity import minimum_edge_cut
92 from networkx.algorithms.connectivity import minimum_node_cut
93 from networkx.algorithms.connectivity import node_connectivity
94 from networkx.algorithms.connectivity import node_disjoint_paths
95 from networkx.algorithms.connectivity import stoer_wagner
96 from networkx.algorithms.flow import capacity_scaling
97 from networkx.algorithms.flow import cost_of_flow
98 from networkx.algorithms.flow import gomory_hu_tree
99 from networkx.algorithms.flow import max_flow_min_cost
100 from networkx.algorithms.flow import maximum_flow
101 from networkx.algorithms.flow import maximum_flow_value
102 from networkx.algorithms.flow import min_cost_flow
103 from networkx.algorithms.flow import min_cost_flow_cost
104 from networkx.algorithms.flow import minimum_cut
105 from networkx.algorithms.flow import minimum_cut_value
106 from networkx.algorithms.flow import network_simplex
107 from networkx.algorithms.isomorphism import could_be_isomorphic
108 from networkx.algorithms.isomorphism import fast_could_be_isomorphic
109 from networkx.algorithms.isomorphism import faster_could_be_isomorphic
110 from networkx.algorithms.isomorphism import is_isomorphic
111 from networkx.algorithms.tree.branchings import maximum_branching
112 from networkx.algorithms.tree.branchings import maximum_spanning_arborescence
113 from networkx.algorithms.tree.branchings import minimum_branching
114 from networkx.algorithms.tree.branchings import minimum_spanning_arborescence
115 from networkx.algorithms.tree.coding import *
116 from networkx.algorithms.tree.operations import *
117 from networkx.algorithms.tree.recognition import *
118 from networkx.algorithms.tree.mst import *
119
[end of networkx/algorithms/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/algorithms/__init__.py b/networkx/algorithms/__init__.py
--- a/networkx/algorithms/__init__.py
+++ b/networkx/algorithms/__init__.py
@@ -40,7 +40,6 @@
from networkx.algorithms.smallworld import *
from networkx.algorithms.smetric import *
from networkx.algorithms.structuralholes import *
-from networkx.algorithms.triads import *
from networkx.algorithms.sparsifiers import *
from networkx.algorithms.swap import *
from networkx.algorithms.traversal import *
|
{"golden_diff": "diff --git a/networkx/algorithms/__init__.py b/networkx/algorithms/__init__.py\n--- a/networkx/algorithms/__init__.py\n+++ b/networkx/algorithms/__init__.py\n@@ -40,7 +40,6 @@\n from networkx.algorithms.smallworld import *\n from networkx.algorithms.smetric import *\n from networkx.algorithms.structuralholes import *\n-from networkx.algorithms.triads import *\n from networkx.algorithms.sparsifiers import *\n from networkx.algorithms.swap import *\n from networkx.algorithms.traversal import *\n", "issue": "Double import\nI noticed that in `networkx/algorithms/__init__.py`the statement `from networkx.algorithms.triads import *` occurs twice. Is there any reason for this or is this just a blunder?\n", "before_files": [{"content": "from networkx.algorithms.assortativity import *\nfrom networkx.algorithms.boundary import *\nfrom networkx.algorithms.bridges import *\nfrom networkx.algorithms.chains import *\nfrom networkx.algorithms.centrality import *\nfrom networkx.algorithms.chordal import *\nfrom networkx.algorithms.cluster import *\nfrom networkx.algorithms.clique import *\nfrom networkx.algorithms.communicability_alg import *\nfrom networkx.algorithms.components import *\nfrom networkx.algorithms.coloring import *\nfrom networkx.algorithms.core import *\nfrom networkx.algorithms.covering import *\nfrom networkx.algorithms.cycles import *\nfrom networkx.algorithms.cuts import *\nfrom networkx.algorithms.dag import *\nfrom networkx.algorithms.distance_measures import *\nfrom networkx.algorithms.distance_regular import *\nfrom networkx.algorithms.dominance import *\nfrom networkx.algorithms.dominating import *\nfrom networkx.algorithms.efficiency import *\nfrom networkx.algorithms.euler import *\nfrom networkx.algorithms.graphical import *\nfrom networkx.algorithms.hierarchy import *\nfrom networkx.algorithms.hybrid import *\nfrom networkx.algorithms.link_analysis import *\nfrom networkx.algorithms.link_prediction import *\nfrom networkx.algorithms.lowest_common_ancestors import *\nfrom networkx.algorithms.isolate import *\nfrom networkx.algorithms.matching import *\nfrom networkx.algorithms.minors import *\nfrom networkx.algorithms.mis import *\nfrom networkx.algorithms.operators import *\nfrom networkx.algorithms.planarity import *\nfrom networkx.algorithms.reciprocity import *\nfrom networkx.algorithms.richclub import *\nfrom networkx.algorithms.shortest_paths import *\nfrom networkx.algorithms.similarity import *\nfrom networkx.algorithms.simple_paths import *\nfrom networkx.algorithms.smallworld import *\nfrom networkx.algorithms.smetric import *\nfrom networkx.algorithms.structuralholes import *\nfrom networkx.algorithms.triads import *\nfrom networkx.algorithms.sparsifiers import *\nfrom networkx.algorithms.swap import *\nfrom networkx.algorithms.traversal import *\nfrom networkx.algorithms.triads import *\nfrom networkx.algorithms.vitality import *\nfrom networkx.algorithms.voronoi import *\nfrom networkx.algorithms.wiener import *\n\n# Make certain subpackages available to the user as direct imports from\n# the `networkx` namespace.\nimport networkx.algorithms.assortativity\nimport networkx.algorithms.bipartite\nimport networkx.algorithms.node_classification\nimport networkx.algorithms.centrality\nimport networkx.algorithms.chordal\nimport networkx.algorithms.cluster\nimport networkx.algorithms.clique\nimport networkx.algorithms.components\nimport networkx.algorithms.connectivity\nimport networkx.algorithms.community\nimport networkx.algorithms.coloring\nimport 
networkx.algorithms.flow\nimport networkx.algorithms.isomorphism\nimport networkx.algorithms.link_analysis\nimport networkx.algorithms.lowest_common_ancestors\nimport networkx.algorithms.operators\nimport networkx.algorithms.shortest_paths\nimport networkx.algorithms.tournament\nimport networkx.algorithms.traversal\nimport networkx.algorithms.tree\n\n# Make certain functions from some of the previous subpackages available\n# to the user as direct imports from the `networkx` namespace.\nfrom networkx.algorithms.bipartite import complete_bipartite_graph\nfrom networkx.algorithms.bipartite import is_bipartite\nfrom networkx.algorithms.bipartite import project\nfrom networkx.algorithms.bipartite import projected_graph\nfrom networkx.algorithms.connectivity import all_pairs_node_connectivity\nfrom networkx.algorithms.connectivity import all_node_cuts\nfrom networkx.algorithms.connectivity import average_node_connectivity\nfrom networkx.algorithms.connectivity import edge_connectivity\nfrom networkx.algorithms.connectivity import edge_disjoint_paths\nfrom networkx.algorithms.connectivity import k_components\nfrom networkx.algorithms.connectivity import k_edge_components\nfrom networkx.algorithms.connectivity import k_edge_subgraphs\nfrom networkx.algorithms.connectivity import k_edge_augmentation\nfrom networkx.algorithms.connectivity import is_k_edge_connected\nfrom networkx.algorithms.connectivity import minimum_edge_cut\nfrom networkx.algorithms.connectivity import minimum_node_cut\nfrom networkx.algorithms.connectivity import node_connectivity\nfrom networkx.algorithms.connectivity import node_disjoint_paths\nfrom networkx.algorithms.connectivity import stoer_wagner\nfrom networkx.algorithms.flow import capacity_scaling\nfrom networkx.algorithms.flow import cost_of_flow\nfrom networkx.algorithms.flow import gomory_hu_tree\nfrom networkx.algorithms.flow import max_flow_min_cost\nfrom networkx.algorithms.flow import maximum_flow\nfrom networkx.algorithms.flow import maximum_flow_value\nfrom networkx.algorithms.flow import min_cost_flow\nfrom networkx.algorithms.flow import min_cost_flow_cost\nfrom networkx.algorithms.flow import minimum_cut\nfrom networkx.algorithms.flow import minimum_cut_value\nfrom networkx.algorithms.flow import network_simplex\nfrom networkx.algorithms.isomorphism import could_be_isomorphic\nfrom networkx.algorithms.isomorphism import fast_could_be_isomorphic\nfrom networkx.algorithms.isomorphism import faster_could_be_isomorphic\nfrom networkx.algorithms.isomorphism import is_isomorphic\nfrom networkx.algorithms.tree.branchings import maximum_branching\nfrom networkx.algorithms.tree.branchings import maximum_spanning_arborescence\nfrom networkx.algorithms.tree.branchings import minimum_branching\nfrom networkx.algorithms.tree.branchings import minimum_spanning_arborescence\nfrom networkx.algorithms.tree.coding import *\nfrom networkx.algorithms.tree.operations import *\nfrom networkx.algorithms.tree.recognition import *\nfrom networkx.algorithms.tree.mst import *\n", "path": "networkx/algorithms/__init__.py"}]}
| 2,057 | 121 |
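The fix in the record above is a pure deletion of one of two identical wildcard imports. A quick way to spot such duplicates in any module — a throwaway script, not something NetworkX ships — is to count repeated import lines:

```python
from collections import Counter
from pathlib import Path

def duplicate_import_lines(path):
    """Return import statements that occur more than once in a source file."""
    lines = [
        line.strip()
        for line in Path(path).read_text().splitlines()
        if line.strip().startswith(("import ", "from "))
    ]
    return [line for line, count in Counter(lines).items() if count > 1]

# Hypothetical usage against a local checkout:
# print(duplicate_import_lines("networkx/algorithms/__init__.py"))
```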
gh_patches_debug_28442
|
rasdani/github-patches
|
git_diff
|
pypa__pipenv-1326
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pipenv starts slow when IPython is installed.
IPython is imported when importing dotenv.
(ref: theskumar/python-dotenv#84 and [import profile](https://paste.ubuntu.com/26409167/))
Since pipenv uses a patched version of dotenv, pipenv should port the upstream fix
or patch `dotenv/__init__.py` to stop importing dotenv.ipython.
##### Describe your environment
1. Ubuntu 17.10
1. Python version: 3.7.0a4
1. Pipenv version: 9.0.3
##### Steps to replicate
* Install Python 3.7.0a4 or newer
* ` PYTHONPROFILEIMPORTTIME=1 path/to/pipenv --version 2>pipenv-version`
</issue>
<code>
[start of pipenv/patched/dotenv/__init__.py]
1 from .cli import get_cli_string
2 from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv
3 try:
4 from .ipython import load_ipython_extension
5 except ImportError:
6 pass
7
8 __all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']
9
[end of pipenv/patched/dotenv/__init__.py]
[start of pipenv/patched/dotenv/ipython.py]
1 from __future__ import print_function
2 from .main import load_dotenv, find_dotenv
3
4 from IPython.core.magic import Magics, magics_class, line_magic
5 from IPython.core.magic_arguments import (argument, magic_arguments,
6 parse_argstring)
7
8
9 @magics_class
10 class IPythonDotEnv(Magics):
11
12 @magic_arguments()
13 @argument(
14 '-o', '--override', action='store_true',
15 help="Indicate to override existing variables"
16 )
17 @argument(
18 '-v', '--verbose', action='store_true',
19 help="Indicate function calls to be verbose"
20 )
21 @argument('dotenv_path', nargs='?', type=str, default='.env',
22 help='Search in increasingly higher folders for the `dotenv_path`')
23 @line_magic
24 def dotenv(self, line):
25 args = parse_argstring(self.dotenv, line)
26 # Locate the .env file
27 dotenv_path = args.dotenv_path
28 try:
29 dotenv_path = find_dotenv(dotenv_path, True, True)
30 except IOError:
31 print("cannot find .env file")
32 return
33
34 # Load the .env file
35 load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)
36
37
38 def load_ipython_extension(ipython):
39 """Register the %dotenv magic."""
40 ipython.register_magics(IPythonDotEnv)
41
[end of pipenv/patched/dotenv/ipython.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pipenv/patched/dotenv/__init__.py b/pipenv/patched/dotenv/__init__.py
--- a/pipenv/patched/dotenv/__init__.py
+++ b/pipenv/patched/dotenv/__init__.py
@@ -1,8 +1,4 @@
from .cli import get_cli_string
from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv
-try:
- from .ipython import load_ipython_extension
-except ImportError:
- pass
-__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']
+__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv']
diff --git a/pipenv/patched/dotenv/ipython.py b/pipenv/patched/dotenv/ipython.py
deleted file mode 100644
--- a/pipenv/patched/dotenv/ipython.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from __future__ import print_function
-from .main import load_dotenv, find_dotenv
-
-from IPython.core.magic import Magics, magics_class, line_magic
-from IPython.core.magic_arguments import (argument, magic_arguments,
- parse_argstring)
-
-
-@magics_class
-class IPythonDotEnv(Magics):
-
- @magic_arguments()
- @argument(
- '-o', '--override', action='store_true',
- help="Indicate to override existing variables"
- )
- @argument(
- '-v', '--verbose', action='store_true',
- help="Indicate function calls to be verbose"
- )
- @argument('dotenv_path', nargs='?', type=str, default='.env',
- help='Search in increasingly higher folders for the `dotenv_path`')
- @line_magic
- def dotenv(self, line):
- args = parse_argstring(self.dotenv, line)
- # Locate the .env file
- dotenv_path = args.dotenv_path
- try:
- dotenv_path = find_dotenv(dotenv_path, True, True)
- except IOError:
- print("cannot find .env file")
- return
-
- # Load the .env file
- load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)
-
-
-def load_ipython_extension(ipython):
- """Register the %dotenv magic."""
- ipython.register_magics(IPythonDotEnv)
|
{"golden_diff": "diff --git a/pipenv/patched/dotenv/__init__.py b/pipenv/patched/dotenv/__init__.py\n--- a/pipenv/patched/dotenv/__init__.py\n+++ b/pipenv/patched/dotenv/__init__.py\n@@ -1,8 +1,4 @@\n from .cli import get_cli_string\n from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv\n-try:\n- from .ipython import load_ipython_extension\n-except ImportError:\n- pass\n \n-__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']\n+__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv']\ndiff --git a/pipenv/patched/dotenv/ipython.py b/pipenv/patched/dotenv/ipython.py\ndeleted file mode 100644\n--- a/pipenv/patched/dotenv/ipython.py\n+++ /dev/null\n@@ -1,40 +0,0 @@\n-from __future__ import print_function\n-from .main import load_dotenv, find_dotenv\n-\n-from IPython.core.magic import Magics, magics_class, line_magic\n-from IPython.core.magic_arguments import (argument, magic_arguments,\n- parse_argstring)\n-\n-\n-@magics_class\n-class IPythonDotEnv(Magics):\n-\n- @magic_arguments()\n- @argument(\n- '-o', '--override', action='store_true',\n- help=\"Indicate to override existing variables\"\n- )\n- @argument(\n- '-v', '--verbose', action='store_true',\n- help=\"Indicate function calls to be verbose\"\n- )\n- @argument('dotenv_path', nargs='?', type=str, default='.env',\n- help='Search in increasingly higher folders for the `dotenv_path`')\n- @line_magic\n- def dotenv(self, line):\n- args = parse_argstring(self.dotenv, line)\n- # Locate the .env file\n- dotenv_path = args.dotenv_path\n- try:\n- dotenv_path = find_dotenv(dotenv_path, True, True)\n- except IOError:\n- print(\"cannot find .env file\")\n- return\n-\n- # Load the .env file\n- load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)\n-\n-\n-def load_ipython_extension(ipython):\n- \"\"\"Register the %dotenv magic.\"\"\"\n- ipython.register_magics(IPythonDotEnv)\n", "issue": "pipenv starts slow when IPython is installed.\nIPython is imported when importing dotenv. \r\n(ref: theskumar/python-dotenv#84 and [import profile](https://paste.ubuntu.com/26409167/))\r\n\r\nSince pipenv uses patched version of dotenv, pipenv should port upstream fix\r\nor patch `dotenv/__init__.py` to stop importing dotenv.ipython.\r\n\r\n##### Describe your environment\r\n\r\n1. Ubuntu 17.10\r\n1. Python version: 3.7.0a4\r\n1. 
Pipenv version: 9.0.3\r\n\r\n##### Steps to replicate\r\n\r\n* Install Python 3.7.0a4 or newer\r\n* ` PYTHONPROFILEIMPORTTIME=1 path/to/pipenv --version 2>pipenv-version`\n", "before_files": [{"content": "from .cli import get_cli_string\nfrom .main import load_dotenv, get_key, set_key, unset_key, find_dotenv\ntry:\n from .ipython import load_ipython_extension\nexcept ImportError:\n pass\n\n__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']\n", "path": "pipenv/patched/dotenv/__init__.py"}, {"content": "from __future__ import print_function\nfrom .main import load_dotenv, find_dotenv\n\nfrom IPython.core.magic import Magics, magics_class, line_magic\nfrom IPython.core.magic_arguments import (argument, magic_arguments,\n parse_argstring)\n\n\n@magics_class\nclass IPythonDotEnv(Magics):\n\n @magic_arguments()\n @argument(\n '-o', '--override', action='store_true',\n help=\"Indicate to override existing variables\"\n )\n @argument(\n '-v', '--verbose', action='store_true',\n help=\"Indicate function calls to be verbose\"\n )\n @argument('dotenv_path', nargs='?', type=str, default='.env',\n help='Search in increasingly higher folders for the `dotenv_path`')\n @line_magic\n def dotenv(self, line):\n args = parse_argstring(self.dotenv, line)\n # Locate the .env file\n dotenv_path = args.dotenv_path\n try:\n dotenv_path = find_dotenv(dotenv_path, True, True)\n except IOError:\n print(\"cannot find .env file\")\n return\n\n # Load the .env file\n load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Register the %dotenv magic.\"\"\"\n ipython.register_magics(IPythonDotEnv)\n", "path": "pipenv/patched/dotenv/ipython.py"}]}
| 1,212 | 594 |
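The record above removes the vendored `dotenv/ipython.py` so that importing dotenv no longer drags in IPython. The slowdown itself can be measured without pipenv at all; the reporter used `PYTHONPROFILEIMPORTTIME` (equivalent to `python -X importtime`), and a coarser wall-clock check looks like this sketch:

```python
import importlib
import time

def timed_import(module_name):
    """Rough wall-clock cost of importing a module once in this process."""
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

# json is only a stand-in here; substituting "IPython" (if installed) shows
# the kind of import cost that motivated the patch.
print(f"import json took {timed_import('json') * 1000:.2f} ms")
```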
gh_patches_debug_4224
|
rasdani/github-patches
|
git_diff
|
pypa__pip-5146
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Development version number triggers a false positive warning
* Pip version: 10.0.0b1
* Python version: 3.6.4
* Operating system: Linux
### Description:
Say a package `foo` depends on `bar>=1.0.0`. If the installed version of `bar` is a development version such as `1.0.1.dev42`, pip issues an incompatible version warning upon installation of `foo`. Pip shouldn't issue any warning since `1.0.1.dev42>=1.0.0`. The weird thing is that pip is satisfied with that version when scanning the dependencies of `foo`, but issues that warning anyway.
For that matter, the real life scenario is installing a development library with a `setuptools_scm`-generated version number and then installing a library that depends on it.
### What I've run:
```
% tree
.
├── bar
│ └── setup.py
└── foo
└── setup.py
2 directories, 2 files
```
```
% cat bar/setup.py
from setuptools import setup
setup(
name='bar',
version='1.0.1.dev42')
```
```
% cat foo/setup.py
from setuptools import setup
setup(
name='foo',
install_requires=['bar>=1.0.0'],
version='3.14.15')
```
```
# setting up virtual environment
% python3 -m venv compat
% source compat/bin/activate
% pip install pip==10.0.0b1
```
```
% pip install ./bar
Processing ./bar
Installing collected packages: bar
Running setup.py install for bar ... done
Successfully installed bar-1.0.1.dev42
```
```
% pip install ./foo
Processing ./foo
Requirement already satisfied: bar>=1.0.0 in ./compat/lib/python3.6/site-packages (from foo==3.14.15) (1.0.1.dev42)
foo 3.14.15 has requirement bar>=1.0.0, but you'll have bar 1.0.1.dev42 which is incompatible.
Installing collected packages: foo
Running setup.py install for foo ... done
Successfully installed foo-3.14.15
```
</issue>
<code>
[start of src/pip/_internal/operations/check.py]
1 """Validation of dependencies of packages
2 """
3
4 from collections import namedtuple
5
6 from pip._vendor.packaging.utils import canonicalize_name
7
8 from pip._internal.operations.prepare import make_abstract_dist
9
10 from pip._internal.utils.misc import get_installed_distributions
11 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
12
13 if MYPY_CHECK_RUNNING:
14 from pip._internal.req.req_install import InstallRequirement
15 from typing import Any, Dict, Iterator, Set, Tuple, List
16
17 # Shorthands
18 PackageSet = Dict[str, 'PackageDetails']
19 Missing = Tuple[str, Any]
20 Conflicting = Tuple[str, str, Any]
21
22 MissingDict = Dict[str, List[Missing]]
23 ConflictingDict = Dict[str, List[Conflicting]]
24 CheckResult = Tuple[MissingDict, ConflictingDict]
25
26 PackageDetails = namedtuple('PackageDetails', ['version', 'requires'])
27
28
29 def create_package_set_from_installed(**kwargs):
30 # type: (**Any) -> PackageSet
31 """Converts a list of distributions into a PackageSet.
32 """
33 retval = {}
34 for dist in get_installed_distributions(**kwargs):
35 name = canonicalize_name(dist.project_name)
36 retval[name] = PackageDetails(dist.version, dist.requires())
37 return retval
38
39
40 def check_package_set(package_set):
41 # type: (PackageSet) -> CheckResult
42 """Check if a package set is consistent
43 """
44 missing = dict()
45 conflicting = dict()
46
47 for package_name in package_set:
48 # Info about dependencies of package_name
49 missing_deps = set() # type: Set[Missing]
50 conflicting_deps = set() # type: Set[Conflicting]
51
52 for req in package_set[package_name].requires:
53 name = canonicalize_name(req.project_name) # type: str
54
55 # Check if it's missing
56 if name not in package_set:
57 missed = True
58 if req.marker is not None:
59 missed = req.marker.evaluate()
60 if missed:
61 missing_deps.add((name, req))
62 continue
63
64 # Check if there's a conflict
65 version = package_set[name].version # type: str
66 if version not in req.specifier:
67 conflicting_deps.add((name, version, req))
68
69 def str_key(x):
70 return str(x)
71
72 if missing_deps:
73 missing[package_name] = sorted(missing_deps, key=str_key)
74 if conflicting_deps:
75 conflicting[package_name] = sorted(conflicting_deps, key=str_key)
76
77 return missing, conflicting
78
79
80 def check_install_conflicts(to_install):
81 # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]
82 """For checking if the dependency graph would be consistent after \
83 installing given requirements
84 """
85 # Start from the current state
86 state = create_package_set_from_installed()
87 _simulate_installation_of(to_install, state)
88 return state, check_package_set(state)
89
90
91 # NOTE from @pradyunsg
92 # This required a minor update in dependency link handling logic over at
93 # operations.prepare.IsSDist.dist() to get it working
94 def _simulate_installation_of(to_install, state):
95 # type: (List[InstallRequirement], PackageSet) -> None
96 """Computes the version of packages after installing to_install.
97 """
98
99 # Modify it as installing requirement_set would (assuming no errors)
100 for inst_req in to_install:
101 dist = make_abstract_dist(inst_req).dist(finder=None)
102 state[dist.key] = PackageDetails(dist.version, dist.requires())
103
[end of src/pip/_internal/operations/check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/operations/check.py b/src/pip/_internal/operations/check.py
--- a/src/pip/_internal/operations/check.py
+++ b/src/pip/_internal/operations/check.py
@@ -63,7 +63,7 @@
# Check if there's a conflict
version = package_set[name].version # type: str
- if version not in req.specifier:
+ if not req.specifier.contains(version, prereleases=True):
conflicting_deps.add((name, version, req))
def str_key(x):
|
{"golden_diff": "diff --git a/src/pip/_internal/operations/check.py b/src/pip/_internal/operations/check.py\n--- a/src/pip/_internal/operations/check.py\n+++ b/src/pip/_internal/operations/check.py\n@@ -63,7 +63,7 @@\n \n # Check if there's a conflict\n version = package_set[name].version # type: str\n- if version not in req.specifier:\n+ if not req.specifier.contains(version, prereleases=True):\n conflicting_deps.add((name, version, req))\n \n def str_key(x):\n", "issue": "Development version number triggers a false positive warning\n* Pip version: 10.0.0b1\r\n* Python version: 3.6.4\r\n* Operating system: Linux\r\n\r\n### Description:\r\n\r\nSay a package `foo` depends on `bar>=1.0.0`. If the installed version of `bar` is a development version such as `1.0.1.dev42`, pip issues an incompatible version warning upon installation of `foo`. Pip shouldn't issue any warning since `1.0.1.dev42>=1.0.0`. The weird thing is that pip is satisfied with that version when scanning the dependencies of `foo`, but issues that warning anyway.\r\n\r\nFor that matter, the real life scenario is installing a development library with a `setuptools_scm`-generated version number and then installing a library that depends on it.\r\n\r\n### What I've run:\r\n\r\n```\r\n% tree\r\n.\r\n\u251c\u2500\u2500 bar\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 setup.py\r\n\u2514\u2500\u2500 foo\r\n \u2514\u2500\u2500 setup.py\r\n\r\n2 directories, 2 files\r\n```\r\n\r\n```\r\n% cat bar/setup.py\r\nfrom setuptools import setup\r\n\r\nsetup(\r\n name='bar',\r\n version='1.0.1.dev42')\r\n```\r\n\r\n```\r\n% cat foo/setup.py\r\nfrom setuptools import setup\r\n\r\nsetup(\r\n name='foo',\r\n install_requires=['bar>=1.0.0'],\r\n version='3.14.15')\r\n```\r\n\r\n```\r\n# setting up virtual environment\r\n% python3 -m venv compat\r\n% source compat/bin/activate\r\n% pip install pip==10.0.0b1\r\n```\r\n\r\n```\r\n% pip install ./bar\r\nProcessing ./bar\r\nInstalling collected packages: bar\r\n Running setup.py install for bar ... done\r\nSuccessfully installed bar-1.0.1.dev42\r\n```\r\n\r\n```\r\n% pip install ./foo\r\nProcessing ./foo\r\nRequirement already satisfied: bar>=1.0.0 in ./compat/lib/python3.6/site-packages (from foo==3.14.15) (1.0.1.dev42)\r\nfoo 3.14.15 has requirement bar>=1.0.0, but you'll have bar 1.0.1.dev42 which is incompatible.\r\nInstalling collected packages: foo\r\n Running setup.py install for foo ... 
done\r\nSuccessfully installed foo-3.14.15\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Validation of dependencies of packages\n\"\"\"\n\nfrom collections import namedtuple\n\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom pip._internal.operations.prepare import make_abstract_dist\n\nfrom pip._internal.utils.misc import get_installed_distributions\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from pip._internal.req.req_install import InstallRequirement\n from typing import Any, Dict, Iterator, Set, Tuple, List\n\n # Shorthands\n PackageSet = Dict[str, 'PackageDetails']\n Missing = Tuple[str, Any]\n Conflicting = Tuple[str, str, Any]\n\n MissingDict = Dict[str, List[Missing]]\n ConflictingDict = Dict[str, List[Conflicting]]\n CheckResult = Tuple[MissingDict, ConflictingDict]\n\nPackageDetails = namedtuple('PackageDetails', ['version', 'requires'])\n\n\ndef create_package_set_from_installed(**kwargs):\n # type: (**Any) -> PackageSet\n \"\"\"Converts a list of distributions into a PackageSet.\n \"\"\"\n retval = {}\n for dist in get_installed_distributions(**kwargs):\n name = canonicalize_name(dist.project_name)\n retval[name] = PackageDetails(dist.version, dist.requires())\n return retval\n\n\ndef check_package_set(package_set):\n # type: (PackageSet) -> CheckResult\n \"\"\"Check if a package set is consistent\n \"\"\"\n missing = dict()\n conflicting = dict()\n\n for package_name in package_set:\n # Info about dependencies of package_name\n missing_deps = set() # type: Set[Missing]\n conflicting_deps = set() # type: Set[Conflicting]\n\n for req in package_set[package_name].requires:\n name = canonicalize_name(req.project_name) # type: str\n\n # Check if it's missing\n if name not in package_set:\n missed = True\n if req.marker is not None:\n missed = req.marker.evaluate()\n if missed:\n missing_deps.add((name, req))\n continue\n\n # Check if there's a conflict\n version = package_set[name].version # type: str\n if version not in req.specifier:\n conflicting_deps.add((name, version, req))\n\n def str_key(x):\n return str(x)\n\n if missing_deps:\n missing[package_name] = sorted(missing_deps, key=str_key)\n if conflicting_deps:\n conflicting[package_name] = sorted(conflicting_deps, key=str_key)\n\n return missing, conflicting\n\n\ndef check_install_conflicts(to_install):\n # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]\n \"\"\"For checking if the dependency graph would be consistent after \\\n installing given requirements\n \"\"\"\n # Start from the current state\n state = create_package_set_from_installed()\n _simulate_installation_of(to_install, state)\n return state, check_package_set(state)\n\n\n# NOTE from @pradyunsg\n# This required a minor update in dependency link handling logic over at\n# operations.prepare.IsSDist.dist() to get it working\ndef _simulate_installation_of(to_install, state):\n # type: (List[InstallRequirement], PackageSet) -> None\n \"\"\"Computes the version of packages after installing to_install.\n \"\"\"\n\n # Modify it as installing requirement_set would (assuming no errors)\n for inst_req in to_install:\n dist = make_abstract_dist(inst_req).dist(finder=None)\n state[dist.key] = PackageDetails(dist.version, dist.requires())\n", "path": "src/pip/_internal/operations/check.py"}]}
| 2,019 | 126 |
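The one-line change above replaces the `in` check with `specifier.contains(version, prereleases=True)`. The distinction is easy to reproduce with the standalone `packaging` library (pip vendors an equivalent copy), assuming it is installed:

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=1.0.0")

# Membership testing skips pre- and dev-releases by default, which is what
# produced the spurious "incompatible" warning for bar 1.0.1.dev42.
print("1.0.1.dev42" in spec)                           # False
print(spec.contains("1.0.1.dev42", prereleases=True))  # True
```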
gh_patches_debug_51874
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-5287
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`json_graph.tree_data` can cause maximum recursion depth error.
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
Currently the algorithm compares the `n_nodes` with `n_edges` to check if `G` is a tree. https://github.com/networkx/networkx/blob/0cc70051fa0a979b1f1eab4af5b6587a6ebf8334/networkx/readwrite/json_graph/tree.py#L74-L75
This check can be bypassed with specific inputs and cause a recursion error.
### Expected Behavior
<!--- Tell us what should happen -->
The code should check whether there are cycles with `root` as the source and raise an exception.
Another possible fix would be to check if the graph is not weakly connected.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
```Python3
>>> import networkx as nx
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
>>> G.add_node(4)
>>> data = nx.json_graph.tree_data(G, 1)
RecursionError: maximum recursion depth exceeded
```
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.8.10
NetworkX version: 2.7rc1.dev0
</issue>
<code>
[start of networkx/readwrite/json_graph/tree.py]
1 from itertools import chain
2 import networkx as nx
3
4 __all__ = ["tree_data", "tree_graph"]
5
6
7 # NOTE: Remove attrs from signature in 3.0
8 def tree_data(G, root, attrs=None, ident="id", children="children"):
9 """Returns data in tree format that is suitable for JSON serialization
10 and use in Javascript documents.
11
12 Parameters
13 ----------
14 G : NetworkX graph
15 G must be an oriented tree
16
17 root : node
18 The root of the tree
19
20 attrs : dict
21 A dictionary that contains two keys 'id' and 'children'. The
22 corresponding values provide the attribute names for storing
23 NetworkX-internal graph data. The values should be unique. Default
24 value: :samp:`dict(id='id', children='children')`.
25
26 If some user-defined graph data use these attribute names as data keys,
27 they may be silently dropped.
28
29 .. deprecated:: 2.6
30
31 The `attrs` keyword argument is replaced by `ident` and `children`
32 and will be removed in networkx 3.0
33
34 ident : string
35 Attribute name for storing NetworkX-internal graph data. `ident` must
36 have a different value than `children`. The default is 'id'.
37
38 children : string
39 Attribute name for storing NetworkX-internal graph data. `children`
40 must have a different value than `ident`. The default is 'children'.
41
42 Returns
43 -------
44 data : dict
45 A dictionary with node-link formatted data.
46
47 Raises
48 ------
49 NetworkXError
50 If `children` and `ident` attributes are identical.
51
52 Examples
53 --------
54 >>> from networkx.readwrite import json_graph
55 >>> G = nx.DiGraph([(1, 2)])
56 >>> data = json_graph.tree_data(G, root=1)
57
58 To serialize with json
59
60 >>> import json
61 >>> s = json.dumps(data)
62
63 Notes
64 -----
65 Node attributes are stored in this format but keys
66 for attributes must be strings if you want to serialize with JSON.
67
68 Graph and edge attributes are not stored.
69
70 See Also
71 --------
72 tree_graph, node_link_data, adjacency_data
73 """
74 if G.number_of_nodes() != G.number_of_edges() + 1:
75 raise TypeError("G is not a tree.")
76 if not G.is_directed():
77 raise TypeError("G is not directed.")
78
79 # NOTE: to be removed in 3.0
80 if attrs is not None:
81 import warnings
82
83 msg = (
84 "\nThe `attrs` keyword argument of tree_data is deprecated\n"
85 "and will be removed in networkx 3.0.\n"
86 "It is replaced with explicit `ident` and `children` "
87 "keyword arguments.\n"
88 "To make this warning go away and ensure usage is forward\n"
89 "compatible, replace `attrs` with `ident` and `children,\n"
90 "for example:\n\n"
91 " >>> tree_data(G, root, attrs={'id': 'foo', 'children': 'bar'})\n\n"
92 "should instead be written as\n\n"
93 " >>> tree_data(G, root, ident='foo', children='bar')\n\n"
94 "The default values of 'id' and 'children' will not change."
95 )
96 warnings.warn(msg, DeprecationWarning, stacklevel=2)
97
98 ident = attrs["id"]
99 children = attrs["children"]
100
101 if ident == children:
102 raise nx.NetworkXError("The values for `id` and `children` must be different.")
103
104 def add_children(n, G):
105 nbrs = G[n]
106 if len(nbrs) == 0:
107 return []
108 children_ = []
109 for child in nbrs:
110 d = dict(chain(G.nodes[child].items(), [(ident, child)]))
111 c = add_children(child, G)
112 if c:
113 d[children] = c
114 children_.append(d)
115 return children_
116
117 data = dict(chain(G.nodes[root].items(), [(ident, root)]))
118 data[children] = add_children(root, G)
119 return data
120
121
122 def tree_graph(data, attrs=None, ident="id", children="children"):
123 """Returns graph from tree data format.
124
125 Parameters
126 ----------
127 data : dict
128 Tree formatted graph data
129 attrs : dict
130 A dictionary that contains two keys 'id' and 'children'. The
131 corresponding values provide the attribute names for storing
132 NetworkX-internal graph data. The values should be unique. Default
133 value: :samp:`dict(id='id', children='children')`.
134
135 .. deprecated:: 2.6
136
137 The `attrs` keyword argument is replaced by `ident` and `children`
138 and will be removed in networkx 3.0
139
140 ident : string
141 Attribute name for storing NetworkX-internal graph data. `ident` must
142 have a different value than `children`. The default is 'id'.
143
144 children : string
145 Attribute name for storing NetworkX-internal graph data. `children`
146 must have a different value than `ident`. The default is 'children'.
147
148 Returns
149 -------
150 G : NetworkX DiGraph
151
152 Examples
153 --------
154 >>> from networkx.readwrite import json_graph
155 >>> G = nx.DiGraph([(1, 2)])
156 >>> data = json_graph.tree_data(G, root=1)
157 >>> H = json_graph.tree_graph(data)
158
159 See Also
160 --------
161 tree_data, node_link_data, adjacency_data
162 """
163 graph = nx.DiGraph()
164 if attrs is not None:
165 import warnings
166
167 msg = (
168 "\nThe `attrs` keyword argument of tree_graph is deprecated\n"
169 "and will be removed in networkx 3.0.\n"
170 "It is replaced with explicit `ident` and `children` "
171 "keyword arguments.\n"
172 "To make this warning go away and ensure usage is\n"
173 "forward compatible, replace `attrs` with `ident` and `children,\n"
174 "for example:\n\n"
175 " >>> tree_graph(data, attrs={'id': 'foo', 'children': 'bar'})\n\n"
176 "should instead be written as\n\n"
177 " >>> tree_graph(data, ident='foo', children='bar')\n\n"
178 "The default values of 'id' and 'children' will not change."
179 )
180 warnings.warn(msg, DeprecationWarning, stacklevel=2)
181
182 ident = attrs["id"]
183 children = attrs["children"]
184
185 def add_children(parent, children_):
186 for data in children_:
187 child = data[ident]
188 graph.add_edge(parent, child)
189 grandchildren = data.get(children, [])
190 if grandchildren:
191 add_children(child, grandchildren)
192 nodedata = {
193 str(k): v for k, v in data.items() if k != ident and k != children
194 }
195 graph.add_node(child, **nodedata)
196
197 root = data[ident]
198 children_ = data.get(children, [])
199 nodedata = {str(k): v for k, v in data.items() if k != ident and k != children}
200 graph.add_node(root, **nodedata)
201 add_children(root, children_)
202 return graph
203
[end of networkx/readwrite/json_graph/tree.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/readwrite/json_graph/tree.py b/networkx/readwrite/json_graph/tree.py
--- a/networkx/readwrite/json_graph/tree.py
+++ b/networkx/readwrite/json_graph/tree.py
@@ -75,6 +75,8 @@
raise TypeError("G is not a tree.")
if not G.is_directed():
raise TypeError("G is not directed.")
+ if not nx.is_weakly_connected(G):
+ raise TypeError("G is not weakly connected.")
# NOTE: to be removed in 3.0
if attrs is not None:
|
{"golden_diff": "diff --git a/networkx/readwrite/json_graph/tree.py b/networkx/readwrite/json_graph/tree.py\n--- a/networkx/readwrite/json_graph/tree.py\n+++ b/networkx/readwrite/json_graph/tree.py\n@@ -75,6 +75,8 @@\n raise TypeError(\"G is not a tree.\")\n if not G.is_directed():\n raise TypeError(\"G is not directed.\")\n+ if not nx.is_weakly_connected(G):\n+ raise TypeError(\"G is not weakly connected.\")\n \n # NOTE: to be removed in 3.0\n if attrs is not None:\n", "issue": "`json_graph.tree_data` can cause maximum recursion depth error.\n<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->\r\n\r\n<!--- Provide a general summary of the issue in the Title above -->\r\n\r\n### Current Behavior\r\n<!--- Tell us what happens instead of the expected behavior -->\r\nCurrently the algorithm compares the `n_nodes` with `n_edges` to check if `G` is a tree. https://github.com/networkx/networkx/blob/0cc70051fa0a979b1f1eab4af5b6587a6ebf8334/networkx/readwrite/json_graph/tree.py#L74-L75 \r\nThis check can be bypassed with specific inputs and cause a recursion error.\r\n\r\n### Expected Behavior\r\n<!--- Tell us what should happen -->\r\nThe code should check whether there are cycles with `root` as the source and raise an exception.\r\nAnother possible fix would be to check if the graph is not weakly connected.\r\n\r\n### Steps to Reproduce\r\n<!--- Provide a minimal example that reproduces the bug -->\r\n```Python3\r\n>>> import networkx as nx\r\n>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])\r\n>>> G.add_node(4)\r\n>>> data = nx.json_graph.tree_data(G, 1)\r\nRecursionError: maximum recursion depth exceeded\r\n```\r\n\r\n### Environment\r\n<!--- Please provide details about your local environment -->\r\nPython version: 3.8.10\r\nNetworkX version: 2.7rc1.dev0\r\n\n", "before_files": [{"content": "from itertools import chain\nimport networkx as nx\n\n__all__ = [\"tree_data\", \"tree_graph\"]\n\n\n# NOTE: Remove attrs from signature in 3.0\ndef tree_data(G, root, attrs=None, ident=\"id\", children=\"children\"):\n \"\"\"Returns data in tree format that is suitable for JSON serialization\n and use in Javascript documents.\n\n Parameters\n ----------\n G : NetworkX graph\n G must be an oriented tree\n\n root : node\n The root of the tree\n\n attrs : dict\n A dictionary that contains two keys 'id' and 'children'. The\n corresponding values provide the attribute names for storing\n NetworkX-internal graph data. The values should be unique. Default\n value: :samp:`dict(id='id', children='children')`.\n\n If some user-defined graph data use these attribute names as data keys,\n they may be silently dropped.\n\n .. deprecated:: 2.6\n\n The `attrs` keyword argument is replaced by `ident` and `children`\n and will be removed in networkx 3.0\n\n ident : string\n Attribute name for storing NetworkX-internal graph data. `ident` must\n have a different value than `children`. The default is 'id'.\n\n children : string\n Attribute name for storing NetworkX-internal graph data. `children`\n must have a different value than `ident`. 
The default is 'children'.\n\n Returns\n -------\n data : dict\n A dictionary with node-link formatted data.\n\n Raises\n ------\n NetworkXError\n If `children` and `ident` attributes are identical.\n\n Examples\n --------\n >>> from networkx.readwrite import json_graph\n >>> G = nx.DiGraph([(1, 2)])\n >>> data = json_graph.tree_data(G, root=1)\n\n To serialize with json\n\n >>> import json\n >>> s = json.dumps(data)\n\n Notes\n -----\n Node attributes are stored in this format but keys\n for attributes must be strings if you want to serialize with JSON.\n\n Graph and edge attributes are not stored.\n\n See Also\n --------\n tree_graph, node_link_data, adjacency_data\n \"\"\"\n if G.number_of_nodes() != G.number_of_edges() + 1:\n raise TypeError(\"G is not a tree.\")\n if not G.is_directed():\n raise TypeError(\"G is not directed.\")\n\n # NOTE: to be removed in 3.0\n if attrs is not None:\n import warnings\n\n msg = (\n \"\\nThe `attrs` keyword argument of tree_data is deprecated\\n\"\n \"and will be removed in networkx 3.0.\\n\"\n \"It is replaced with explicit `ident` and `children` \"\n \"keyword arguments.\\n\"\n \"To make this warning go away and ensure usage is forward\\n\"\n \"compatible, replace `attrs` with `ident` and `children,\\n\"\n \"for example:\\n\\n\"\n \" >>> tree_data(G, root, attrs={'id': 'foo', 'children': 'bar'})\\n\\n\"\n \"should instead be written as\\n\\n\"\n \" >>> tree_data(G, root, ident='foo', children='bar')\\n\\n\"\n \"The default values of 'id' and 'children' will not change.\"\n )\n warnings.warn(msg, DeprecationWarning, stacklevel=2)\n\n ident = attrs[\"id\"]\n children = attrs[\"children\"]\n\n if ident == children:\n raise nx.NetworkXError(\"The values for `id` and `children` must be different.\")\n\n def add_children(n, G):\n nbrs = G[n]\n if len(nbrs) == 0:\n return []\n children_ = []\n for child in nbrs:\n d = dict(chain(G.nodes[child].items(), [(ident, child)]))\n c = add_children(child, G)\n if c:\n d[children] = c\n children_.append(d)\n return children_\n\n data = dict(chain(G.nodes[root].items(), [(ident, root)]))\n data[children] = add_children(root, G)\n return data\n\n\ndef tree_graph(data, attrs=None, ident=\"id\", children=\"children\"):\n \"\"\"Returns graph from tree data format.\n\n Parameters\n ----------\n data : dict\n Tree formatted graph data\n attrs : dict\n A dictionary that contains two keys 'id' and 'children'. The\n corresponding values provide the attribute names for storing\n NetworkX-internal graph data. The values should be unique. Default\n value: :samp:`dict(id='id', children='children')`.\n\n .. deprecated:: 2.6\n\n The `attrs` keyword argument is replaced by `ident` and `children`\n and will be removed in networkx 3.0\n\n ident : string\n Attribute name for storing NetworkX-internal graph data. `ident` must\n have a different value than `children`. The default is 'id'.\n\n children : string\n Attribute name for storing NetworkX-internal graph data. `children`\n must have a different value than `ident`. 
The default is 'children'.\n\n Returns\n -------\n G : NetworkX DiGraph\n\n Examples\n --------\n >>> from networkx.readwrite import json_graph\n >>> G = nx.DiGraph([(1, 2)])\n >>> data = json_graph.tree_data(G, root=1)\n >>> H = json_graph.tree_graph(data)\n\n See Also\n --------\n tree_data, node_link_data, adjacency_data\n \"\"\"\n graph = nx.DiGraph()\n if attrs is not None:\n import warnings\n\n msg = (\n \"\\nThe `attrs` keyword argument of tree_graph is deprecated\\n\"\n \"and will be removed in networkx 3.0.\\n\"\n \"It is replaced with explicit `ident` and `children` \"\n \"keyword arguments.\\n\"\n \"To make this warning go away and ensure usage is\\n\"\n \"forward compatible, replace `attrs` with `ident` and `children,\\n\"\n \"for example:\\n\\n\"\n \" >>> tree_graph(data, attrs={'id': 'foo', 'children': 'bar'})\\n\\n\"\n \"should instead be written as\\n\\n\"\n \" >>> tree_graph(data, ident='foo', children='bar')\\n\\n\"\n \"The default values of 'id' and 'children' will not change.\"\n )\n warnings.warn(msg, DeprecationWarning, stacklevel=2)\n\n ident = attrs[\"id\"]\n children = attrs[\"children\"]\n\n def add_children(parent, children_):\n for data in children_:\n child = data[ident]\n graph.add_edge(parent, child)\n grandchildren = data.get(children, [])\n if grandchildren:\n add_children(child, grandchildren)\n nodedata = {\n str(k): v for k, v in data.items() if k != ident and k != children\n }\n graph.add_node(child, **nodedata)\n\n root = data[ident]\n children_ = data.get(children, [])\n nodedata = {str(k): v for k, v in data.items() if k != ident and k != children}\n graph.add_node(root, **nodedata)\n add_children(root, children_)\n return graph\n", "path": "networkx/readwrite/json_graph/tree.py"}]}
| 2,982 | 127 |
gh_patches_debug_63956
|
rasdani/github-patches
|
git_diff
|
redis__redis-py-1780
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Module installation fails due to missing dependency
https://github.com/redis/redis-py/blob/039488d97ec545b37e903d1b791a88bac8f77973/redis/connection.py#L1
The deprecated `distutils` import was replaced with the `packaging` module as part of release v4.0.0b1.
`packaging` is not a built-in Python module, but it was not added to setup.py as a dependency, which causes applications that require redis-py to fail if `packaging` isn't already installed on the machine.
The `packaging` module should probably be added as a dependency in setup.py to resolve this.
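A minimal sketch of the kind of change being asked for; the exact version pins are illustrative assumptions, not a decision by the maintainers:

```python
# setup.py (sketch): declare packaging as a runtime dependency so that a
# clean environment installs it together with redis-py.
from setuptools import setup

setup(
    name="redis",
    install_requires=[
        "deprecated>=1.2.3",
        "packaging>=21.3",  # assumed lower bound; needed at runtime by redis/connection.py
    ],
)
```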
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import find_packages, setup
3
4 import redis
5
6 setup(
7 name="redis",
8 description="Python client for Redis database and key-value store",
9 long_description=open("README.md").read().strip(),
10 long_description_content_type="text/markdown",
11 keywords=["Redis", "key-value store", "database"],
12 license="MIT",
13 version=redis.__version__,
14 packages=find_packages(
15 include=[
16 "redis",
17 "redis.commands",
18 "redis.commands.bf",
19 "redis.commands.json",
20 "redis.commands.search",
21 "redis.commands.timeseries",
22 "redis.commands.graph",
23 ]
24 ),
25 url="https://github.com/redis/redis-py",
26 author="Redis Inc.",
27 author_email="[email protected]",
28 python_requires=">=3.6",
29 install_requires=[
30 "deprecated==1.2.3",
31 "packaging==21.3",
32 ],
33 classifiers=[
34 "Development Status :: 5 - Production/Stable",
35 "Environment :: Console",
36 "Intended Audience :: Developers",
37 "License :: OSI Approved :: MIT License",
38 "Operating System :: OS Independent",
39 "Programming Language :: Python",
40 "Programming Language :: Python :: 3",
41 "Programming Language :: Python :: 3 :: Only",
42 "Programming Language :: Python :: 3.6",
43 "Programming Language :: Python :: 3.7",
44 "Programming Language :: Python :: 3.8",
45 "Programming Language :: Python :: 3.9",
46 "Programming Language :: Python :: 3.10",
47 "Programming Language :: Python :: Implementation :: CPython",
48 "Programming Language :: Python :: Implementation :: PyPy",
49 ],
50 extras_require={
51 "hiredis": ["hiredis>=1.0.0"],
52 },
53 )
54
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,9 +26,12 @@
author="Redis Inc.",
author_email="[email protected]",
python_requires=">=3.6",
+ setup_requires=[
+ "packaging>=21.3",
+ ],
install_requires=[
- "deprecated==1.2.3",
- "packaging==21.3",
+ "deprecated>=1.2.3",
+ "packaging>=21.3",
],
classifiers=[
"Development Status :: 5 - Production/Stable",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,9 +26,12 @@\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n+ setup_requires=[\n+ \"packaging>=21.3\",\n+ ],\n install_requires=[\n- \"deprecated==1.2.3\",\n- \"packaging==21.3\",\n+ \"deprecated>=1.2.3\",\n+ \"packaging>=21.3\",\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n", "issue": "Module installation fails due to missing dependency\nhttps://github.com/redis/redis-py/blob/039488d97ec545b37e903d1b791a88bac8f77973/redis/connection.py#L1\r\nthe deprecated distutils was replaced with the packaging module as part of release v4.0.0b1\r\npackaging is not a builtin python module but was not added to setup.py as a dependency which causes applications that require redis-py to fail if packaging isn't already installed on the machine.\r\nthe packaging module should probably be added as a dependency in setup.py to resolve this\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nimport redis\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=redis.__version__,\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n install_requires=[\n \"deprecated==1.2.3\",\n \"packaging==21.3\",\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n },\n)\n", "path": "setup.py"}]}
| 1,162 | 141 |
gh_patches_debug_560
|
rasdani/github-patches
|
git_diff
|
ethereum__consensus-specs-1130
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BLS and testing
Decided I wanted to get this out to explain the current state of testing, and **collect feedback** (implementers please comment) on what you need from testing, and your feelings about BLS usage in tests.
# BLS and testing
The two pain-points to get a pretty (and large) set of test-vectors out for clients are:
- BLS Signature creation
- BLS Signature verification
And side-issue, but easily resolved:
*efficient creation of a genesis state*:
This becomes slow once BLS functionality is implemented in test-code (creation of signed deposits, and verification).
The solution would be to either cache the genesis state, or create it directly without going through the spec functions (the current temporary solution on the experiment branch).
## Status
Talking about the status on [`spectest-deco` PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052) here, based on the `v06x` branch, where we are developing 0.6 improvements. (to be merged back into dev later)
### The testing pipeline currently looks like:
- py-spec, calls BLS stub
- test-helpers, don't create self-signed objects with valid signatures
- py-test code, unified with test-vector-creation (see [PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052))
- py-test runner to run spec-tests, purely for assertions
- test-generator running the spec-tests, passing `generator_mode=true` to each of them, making them output a test-vector.
### Pytests status:
- move from `tests/` to `eth2spec/test`, i.e. part of package
- removed use of `pytest`
- annotated with `@spec_test` or similar (see PR 1052)
- as part of test-generation effort, yay for shared effort:
- expanded in block-operation testing: [coverage checklist here](https://github.com/ethereum/eth2.0-specs/issues/927)
- slightly faster, less deep-copies
- stuck on BLS stub (no sig creation/verification)
### Test-generation status:
- BLS, SSZ-generic, SSZ-static, shuffling test generators still all in place and up to date (`v06x` branch)
- `operations` test-gen uses test-package ability to output test-vectors for each test-case
- but no valid signatures
- lack of a definition of how to handle this signature problem as a test-consumer
- there are no signature-related testcases
- turning BLS off would effectively let you check conformance, but it's hacky, and not remotely a good practice to have even an option for...
- it's approx. ~140MB worth (iirc) of yaml encoded state-transitions, covering many edge-cases. Worth to get in the hands of implementers quick.
- `sanity` tests updated and can be cleanly used for test-generation, but require more work to define the format of the test-vectors, as there is more variety.
- `epoch` processing tests also updated, also can be used, not as complete as block-processing, lower priority.
## Possible ways forward:
- Simple but hacky: "turn BLS off for testing"
- No "BLS off", BLS ON on client side, but only partially on spec side. Rely on signature verification not being hit before anything else during testing
- valid test cases generated with valid signatures
- invalid test cases marked: does it error because of BLS? And runners should check the reason for aborting processing: if it doesn't match, the test should fail. Now these pytests don't need full BLS update work, and can be released somewhat quicker
- "BLS on", more work (~1 week)
- slower on test-generation, but we get the best kind of test-vectors: correct, BLS verification ON.
- blocker: what if a test case fails because of a signature error (test setup not creating the sig correctly), instead of a real assertion case. Spec will look correct, passes tests, but things are not right. We need to mark Sig-verification errors distinctly, so we can catch these problems when we turn BLS on in the pyspec. How: instead of `assert verify_...`, just `verify_...`, and make it raise a special `BLSVerificationError` (or something like that)
- We likely still want to mark tests as "signature related" or not, so implementers can catch it easily if their code is not aborting properly before signature verification, to assure invalid inputs are not costly.
A work-in-progress introduction of actual full BLS usage in the pytests is started here: [`tests-with-sigs` branch](https://github.com/ethereum/eth2.0-specs/tree/tests-with-sigs)
Suggestions welcome.
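To make the "mark Sig-verification errors distinctly" idea above concrete, here is a minimal sketch; the helper and exception names are hypothetical, and `bls_verify` stands in for the spec's existing verification function:

```python
class BLSVerificationError(Exception):
    """Raised when a BLS signature check fails inside spec code."""


def bls_verify_or_raise(pubkey, message_hash, signature, domain):
    # Drop-in replacement for `assert bls_verify(...)` in the pyspec: a failed
    # signature check raises a dedicated error, so a test runner can tell
    # "rejected because of BLS" apart from any other assertion failure.
    if not bls_verify(pubkey, message_hash, signature, domain):
        raise BLSVerificationError("BLS signature does not verify")
```

A runner (or the test generator) could then treat a `BLSVerificationError` raised during test setup as a bug in the setup code rather than as a legitimate invalid-input case.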
</issue>
<code>
[start of scripts/phase0/build_spec.py]
1 import sys
2 import function_puller
3
4
5 def build_phase0_spec(sourcefile, outfile):
6 code_lines = []
7 code_lines.append("""
8 from typing import (
9 Any,
10 Dict,
11 List,
12 NewType,
13 Tuple,
14 )
15 from eth2spec.utils.minimal_ssz import *
16 from eth2spec.utils.bls_stub import *
17
18 """)
19 for i in (1, 2, 3, 4, 8, 32, 48, 96):
20 code_lines.append("def int_to_bytes%d(x): return x.to_bytes(%d, 'little')" % (i, i))
21
22 code_lines.append("""
23
24 # stub, will get overwritten by real var
25 SLOTS_PER_EPOCH = 64
26
27
28 Slot = NewType('Slot', int) # uint64
29 Epoch = NewType('Epoch', int) # uint64
30 Shard = NewType('Shard', int) # uint64
31 ValidatorIndex = NewType('ValidatorIndex', int) # uint64
32 Gwei = NewType('Gwei', int) # uint64
33 Bytes32 = NewType('Bytes32', bytes) # bytes32
34 BLSPubkey = NewType('BLSPubkey', bytes) # bytes48
35 BLSSignature = NewType('BLSSignature', bytes) # bytes96
36 Store = None
37 """)
38
39 code_lines += function_puller.get_spec(sourcefile)
40
41 code_lines.append("""
42 # Monkey patch validator compute committee code
43 _compute_committee = compute_committee
44 committee_cache = {}
45
46
47 def compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:
48 param_hash = (hash_tree_root(indices), seed, index, count)
49
50 if param_hash in committee_cache:
51 return committee_cache[param_hash]
52 else:
53 ret = _compute_committee(indices, seed, index, count)
54 committee_cache[param_hash] = ret
55 return ret
56
57
58 # Monkey patch hash cache
59 _hash = hash
60 hash_cache = {}
61
62
63 def hash(x):
64 if x in hash_cache:
65 return hash_cache[x]
66 else:
67 ret = _hash(x)
68 hash_cache[x] = ret
69 return ret
70
71 # Access to overwrite spec constants based on configuration
72 def apply_constants_preset(preset: Dict[str, Any]):
73 global_vars = globals()
74 for k, v in preset.items():
75 global_vars[k] = v
76
77 # Deal with derived constants
78 global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)
79
80 # Initialize SSZ types again, to account for changed lengths
81 init_SSZ_types()
82 """)
83
84 with open(outfile, 'w') as out:
85 out.write("\n".join(code_lines))
86
87
88 if __name__ == '__main__':
89 if len(sys.argv) < 3:
90 print("Usage: <source phase0> <output phase0 pyspec>")
91 build_phase0_spec(sys.argv[1], sys.argv[2])
92
93
[end of scripts/phase0/build_spec.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/phase0/build_spec.py b/scripts/phase0/build_spec.py
--- a/scripts/phase0/build_spec.py
+++ b/scripts/phase0/build_spec.py
@@ -13,7 +13,7 @@
Tuple,
)
from eth2spec.utils.minimal_ssz import *
-from eth2spec.utils.bls_stub import *
+from eth2spec.utils.bls import *
""")
for i in (1, 2, 3, 4, 8, 32, 48, 96):
|
{"golden_diff": "diff --git a/scripts/phase0/build_spec.py b/scripts/phase0/build_spec.py\n--- a/scripts/phase0/build_spec.py\n+++ b/scripts/phase0/build_spec.py\n@@ -13,7 +13,7 @@\n Tuple,\n )\n from eth2spec.utils.minimal_ssz import *\n-from eth2spec.utils.bls_stub import *\n+from eth2spec.utils.bls import *\n \n \"\"\")\n for i in (1, 2, 3, 4, 8, 32, 48, 96):\n", "issue": "BLS and testing\nDecided I wanted to get this out to explain the current state of testing, and **collect feedback** (implementers please comment) on what you need from testing, and your feelings about BLS usage in tests.\r\n\r\n# BLS and testing\r\n\r\nThe two pain-points to get a pretty (and large) set of test-vectors out for clients are:\r\n- BLS Signature creation\r\n- BLS Signature verification\r\n\r\nAnd side-issue, but easily resolved:\r\n*efficient creation of a genesis state*:\r\nWhen BLS functionality is implemented in test-code (creation of signed deposits, and verification).\r\nSolution would be to either cache it, or create it directly, without going through the spec functions (current temporary solution on experiment branch).\r\n\r\n## Status\r\n\r\nTalking about the status on [`spectest-deco` PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052) here, based on the `v06x` branch, where we are developing 0.6 improvements. (to be merged back into dev later)\r\n\r\n### The testing pipeline currently looks like:\r\n\r\n- py-spec, calls BLS stub\r\n- test-helpers, don't create self-signed objects with valid signatures\r\n- py-test code, unified with test-vector-creation (see [PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052))\r\n- py-test runner to run spec-tests, purely for assertions\r\n- test-generator running the spec-tests, passing `generator_mode=true` to each of them, making them output a test-vector.\r\n\r\n### Pytests status:\r\n\r\n- move from `tests/` to `eth2spec/test`, i.e. part of package\r\n - removed use of `pytest`\r\n - annotated with `@spec_test` or similar (see PR 1052)\r\n- as part of test-generation effort, yay for shared effort:\r\n - expanded in block-operation testing: [coverage checklist here](https://github.com/ethereum/eth2.0-specs/issues/927)\r\n - slightly faster, less deep-copies\r\n- stuck on BLS stub (no sig creation/verification)\r\n\r\n### Test-generation status:\r\n\r\n- BLS, SSZ-generic, SSZ-static, shuffling test generators still all in place and up to date (`v06x` branch)\r\n- `operations` test-gen uses test-package ability to output test-vectors for each test-case\r\n - but no valid signatures\r\n - lack of a definition how to handle this signature problem as a test-consumer\r\n - there are no signature-related testcases\r\n - turning BLS off would effectively let you check conformance, but it's hacky, and not remotely a good practice to have even an option for...\r\n - it's approx. ~140MB worth (iirc) of yaml encoded state-transitions, covering many edge-cases. Worth to get in the hands of implementers quick.\r\n- `sanity` tests updated and can be cleanly used for test-generation, but requires more work to define the format of the test-vectors, as they is more variety.\r\n- `epoch` processing tests also updated, also can be used, not as complete as block-processing, lower priority.\r\n\r\n## Possible ways forward:\r\n\r\n- Simple but hacky: \"turn BLS off for testing\"\r\n- No \"BLS off\", BLS ON on client side, but only partially on spec side. 
Rely on signature verification not being hit before anything else during testing\r\n - valid test cases generated with valid signatures\r\n - invalid test cases marked: does it error because of BLS? And runners should check the reason for aborting processing: if it doesn't match, the test should fail. Now these pytests don't need full BLS update work, and can be released somewhat quicker\r\n- \"BLS on\", more work (~1 week)\r\n - slower on test-generation, but we get the best kind of test-vectors: correct, BLS verification ON.\r\n - blocker: what if a test case fails because of a signature error (test setup not creating the sig correctly), instead of a real assertion case. Spec will look correct, passes tests, but things are not right. We need to mark Sig-verification errors distinctly, so we can catch these problems when we turn BLS on in the pyspec. How: instead of `assert verify_...`, just `verify_...`, and make it raise a special `BLSVerificationError` (or something like that)\r\n - We likely still want to mark tests as \"signature related\" or not, so implementers can catch it easily if their code is not aborting properly before signature verification, to assure invalid inputs are not costly.\r\n\r\nA work-in-progress introduction of actual full BLS usage in the pytests is started here: [`tests-with-sigs` branch](https://github.com/ethereum/eth2.0-specs/tree/tests-with-sigs)\r\n\r\nSuggestions welcome.\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import sys\nimport function_puller\n\n\ndef build_phase0_spec(sourcefile, outfile):\n code_lines = []\n code_lines.append(\"\"\"\nfrom typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\nfrom eth2spec.utils.minimal_ssz import *\nfrom eth2spec.utils.bls_stub import *\n\n\"\"\")\n for i in (1, 2, 3, 4, 8, 32, 48, 96):\n code_lines.append(\"def int_to_bytes%d(x): return x.to_bytes(%d, 'little')\" % (i, i))\n\n code_lines.append(\"\"\"\n\n# stub, will get overwritten by real var\nSLOTS_PER_EPOCH = 64\n\n\nSlot = NewType('Slot', int) # uint64\nEpoch = NewType('Epoch', int) # uint64\nShard = NewType('Shard', int) # uint64\nValidatorIndex = NewType('ValidatorIndex', int) # uint64\nGwei = NewType('Gwei', int) # uint64\nBytes32 = NewType('Bytes32', bytes) # bytes32\nBLSPubkey = NewType('BLSPubkey', bytes) # bytes48\nBLSSignature = NewType('BLSSignature', bytes) # bytes96\nStore = None\n\"\"\")\n\n code_lines += function_puller.get_spec(sourcefile)\n\n code_lines.append(\"\"\"\n# Monkey patch validator compute committee code\n_compute_committee = compute_committee\ncommittee_cache = {}\n\n\ndef compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:\n param_hash = (hash_tree_root(indices), seed, index, count)\n\n if param_hash in committee_cache:\n return committee_cache[param_hash]\n else:\n ret = _compute_committee(indices, seed, index, count)\n committee_cache[param_hash] = ret\n return ret\n\n\n# Monkey patch hash cache\n_hash = hash\nhash_cache = {}\n\n\ndef hash(x):\n if x in hash_cache:\n return hash_cache[x]\n else:\n ret = _hash(x)\n hash_cache[x] = ret\n return ret\n\n# Access to overwrite spec constants based on configuration\ndef apply_constants_preset(preset: Dict[str, Any]):\n global_vars = globals()\n for k, v in preset.items():\n global_vars[k] = v\n\n # Deal with derived constants\n global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)\n\n # Initialize SSZ types again, to account for changed lengths\n init_SSZ_types()\n\"\"\")\n\n with open(outfile, 'w') 
as out:\n out.write(\"\\n\".join(code_lines))\n\n\nif __name__ == '__main__':\n if len(sys.argv) < 3:\n print(\"Usage: <source phase0> <output phase0 pyspec>\")\n build_phase0_spec(sys.argv[1], sys.argv[2])\n\n", "path": "scripts/phase0/build_spec.py"}]}
| 2,439 | 121 |
gh_patches_debug_20147
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-447
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
After creating a new organization it should appear in the pending approval menu
Please make sure that, if a user adds an organization, the Pending Approval menu is updated.
http://staging.changelog.qgis.org/en/qgis/pending-certifyingorganisation/list/
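One possible approach, mirroring the pattern the middleware already uses for pending versions and sponsorship levels (the context-key name below is an assumption):

```python
# Sketch for django_project/core/custom_middleware.py, to sit inside
# NavContextMiddleware.process_template_response() next to the other
# has_pending_* flags.
from certification.models import CertifyingOrganisation

context['has_pending_organisations'] = (
    CertifyingOrganisation.unapproved_objects.filter(
        project=context.get('project')).exists())
```

The navigation template can then show a pending-approval entry for certifying organisations whenever this flag is true.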
</issue>
<code>
[start of django_project/core/custom_middleware.py]
1 # coding=utf-8
2 # flake8: noqa
3 """
4 core.custom_middleware
5 """
6 from base.models import Project, Version
7 from changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry
8
9
10 class NavContextMiddleware(object):
11 """
12 Adds the required navigation variables to each response
13 """
14
15 def __init__(self):
16 pass
17
18 @staticmethod
19 def process_template_response(request, response):
20 """
21 Add 'the_project', 'the_entry', 'the_version' to context for the
22 navigation.
23
24 Justification: To make the navigation functional, we need to know
25 which Project (or Version, Committee etc) the current context
26 relates to. This is required for URLs. Rather than include lots of
27 if/else in the navigation template, it seems cleaner to add the
28 above variables to the context here.
29
30 :param request: Http Request obj
31 :param response: Http Response obj
32 :return: context :rtype: dict
33 """
34 context = response.context_data
35
36 if context.get('project', None):
37 context['the_project'] = context.get('project')
38 versions = Version.objects.filter(project=context.get('project'))
39 context['has_pending_versions'] = (
40 Version.unapproved_objects.filter(
41 project=context.get('project')).exists())
42 context['has_pending_categories'] = (
43 Category.unapproved_objects.filter(
44 project=context.get('project')).exists())
45 context['has_pending_sponsor_lvl'] = (
46 SponsorshipLevel.unapproved_objects.filter(
47 project=context.get('project')).exists())
48 context['has_pending_sponsor_period'] = (
49 SponsorshipPeriod.unapproved_objects.filter(
50 project=context.get('project')).exists())
51 if versions:
52 context['has_pending_entries'] = (
53 Entry.unapproved_objects.filter(
54 version__in=versions).exists())
55
56 else:
57 if request.user.is_staff:
58 context['the_projects'] = Project.objects.all()
59 else:
60 context['the_projects'] = Project.approved_objects.filter(
61 private=False
62 )
63
64 if context.get('version', None):
65 context['the_version'] = context.get('version')
66 context['the_project'] = context.get('version').project
67
68 if context.get('committee', None):
69 context['the_committee'] = context.get('committee')
70 context['the_project'] = context.get('committee').project
71
72 if context.get('ballot', None):
73 context['the_committee'] = context.get('ballot').committee
74 context['the_project'] = context.get('ballot').committee.project
75
76 if context.get('category', None):
77 context['the_project'] = context.get('category').project
78
79 if context.get('ballots', None):
80 try:
81 context['the_project'] = \
82 context.get('ballots')[0].committee.project
83 except (KeyError, IndexError):
84 pass
85
86 if context.get('entry', None):
87 context['the_entry'] = context.get('entry')
88 context['the_version'] = context.get('entry').version
89 context['the_project'] = context.get('entry').version.project
90
91 if context.get('committees', None):
92 try:
93 context['the_project'] = context.get('committees')[0].project
94 except (KeyError, IndexError):
95 pass
96
97 if context.get('versions', None):
98 try:
99 context['the_project'] = context.get('versions')[0].project
100 except (KeyError, IndexError):
101 pass
102
103 if context.get('entries', None):
104 try:
105 context['the_version'] = context.get('entries')[0].version
106 context['the_project'] = \
107 context.get('entries')[0].version.project
108 except (KeyError, IndexError):
109 pass
110
111 if context.get('categories', None):
112 try:
113 context['the_project'] = \
114 context.get('categories')[0].project
115 except (KeyError, IndexError):
116 pass
117
118 return response
119
[end of django_project/core/custom_middleware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py
--- a/django_project/core/custom_middleware.py
+++ b/django_project/core/custom_middleware.py
@@ -5,6 +5,7 @@
"""
from base.models import Project, Version
from changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry
+from certification.models import CertifyingOrganisation
class NavContextMiddleware(object):
@@ -48,6 +49,9 @@
context['has_pending_sponsor_period'] = (
SponsorshipPeriod.unapproved_objects.filter(
project=context.get('project')).exists())
+ context['has_pending_organisations'] = (
+ CertifyingOrganisation.unapproved_objects.filter(
+ project=context.get('project')).exists())
if versions:
context['has_pending_entries'] = (
Entry.unapproved_objects.filter(
|
{"golden_diff": "diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py\n--- a/django_project/core/custom_middleware.py\n+++ b/django_project/core/custom_middleware.py\n@@ -5,6 +5,7 @@\n \"\"\"\n from base.models import Project, Version\n from changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry\n+from certification.models import CertifyingOrganisation\n \n \n class NavContextMiddleware(object):\n@@ -48,6 +49,9 @@\n context['has_pending_sponsor_period'] = (\n SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists())\n+ context['has_pending_organisations'] = (\n+ CertifyingOrganisation.unapproved_objects.filter(\n+ project=context.get('project')).exists())\n if versions:\n context['has_pending_entries'] = (\n Entry.unapproved_objects.filter(\n", "issue": "After creating a new organization it should appear in the pending approval menu\nPlease make sure if a user adds an organization the Pending Approval menu is updated\r\n\r\nhttp://staging.changelog.qgis.org/en/qgis/pending-certifyingorganisation/list/\n", "before_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project, Version\nfrom changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. 
Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n versions = Version.objects.filter(project=context.get('project'))\n context['has_pending_versions'] = (\n Version.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_categories'] = (\n Category.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_sponsor_lvl'] = (\n SponsorshipLevel.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_sponsor_period'] = (\n SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists())\n if versions:\n context['has_pending_entries'] = (\n Entry.unapproved_objects.filter(\n version__in=versions).exists())\n\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}]}
| 1,700 | 190 |
gh_patches_debug_35988
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-2175
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incompatibility between `@angular_freq_to_hz` and var-keyword arguments
### Bug description
While trying to decorate `gyrofrequency` with `@particle_input` in #2026, I found an issue with `@angular_freq_to_hz`. It appears that `@angular_freq_to_hz` cannot decorate functions that accept var-keyword arguments.
### Expected outcome
We should be able to use `@angular_freq_to_hz` to decorate functions with var-keyword parameters.
### Minimal complete verifiable example
When declaring this function:
```Python
from plasmapy.utils.decorators import angular_freq_to_hz
@angular_freq_to_hz
def f(**kwargs):
return kwargs
```
I get:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[41], line 1
----> 1 @angular_freq_to_hz
2 def f(**kwargs):
3 return kwargs
File ~/Projects/PlasmaPy/plasmapy/utils/decorators/converter.py:101, in angular_freq_to_hz(fn)
97 new_params = sig.parameters.copy()
98 new_params["to_hz"] = inspect.Parameter(
99 "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
100 )
--> 101 new_sig = inspect.Signature(
102 parameters=new_params.values(), return_annotation=sig.return_annotation
103 )
104 fn.__signature__ = new_sig
106 @preserve_signature
107 @functools.wraps(fn)
108 def wrapper(*args, to_hz=False, **kwargs):
File ~/miniconda3/envs/pldev/lib/python3.11/inspect.py:2994, in Signature.__init__(self, parameters, return_annotation, __validate_parameters__)
2988 msg = (
2989 'wrong parameter order: {} parameter before {} '
2990 'parameter'
2991 )
2992 msg = msg.format(top_kind.description,
2993 kind.description)
-> 2994 raise ValueError(msg)
2995 elif kind > top_kind:
2996 kind_defaults = False
ValueError: wrong parameter order: variadic keyword parameter before positional or keyword parameter
```
### Package versions
Development branch
### Additional context
This is medium priority to address since it's blocking #2026 and possibly also #2022.
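For reference, the ordering constraint can be reproduced with `inspect.Signature` alone; this is a standalone sketch, independent of PlasmaPy:

```python
import inspect

var_kw = inspect.Parameter("kwargs", inspect.Parameter.VAR_KEYWORD)
to_hz_pos = inspect.Parameter(
    "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
)

# Appending a positional-or-keyword parameter after **kwargs is rejected:
try:
    inspect.Signature([var_kw, to_hz_pos])
except ValueError as exc:
    print(exc)  # wrong parameter order: variadic keyword parameter before ...

# A keyword-only parameter ordered before the var-keyword parameter is accepted:
to_hz_kw = inspect.Parameter("to_hz", inspect.Parameter.KEYWORD_ONLY, default=False)
print(inspect.Signature([to_hz_kw, var_kw]))  # (*, to_hz=False, **kwargs)
```

So one way out is for the decorator to add `to_hz` as a keyword-only parameter and re-append any var-keyword parameter after it when rebuilding the signature.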
</issue>
<code>
[start of plasmapy/utils/decorators/converter.py]
1 """Decorators to convert units."""
2
3 __all__ = ["angular_freq_to_hz"]
4
5 import astropy.units as u
6 import functools
7 import inspect
8
9 from plasmapy.utils.decorators.helpers import preserve_signature
10
11
12 def angular_freq_to_hz(fn):
13 """
14 A decorator that enables a function to convert its return
15 value from angular frequency (rad/s) to frequency (Hz).
16
17 A kwarg ``to_hz`` is added to the function's signature, with a
18 default value of `False`. The keyword is also added to the
19 function's docstring under the **"Other Parameters"** heading.
20
21 Parameters
22 ----------
23 fn : function
24 The function to be decorated.
25
26 Raises
27 ------
28 ValueError
29 If ``fn`` has already defined a kwarg ``to_hz``.
30
31 Returns
32 -------
33 callable
34 The decorated function.
35
36 Notes
37 -----
38 * If `~plasmapy.utils.decorators.converter.angular_freq_to_hz` is
39 used with decorator
40 :func:`~plasmapy.utils.decorators.validators.validate_quantities`,
41 then `angular_freq_to_hz` should be used inside
42 :func:`~plasmapy.utils.decorators.validators.validate_quantities`
43 but special consideration is needed for setup. The following is
44 an example of an appropriate setup::
45
46 import astropy.units as u
47 from plasmapy.utils.decorators.converter import angular_freq_to_hz
48 from plasmapy.utils.decorators.validators import validate_quantities
49
50 @validate_quantities(validations_on_return={'units': [u.rad / u.s, u.Hz]})
51 @angular_freq_to_hz
52 def foo(x: u.rad / u.s) -> u.rad / u.s
53 return x
54
55 Adding ``u.Hz`` to the allowed units allows the converted
56 quantity to pass the validations.
57
58 Examples
59 --------
60 >>> import astropy.units as u
61 >>> from plasmapy.utils.decorators.converter import angular_freq_to_hz
62 >>>
63 >>> @angular_freq_to_hz
64 ... def foo(x):
65 ... return x
66 >>>
67 >>> foo(5 * u.rad / u.s, to_hz=True)
68 <Quantity 0.79577472 Hz>
69 >>>
70 >>> foo(-1 * u.rad / u.s, to_hz=True)
71 <Quantity -0.15915494 Hz>
72
73 Decoration also works with methods
74
75 >>> class Foo:
76 ... def __init__(self, x):
77 ... self.x = x
78 ...
79 ... @angular_freq_to_hz
80 ... def bar(self):
81 ... return self.x
82 >>>
83 >>> foo = Foo(0.5 * u.rad / u.s)
84 >>> foo.bar(to_hz=True)
85 <Quantity 0.07957747 Hz>
86
87 """
88 # raise exception if fn uses the 'to_hz' kwarg
89 sig = inspect.signature(fn)
90 if "to_hz" in sig.parameters:
91 raise ValueError(
92 f"Wrapped function '{fn.__name__}' can not use keyword 'to_hz'."
93 f" Keyword reserved for decorator functionality."
94 )
95
96 # make new signature for fn
97 new_params = sig.parameters.copy()
98 new_params["to_hz"] = inspect.Parameter(
99 "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
100 )
101 new_sig = inspect.Signature(
102 parameters=new_params.values(), return_annotation=sig.return_annotation
103 )
104 fn.__signature__ = new_sig
105
106 @preserve_signature
107 @functools.wraps(fn)
108 def wrapper(*args, to_hz=False, **kwargs):
109 _result = fn(*args, **kwargs)
110 if to_hz:
111 return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])
112 return _result
113
114 added_doc_bit = """
115 Other Parameters
116 ----------------
117 to_hz: bool
118 Set `True` to to convert function output from angular frequency to Hz
119 """
120 if wrapper.__doc__ is not None:
121 wrapper.__doc__ += added_doc_bit
122 else:
123 wrapper.__doc__ = added_doc_bit
124
125 return wrapper
126
[end of plasmapy/utils/decorators/converter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plasmapy/utils/decorators/converter.py b/plasmapy/utils/decorators/converter.py
--- a/plasmapy/utils/decorators/converter.py
+++ b/plasmapy/utils/decorators/converter.py
@@ -3,10 +3,8 @@
__all__ = ["angular_freq_to_hz"]
import astropy.units as u
-import functools
import inspect
-
-from plasmapy.utils.decorators.helpers import preserve_signature
+import wrapt
def angular_freq_to_hz(fn):
@@ -85,7 +83,6 @@
<Quantity 0.07957747 Hz>
"""
- # raise exception if fn uses the 'to_hz' kwarg
sig = inspect.signature(fn)
if "to_hz" in sig.parameters:
raise ValueError(
@@ -94,32 +91,45 @@
)
# make new signature for fn
- new_params = sig.parameters.copy()
- new_params["to_hz"] = inspect.Parameter(
- "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
+ new_params = []
+ var_keyword_param = None
+ for param in sig.parameters.values():
+ if param.kind == param.VAR_KEYWORD:
+ var_keyword_param = param
+ else:
+ new_params.append(param)
+
+ new_params.append(
+ inspect.Parameter("to_hz", inspect.Parameter.KEYWORD_ONLY, default=False)
)
+
+ if var_keyword_param:
+ new_params.append(var_keyword_param)
+
new_sig = inspect.Signature(
- parameters=new_params.values(), return_annotation=sig.return_annotation
+ parameters=new_params, return_annotation=sig.return_annotation
)
fn.__signature__ = new_sig
- @preserve_signature
- @functools.wraps(fn)
- def wrapper(*args, to_hz=False, **kwargs):
+ @wrapt.decorator
+ def wrapper(fn, instance, args, kwargs): # noqa: ARG001
+ to_hz = kwargs.pop("to_hz", False)
_result = fn(*args, **kwargs)
if to_hz:
return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])
return _result
+ fn = wrapper(fn)
+
added_doc_bit = """
Other Parameters
----------------
to_hz: bool
- Set `True` to to convert function output from angular frequency to Hz
+ Set `True` to convert function output from angular frequency to Hz
"""
- if wrapper.__doc__ is not None:
- wrapper.__doc__ += added_doc_bit
+ if fn.__doc__ is not None:
+ fn.__doc__ += added_doc_bit
else:
- wrapper.__doc__ = added_doc_bit
+ fn.__doc__ = added_doc_bit
- return wrapper
+ return fn
|
{"golden_diff": "diff --git a/plasmapy/utils/decorators/converter.py b/plasmapy/utils/decorators/converter.py\n--- a/plasmapy/utils/decorators/converter.py\n+++ b/plasmapy/utils/decorators/converter.py\n@@ -3,10 +3,8 @@\n __all__ = [\"angular_freq_to_hz\"]\n \n import astropy.units as u\n-import functools\n import inspect\n-\n-from plasmapy.utils.decorators.helpers import preserve_signature\n+import wrapt\n \n \n def angular_freq_to_hz(fn):\n@@ -85,7 +83,6 @@\n <Quantity 0.07957747 Hz>\n \n \"\"\"\n- # raise exception if fn uses the 'to_hz' kwarg\n sig = inspect.signature(fn)\n if \"to_hz\" in sig.parameters:\n raise ValueError(\n@@ -94,32 +91,45 @@\n )\n \n # make new signature for fn\n- new_params = sig.parameters.copy()\n- new_params[\"to_hz\"] = inspect.Parameter(\n- \"to_hz\", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False\n+ new_params = []\n+ var_keyword_param = None\n+ for param in sig.parameters.values():\n+ if param.kind == param.VAR_KEYWORD:\n+ var_keyword_param = param\n+ else:\n+ new_params.append(param)\n+\n+ new_params.append(\n+ inspect.Parameter(\"to_hz\", inspect.Parameter.KEYWORD_ONLY, default=False)\n )\n+\n+ if var_keyword_param:\n+ new_params.append(var_keyword_param)\n+\n new_sig = inspect.Signature(\n- parameters=new_params.values(), return_annotation=sig.return_annotation\n+ parameters=new_params, return_annotation=sig.return_annotation\n )\n fn.__signature__ = new_sig\n \n- @preserve_signature\n- @functools.wraps(fn)\n- def wrapper(*args, to_hz=False, **kwargs):\n+ @wrapt.decorator\n+ def wrapper(fn, instance, args, kwargs): # noqa: ARG001\n+ to_hz = kwargs.pop(\"to_hz\", False)\n _result = fn(*args, **kwargs)\n if to_hz:\n return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])\n return _result\n \n+ fn = wrapper(fn)\n+\n added_doc_bit = \"\"\"\n Other Parameters\n ----------------\n to_hz: bool\n- Set `True` to to convert function output from angular frequency to Hz\n+ Set `True` to convert function output from angular frequency to Hz\n \"\"\"\n- if wrapper.__doc__ is not None:\n- wrapper.__doc__ += added_doc_bit\n+ if fn.__doc__ is not None:\n+ fn.__doc__ += added_doc_bit\n else:\n- wrapper.__doc__ = added_doc_bit\n+ fn.__doc__ = added_doc_bit\n \n- return wrapper\n+ return fn\n", "issue": "Incompatibility between `@angular_freq_to_hz` and var-keyword arguments\n### Bug description\r\n\r\nWhile trying to decorate `gyrofrequency` with `@particle_input` in #2026, I found an issue with `@angular_freq_to_hz`. 
It appears that `@angular_freq_to_hz` cannot decorate functions that accept var-keyword arguments.\r\n\r\n### Expected outcome\r\n\r\nWe should be able to use `@angular_freq_to_hz` to decorate functions with var-keyword parameters.\r\n\r\n### Minimal complete verifiable example\r\n\r\nWhen declaring this function:\r\n\r\n```Python\r\nfrom plasmapy.utils.decorators import angular_freq_to_hz\r\n@angular_freq_to_hz\r\ndef f(**kwargs):\r\n return kwargs\r\n```\r\nI get:\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[41], line 1\r\n----> 1 @angular_freq_to_hz\r\n 2 def f(**kwargs):\r\n 3 return kwargs\r\n\r\nFile ~/Projects/PlasmaPy/plasmapy/utils/decorators/converter.py:101, in angular_freq_to_hz(fn)\r\n 97 new_params = sig.parameters.copy()\r\n 98 new_params[\"to_hz\"] = inspect.Parameter(\r\n 99 \"to_hz\", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False\r\n 100 )\r\n--> 101 new_sig = inspect.Signature(\r\n 102 parameters=new_params.values(), return_annotation=sig.return_annotation\r\n 103 )\r\n 104 fn.__signature__ = new_sig\r\n 106 @preserve_signature\r\n 107 @functools.wraps(fn)\r\n 108 def wrapper(*args, to_hz=False, **kwargs):\r\n\r\nFile ~/miniconda3/envs/pldev/lib/python3.11/inspect.py:2994, in Signature.__init__(self, parameters, return_annotation, __validate_parameters__)\r\n 2988 msg = (\r\n 2989 'wrong parameter order: {} parameter before {} '\r\n 2990 'parameter'\r\n 2991 )\r\n 2992 msg = msg.format(top_kind.description,\r\n 2993 kind.description)\r\n-> 2994 raise ValueError(msg)\r\n 2995 elif kind > top_kind:\r\n 2996 kind_defaults = False\r\n\r\nValueError: wrong parameter order: variadic keyword parameter before positional or keyword parameter\r\n```\r\n\r\n\r\n### Package versions\r\n\r\nDevelopment branch \r\n\r\n### Additional context\r\n\r\nThis is medium priority to address since it's blocking #2026 and possibly also #2022.\n", "before_files": [{"content": "\"\"\"Decorators to convert units.\"\"\"\n\n__all__ = [\"angular_freq_to_hz\"]\n\nimport astropy.units as u\nimport functools\nimport inspect\n\nfrom plasmapy.utils.decorators.helpers import preserve_signature\n\n\ndef angular_freq_to_hz(fn):\n \"\"\"\n A decorator that enables a function to convert its return\n value from angular frequency (rad/s) to frequency (Hz).\n\n A kwarg ``to_hz`` is added to the function's signature, with a\n default value of `False`. The keyword is also added to the\n function's docstring under the **\"Other Parameters\"** heading.\n\n Parameters\n ----------\n fn : function\n The function to be decorated.\n\n Raises\n ------\n ValueError\n If ``fn`` has already defined a kwarg ``to_hz``.\n\n Returns\n -------\n callable\n The decorated function.\n\n Notes\n -----\n * If `~plasmapy.utils.decorators.converter.angular_freq_to_hz` is\n used with decorator\n :func:`~plasmapy.utils.decorators.validators.validate_quantities`,\n then `angular_freq_to_hz` should be used inside\n :func:`~plasmapy.utils.decorators.validators.validate_quantities`\n but special consideration is needed for setup. 
The following is\n an example of an appropriate setup::\n\n import astropy.units as u\n from plasmapy.utils.decorators.converter import angular_freq_to_hz\n from plasmapy.utils.decorators.validators import validate_quantities\n\n @validate_quantities(validations_on_return={'units': [u.rad / u.s, u.Hz]})\n @angular_freq_to_hz\n def foo(x: u.rad / u.s) -> u.rad / u.s\n return x\n\n Adding ``u.Hz`` to the allowed units allows the converted\n quantity to pass the validations.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> from plasmapy.utils.decorators.converter import angular_freq_to_hz\n >>>\n >>> @angular_freq_to_hz\n ... def foo(x):\n ... return x\n >>>\n >>> foo(5 * u.rad / u.s, to_hz=True)\n <Quantity 0.79577472 Hz>\n >>>\n >>> foo(-1 * u.rad / u.s, to_hz=True)\n <Quantity -0.15915494 Hz>\n\n Decoration also works with methods\n\n >>> class Foo:\n ... def __init__(self, x):\n ... self.x = x\n ...\n ... @angular_freq_to_hz\n ... def bar(self):\n ... return self.x\n >>>\n >>> foo = Foo(0.5 * u.rad / u.s)\n >>> foo.bar(to_hz=True)\n <Quantity 0.07957747 Hz>\n\n \"\"\"\n # raise exception if fn uses the 'to_hz' kwarg\n sig = inspect.signature(fn)\n if \"to_hz\" in sig.parameters:\n raise ValueError(\n f\"Wrapped function '{fn.__name__}' can not use keyword 'to_hz'.\"\n f\" Keyword reserved for decorator functionality.\"\n )\n\n # make new signature for fn\n new_params = sig.parameters.copy()\n new_params[\"to_hz\"] = inspect.Parameter(\n \"to_hz\", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False\n )\n new_sig = inspect.Signature(\n parameters=new_params.values(), return_annotation=sig.return_annotation\n )\n fn.__signature__ = new_sig\n\n @preserve_signature\n @functools.wraps(fn)\n def wrapper(*args, to_hz=False, **kwargs):\n _result = fn(*args, **kwargs)\n if to_hz:\n return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])\n return _result\n\n added_doc_bit = \"\"\"\n Other Parameters\n ----------------\n to_hz: bool\n Set `True` to to convert function output from angular frequency to Hz\n \"\"\"\n if wrapper.__doc__ is not None:\n wrapper.__doc__ += added_doc_bit\n else:\n wrapper.__doc__ = added_doc_bit\n\n return wrapper\n", "path": "plasmapy/utils/decorators/converter.py"}]}
| 2,307 | 647 |
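
Editor's note on the plasmapy record above: its golden diff cures the `ValueError: wrong parameter order` by rebuilding the wrapped function's signature so the added `to_hz` parameter is keyword-only and any `**kwargs` parameter stays last. A minimal standalone sketch of that signature-rebuilding step is below; the helper name and the sample function are illustrative, not code from plasmapy.

```python
import inspect


def add_keyword_only_param(fn, name, default):
    """Return fn's signature with an extra keyword-only parameter.

    The new parameter is inserted before any **kwargs parameter, because
    inspect.Signature rejects parameters that come after VAR_KEYWORD.
    """
    sig = inspect.signature(fn)
    params, var_keyword = [], None
    for param in sig.parameters.values():
        if param.kind == param.VAR_KEYWORD:
            var_keyword = param  # hold **kwargs back until the end
        else:
            params.append(param)
    params.append(
        inspect.Parameter(name, inspect.Parameter.KEYWORD_ONLY, default=default)
    )
    if var_keyword is not None:
        params.append(var_keyword)
    return sig.replace(parameters=params)


def f(x, **kwargs):
    return x, kwargs


print(add_keyword_only_param(f, "to_hz", False))  # (x, *, to_hz=False, **kwargs)
```
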
gh_patches_debug_3596
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-2170
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Identity spoofing via secondary email
See https://github.com/pennersr/django-allauth/issues/2265
cc: @CarolingerSeilchenspringer @MagdaN @fuzzylogic2000
</issue>
<code>
[start of meinberlin/apps/users/adapters.py]
1 import re
2 from urllib.parse import quote
3
4 from allauth.account.adapter import DefaultAccountAdapter
5 from django.conf import settings
6 from django.utils.http import is_safe_url
7
8 from adhocracy4.emails.mixins import SyncEmailMixin
9 from meinberlin.apps.contrib.emails import Email
10 from meinberlin.apps.users import USERNAME_INVALID_MESSAGE
11 from meinberlin.apps.users import USERNAME_REGEX
12
13
14 class UserAccountEmail(SyncEmailMixin, Email):
15 def get_receivers(self):
16 return [self.object]
17
18 @property
19 def template_name(self):
20 return self.kwargs['template_name']
21
22 def get_context(self):
23 context = super().get_context()
24 context['contact_email'] = settings.CONTACT_EMAIL
25 return context
26
27
28 class AccountAdapter(DefaultAccountAdapter):
29 username_regex = re.compile(USERNAME_REGEX)
30 error_messages = dict(
31 DefaultAccountAdapter.error_messages,
32 invalid_username=USERNAME_INVALID_MESSAGE
33 )
34
35 def get_email_confirmation_url(self, request, emailconfirmation):
36 url = super().get_email_confirmation_url(request, emailconfirmation)
37 if 'next' in request.POST and is_safe_url(request.POST['next']):
38 return '{}?next={}'.format(url, quote(request.POST['next']))
39 else:
40 return url
41
42 def send_mail(self, template_prefix, email, context):
43 user = context['user']
44 return UserAccountEmail.send(
45 user,
46 template_name=template_prefix,
47 **context
48 )
49
50 def get_email_confirmation_redirect_url(self, request):
51 if 'next' in request.GET and is_safe_url(request.GET['next']):
52 return request.GET['next']
53 else:
54 return super().get_email_confirmation_redirect_url(request)
55
[end of meinberlin/apps/users/adapters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/users/adapters.py b/meinberlin/apps/users/adapters.py
--- a/meinberlin/apps/users/adapters.py
+++ b/meinberlin/apps/users/adapters.py
@@ -40,9 +40,8 @@
return url
def send_mail(self, template_prefix, email, context):
- user = context['user']
return UserAccountEmail.send(
- user,
+ email,
template_name=template_prefix,
**context
)
|
{"golden_diff": "diff --git a/meinberlin/apps/users/adapters.py b/meinberlin/apps/users/adapters.py\n--- a/meinberlin/apps/users/adapters.py\n+++ b/meinberlin/apps/users/adapters.py\n@@ -40,9 +40,8 @@\n return url\n \n def send_mail(self, template_prefix, email, context):\n- user = context['user']\n return UserAccountEmail.send(\n- user,\n+ email,\n template_name=template_prefix,\n **context\n )\n", "issue": "Identity spoofing via secondary email\nSee https://github.com/pennersr/django-allauth/issues/2265\r\n\r\ncc: @CarolingerSeilchenspringer @MagdaN @fuzzylogic2000 \n", "before_files": [{"content": "import re\nfrom urllib.parse import quote\n\nfrom allauth.account.adapter import DefaultAccountAdapter\nfrom django.conf import settings\nfrom django.utils.http import is_safe_url\n\nfrom adhocracy4.emails.mixins import SyncEmailMixin\nfrom meinberlin.apps.contrib.emails import Email\nfrom meinberlin.apps.users import USERNAME_INVALID_MESSAGE\nfrom meinberlin.apps.users import USERNAME_REGEX\n\n\nclass UserAccountEmail(SyncEmailMixin, Email):\n def get_receivers(self):\n return [self.object]\n\n @property\n def template_name(self):\n return self.kwargs['template_name']\n\n def get_context(self):\n context = super().get_context()\n context['contact_email'] = settings.CONTACT_EMAIL\n return context\n\n\nclass AccountAdapter(DefaultAccountAdapter):\n username_regex = re.compile(USERNAME_REGEX)\n error_messages = dict(\n DefaultAccountAdapter.error_messages,\n invalid_username=USERNAME_INVALID_MESSAGE\n )\n\n def get_email_confirmation_url(self, request, emailconfirmation):\n url = super().get_email_confirmation_url(request, emailconfirmation)\n if 'next' in request.POST and is_safe_url(request.POST['next']):\n return '{}?next={}'.format(url, quote(request.POST['next']))\n else:\n return url\n\n def send_mail(self, template_prefix, email, context):\n user = context['user']\n return UserAccountEmail.send(\n user,\n template_name=template_prefix,\n **context\n )\n\n def get_email_confirmation_redirect_url(self, request):\n if 'next' in request.GET and is_safe_url(request.GET['next']):\n return request.GET['next']\n else:\n return super().get_email_confirmation_redirect_url(request)\n", "path": "meinberlin/apps/users/adapters.py"}]}
| 1,056 | 113 |
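
A note on the meinberlin record above: the one-line fix sends the confirmation mail to the address being confirmed (`email`) rather than to `context['user']`, which closes the allauth secondary-email spoofing vector. The self-contained sketch below only illustrates the behaviour the patch enforces; the recording class and argument names are assumptions for the sketch, not code from the repository.

```python
class RecordingEmail:
    """Stand-in for the project's email helper; records the receiver."""

    sent_to = None

    @classmethod
    def send(cls, receiver, **kwargs):
        cls.sent_to = receiver


def send_mail(template_prefix, email, context, email_cls=RecordingEmail):
    # After the patch the receiver is the address itself, not context['user'].
    return email_cls.send(email, template_name=template_prefix, **context)


send_mail("account/email_confirmation", "added-address@example.org",
          {"user": "account-owner"})
assert RecordingEmail.sent_to == "added-address@example.org"
```
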
gh_patches_debug_10828
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmdeploy-700
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytorch2onnx fails with mmedit models
error with master branch
```
TypeError: forward_dummy() got an unexpected keyword argument 'img_metas'
```
</issue>
<code>
[start of mmdeploy/apis/pytorch2onnx.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import os.path as osp
3 from typing import Any, Optional, Union
4
5 import mmcv
6 import torch
7
8 from mmdeploy.apis.core.pipeline_manager import no_mp
9 from mmdeploy.utils import (get_backend, get_dynamic_axes, get_input_shape,
10 get_onnx_config, load_config)
11 from .core import PIPELINE_MANAGER
12 from .onnx import export
13
14
15 @PIPELINE_MANAGER.register_pipeline()
16 def torch2onnx(img: Any,
17 work_dir: str,
18 save_file: str,
19 deploy_cfg: Union[str, mmcv.Config],
20 model_cfg: Union[str, mmcv.Config],
21 model_checkpoint: Optional[str] = None,
22 device: str = 'cuda:0'):
23 """Convert PyTorch model to ONNX model.
24
25 Examples:
26 >>> from mmdeploy.apis import torch2onnx
27 >>> img = 'demo.jpg'
28 >>> work_dir = 'work_dir'
29 >>> save_file = 'fcos.onnx'
30 >>> deploy_cfg = ('configs/mmdet/detection/'
31 'detection_onnxruntime_dynamic.py')
32 >>> model_cfg = ('mmdetection/configs/fcos/'
33 'fcos_r50_caffe_fpn_gn-head_1x_coco.py')
34 >>> model_checkpoint = ('checkpoints/'
35 'fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth')
36 >>> device = 'cpu'
37 >>> torch2onnx(img, work_dir, save_file, deploy_cfg, \
38 model_cfg, model_checkpoint, device)
39
40 Args:
41 img (str | np.ndarray | torch.Tensor): Input image used to assist
42 converting model.
43 work_dir (str): A working directory to save files.
44 save_file (str): Filename to save onnx model.
45 deploy_cfg (str | mmcv.Config): Deployment config file or
46 Config object.
47 model_cfg (str | mmcv.Config): Model config file or Config object.
48 model_checkpoint (str): A checkpoint path of PyTorch model,
49 defaults to `None`.
50 device (str): A string specifying device type, defaults to 'cuda:0'.
51 """
52 # load deploy_cfg if necessary
53 deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)
54 mmcv.mkdir_or_exist(osp.abspath(work_dir))
55
56 input_shape = get_input_shape(deploy_cfg)
57
58 # create model an inputs
59 from mmdeploy.apis import build_task_processor
60 task_processor = build_task_processor(model_cfg, deploy_cfg, device)
61
62 torch_model = task_processor.init_pytorch_model(model_checkpoint)
63 data, model_inputs = task_processor.create_input(img, input_shape)
64 input_metas = dict(img_metas=data.get('img_metas', None))
65 if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:
66 model_inputs = model_inputs[0]
67
68 # export to onnx
69 context_info = dict()
70 context_info['deploy_cfg'] = deploy_cfg
71 output_prefix = osp.join(work_dir,
72 osp.splitext(osp.basename(save_file))[0])
73 backend = get_backend(deploy_cfg).value
74
75 onnx_cfg = get_onnx_config(deploy_cfg)
76 opset_version = onnx_cfg.get('opset_version', 11)
77
78 input_names = onnx_cfg['input_names']
79 output_names = onnx_cfg['output_names']
80 axis_names = input_names + output_names
81 dynamic_axes = get_dynamic_axes(deploy_cfg, axis_names)
82 verbose = not onnx_cfg.get('strip_doc_string', True) or onnx_cfg.get(
83 'verbose', False)
84 keep_initializers_as_inputs = onnx_cfg.get('keep_initializers_as_inputs',
85 True)
86 optimize = onnx_cfg.get('optimize', False)
87 with no_mp():
88 export(
89 torch_model,
90 model_inputs,
91 input_metas=input_metas,
92 output_path_prefix=output_prefix,
93 backend=backend,
94 input_names=input_names,
95 output_names=output_names,
96 context_info=context_info,
97 opset_version=opset_version,
98 dynamic_axes=dynamic_axes,
99 verbose=verbose,
100 keep_initializers_as_inputs=keep_initializers_as_inputs,
101 optimize=optimize)
102
[end of mmdeploy/apis/pytorch2onnx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmdeploy/apis/pytorch2onnx.py b/mmdeploy/apis/pytorch2onnx.py
--- a/mmdeploy/apis/pytorch2onnx.py
+++ b/mmdeploy/apis/pytorch2onnx.py
@@ -61,7 +61,11 @@
torch_model = task_processor.init_pytorch_model(model_checkpoint)
data, model_inputs = task_processor.create_input(img, input_shape)
- input_metas = dict(img_metas=data.get('img_metas', None))
+ if 'img_metas' in data:
+ input_metas = dict(img_metas=data['img_metas'])
+ else:
+ # codebases like mmedit do not have img_metas argument
+ input_metas = None
if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:
model_inputs = model_inputs[0]
|
{"golden_diff": "diff --git a/mmdeploy/apis/pytorch2onnx.py b/mmdeploy/apis/pytorch2onnx.py\n--- a/mmdeploy/apis/pytorch2onnx.py\n+++ b/mmdeploy/apis/pytorch2onnx.py\n@@ -61,7 +61,11 @@\n \n torch_model = task_processor.init_pytorch_model(model_checkpoint)\n data, model_inputs = task_processor.create_input(img, input_shape)\n- input_metas = dict(img_metas=data.get('img_metas', None))\n+ if 'img_metas' in data:\n+ input_metas = dict(img_metas=data['img_metas'])\n+ else:\n+ # codebases like mmedit do not have img_metas argument\n+ input_metas = None\n if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:\n model_inputs = model_inputs[0]\n", "issue": "pytorch2onnx fails with mmedit models\nerror with master branch\r\n```\r\nTypeError: forward_dummy() got an unexpected keyword argument 'img_metas'\r\n```\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport os.path as osp\nfrom typing import Any, Optional, Union\n\nimport mmcv\nimport torch\n\nfrom mmdeploy.apis.core.pipeline_manager import no_mp\nfrom mmdeploy.utils import (get_backend, get_dynamic_axes, get_input_shape,\n get_onnx_config, load_config)\nfrom .core import PIPELINE_MANAGER\nfrom .onnx import export\n\n\n@PIPELINE_MANAGER.register_pipeline()\ndef torch2onnx(img: Any,\n work_dir: str,\n save_file: str,\n deploy_cfg: Union[str, mmcv.Config],\n model_cfg: Union[str, mmcv.Config],\n model_checkpoint: Optional[str] = None,\n device: str = 'cuda:0'):\n \"\"\"Convert PyTorch model to ONNX model.\n\n Examples:\n >>> from mmdeploy.apis import torch2onnx\n >>> img = 'demo.jpg'\n >>> work_dir = 'work_dir'\n >>> save_file = 'fcos.onnx'\n >>> deploy_cfg = ('configs/mmdet/detection/'\n 'detection_onnxruntime_dynamic.py')\n >>> model_cfg = ('mmdetection/configs/fcos/'\n 'fcos_r50_caffe_fpn_gn-head_1x_coco.py')\n >>> model_checkpoint = ('checkpoints/'\n 'fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth')\n >>> device = 'cpu'\n >>> torch2onnx(img, work_dir, save_file, deploy_cfg, \\\n model_cfg, model_checkpoint, device)\n\n Args:\n img (str | np.ndarray | torch.Tensor): Input image used to assist\n converting model.\n work_dir (str): A working directory to save files.\n save_file (str): Filename to save onnx model.\n deploy_cfg (str | mmcv.Config): Deployment config file or\n Config object.\n model_cfg (str | mmcv.Config): Model config file or Config object.\n model_checkpoint (str): A checkpoint path of PyTorch model,\n defaults to `None`.\n device (str): A string specifying device type, defaults to 'cuda:0'.\n \"\"\"\n # load deploy_cfg if necessary\n deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)\n mmcv.mkdir_or_exist(osp.abspath(work_dir))\n\n input_shape = get_input_shape(deploy_cfg)\n\n # create model an inputs\n from mmdeploy.apis import build_task_processor\n task_processor = build_task_processor(model_cfg, deploy_cfg, device)\n\n torch_model = task_processor.init_pytorch_model(model_checkpoint)\n data, model_inputs = task_processor.create_input(img, input_shape)\n input_metas = dict(img_metas=data.get('img_metas', None))\n if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:\n model_inputs = model_inputs[0]\n\n # export to onnx\n context_info = dict()\n context_info['deploy_cfg'] = deploy_cfg\n output_prefix = osp.join(work_dir,\n osp.splitext(osp.basename(save_file))[0])\n backend = get_backend(deploy_cfg).value\n\n onnx_cfg = get_onnx_config(deploy_cfg)\n opset_version = onnx_cfg.get('opset_version', 11)\n\n input_names = onnx_cfg['input_names']\n 
output_names = onnx_cfg['output_names']\n axis_names = input_names + output_names\n dynamic_axes = get_dynamic_axes(deploy_cfg, axis_names)\n verbose = not onnx_cfg.get('strip_doc_string', True) or onnx_cfg.get(\n 'verbose', False)\n keep_initializers_as_inputs = onnx_cfg.get('keep_initializers_as_inputs',\n True)\n optimize = onnx_cfg.get('optimize', False)\n with no_mp():\n export(\n torch_model,\n model_inputs,\n input_metas=input_metas,\n output_path_prefix=output_prefix,\n backend=backend,\n input_names=input_names,\n output_names=output_names,\n context_info=context_info,\n opset_version=opset_version,\n dynamic_axes=dynamic_axes,\n verbose=verbose,\n keep_initializers_as_inputs=keep_initializers_as_inputs,\n optimize=optimize)\n", "path": "mmdeploy/apis/pytorch2onnx.py"}]}
| 1,700 | 193 |
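
A note on the mmdeploy record above: the patch stops building `input_metas=dict(img_metas=...)` unconditionally and passes `None` when the preprocessing pipeline produced no `img_metas`, since mmedit's `forward_dummy()` accepts no such argument. The guard is sketched below; the `call_forward` helper is an assumption about how a `None` value would be consumed downstream, not mmdeploy code.

```python
def build_input_metas(data):
    """Only expose img_metas when the preprocessing pipeline provided it."""
    if "img_metas" in data:
        return {"img_metas": data["img_metas"]}
    return None  # codebases like mmedit take no img_metas argument


def call_forward(fn, model_inputs, input_metas):
    # Hypothetical consumer: skip the keyword entirely when metas are absent.
    if input_metas is None:
        return fn(model_inputs)
    return fn(model_inputs, **input_metas)
```
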
gh_patches_debug_12971
|
rasdani/github-patches
|
git_diff
|
Zeroto521__my-data-toolkit-514
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MAINT: Remove warning message
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [ ] whatsnew entry
</issue>
<code>
[start of dtoolkit/accessor/dataframe/values_to_dict.py]
1 from __future__ import annotations
2
3 import pandas as pd
4
5 from dtoolkit.accessor.register import register_dataframe_method
6 from dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa
7 from dtoolkit.util._decorator import deprecated_alias
8
9
10 @register_dataframe_method
11 @deprecated_alias(
12 warning_msg=(
13 "{func_name}'s parameter '{old_alias}' is deprecated and will be removed in "
14 "0.0.15. Please use the parameter '{new_alias}'. "
15 "(Warning added DToolKit 0.0.14)"
16 ),
17 few_as_key="ascending",
18 )
19 def values_to_dict(
20 df: pd.DataFrame,
21 order: list | tuple = None,
22 ascending: bool = True,
23 to_list: bool = True,
24 ) -> dict:
25 """
26 Convert :attr:`~pandas.DataFrame.values` to :class:`dict`.
27
28 Parameters
29 ----------
30 order : list or tuple, optional
31 The order of keys via given columns. If ``order`` is set, ``ascending``
32 will not work.
33
34 ascending : bool, default True
35 If True the key would use the few unique of column values first.
36
37 to_list : bool, default True
38 If True one element value will return :keyword:`list`.
39
40 Returns
41 -------
42 dict
43
44 See Also
45 --------
46 dtoolkit.accessor.series.values_to_dict
47
48 Notes
49 -----
50 The same key of values would be merged into :class:`list`.
51
52 Examples
53 --------
54 >>> import json
55 >>> import dtoolkit.accessor
56 >>> import pandas as pd
57 >>> df = pd.DataFrame(
58 ... {
59 ... "x" : ["A", "A", "B", "B", "B"],
60 ... "y" : ["a", "b", "c", "d", "d"],
61 ... "z" : ["1", "2", "3", "3", "4"],
62 ... }
63 ... )
64 >>> df
65 x y z
66 0 A a 1
67 1 A b 2
68 2 B c 3
69 3 B d 3
70 4 B d 4
71
72 Use few unique of column values as key first. The order of column unique values
73 number is `x` < `y` < `z`. So the result will be ``{x: {y: [z]} }``.
74
75 >>> print(json.dumps(df.values_to_dict(), indent=4))
76 {
77 "A": {
78 "a": [
79 "1"
80 ],
81 "b": [
82 "2"
83 ]
84 },
85 "B": {
86 "c": [
87 "3"
88 ],
89 "d": [
90 "3",
91 "4"
92 ]
93 }
94 }
95
96 Use many unique of column values as key first, the result will be
97 ``{y: {z: [x]} }``.
98
99 >>> print(json.dumps(df.values_to_dict(ascending=False), indent=4))
100 {
101 "a": {
102 "1": [
103 "A"
104 ]
105 },
106 "b": {
107 "2": [
108 "A"
109 ]
110 },
111 "c": {
112 "3": [
113 "B"
114 ]
115 },
116 "d": {
117 "3": [
118 "B"
119 ],
120 "4": [
121 "B"
122 ]
123 }
124 }
125
126 Output the arbitrary order like ``{z: x} or ``{x: {z: [y]} }``,
127 via ``order`` argument.
128
129 >>> print(json.dumps(df.values_to_dict(order=["x", "z"]), indent=4))
130 {
131 "A": [
132 "1",
133 "2"
134 ],
135 "B": [
136 "3",
137 "3",
138 "4"
139 ]
140 }
141 >>> print(json.dumps(df.values_to_dict(order=["x", "z", "y"]), indent=4))
142 {
143 "A": {
144 "1": [
145 "a"
146 ],
147 "2": [
148 "b"
149 ]
150 },
151 "B": {
152 "3": [
153 "c",
154 "d"
155 ],
156 "4": [
157 "d"
158 ]
159 }
160 }
161
162 It also could convert one column DataFrame. But ``ascending`` wouldn' work.
163 The result would be ``{index: [values]}``.
164
165 >>> print(json.dumps(df[["x"]].values_to_dict(), indent=4))
166 {
167 "0": [
168 "A"
169 ],
170 "1": [
171 "A"
172 ],
173 "2": [
174 "B"
175 ],
176 "3": [
177 "B"
178 ],
179 "4": [
180 "B"
181 ]
182 }
183
184 Unpack one element value list.
185
186 >>> print(json.dumps(df.values_to_dict(to_list=False), indent=4))
187 {
188 "A": {
189 "a": "1",
190 "b": "2"
191 },
192 "B": {
193 "c": "3",
194 "d": [
195 "3",
196 "4"
197 ]
198 }
199 }
200 """
201
202 if df.columns.__len__() == 1: # one columns DataFrame
203 return df.to_series().values_to_dict(to_list=to_list)
204
205 columns = order or (
206 df.nunique()
207 .sort_values(
208 ascending=ascending,
209 )
210 .index
211 )
212 return _dict(df[columns], to_list=to_list)
213
214
215 def _dict(df: pd.DataFrame, to_list: bool) -> dict:
216 key_column, *value_column = df.columns
217
218 if df.columns.__len__() == 2: # two column DataFrame
219 return df.to_series(
220 index_column=key_column,
221 value_column=value_column[0],
222 ).values_to_dict(to_list=to_list)
223
224 return {
225 key: _dict(
226 df.loc[df[key_column] == key, value_column],
227 to_list=to_list,
228 )
229 for key in df[key_column].unique()
230 }
231
[end of dtoolkit/accessor/dataframe/values_to_dict.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dtoolkit/accessor/dataframe/values_to_dict.py b/dtoolkit/accessor/dataframe/values_to_dict.py
--- a/dtoolkit/accessor/dataframe/values_to_dict.py
+++ b/dtoolkit/accessor/dataframe/values_to_dict.py
@@ -4,18 +4,9 @@
from dtoolkit.accessor.register import register_dataframe_method
from dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa
-from dtoolkit.util._decorator import deprecated_alias
@register_dataframe_method
-@deprecated_alias(
- warning_msg=(
- "{func_name}'s parameter '{old_alias}' is deprecated and will be removed in "
- "0.0.15. Please use the parameter '{new_alias}'. "
- "(Warning added DToolKit 0.0.14)"
- ),
- few_as_key="ascending",
-)
def values_to_dict(
df: pd.DataFrame,
order: list | tuple = None,
|
{"golden_diff": "diff --git a/dtoolkit/accessor/dataframe/values_to_dict.py b/dtoolkit/accessor/dataframe/values_to_dict.py\n--- a/dtoolkit/accessor/dataframe/values_to_dict.py\n+++ b/dtoolkit/accessor/dataframe/values_to_dict.py\n@@ -4,18 +4,9 @@\n \n from dtoolkit.accessor.register import register_dataframe_method\n from dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa\n-from dtoolkit.util._decorator import deprecated_alias\n \n \n @register_dataframe_method\n-@deprecated_alias(\n- warning_msg=(\n- \"{func_name}'s parameter '{old_alias}' is deprecated and will be removed in \"\n- \"0.0.15. Please use the parameter '{new_alias}'. \"\n- \"(Warning added DToolKit 0.0.14)\"\n- ),\n- few_as_key=\"ascending\",\n-)\n def values_to_dict(\n df: pd.DataFrame,\n order: list | tuple = None,\n", "issue": "MAINT: Remove warning message\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [ ] whatsnew entry\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport pandas as pd\n\nfrom dtoolkit.accessor.register import register_dataframe_method\nfrom dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa\nfrom dtoolkit.util._decorator import deprecated_alias\n\n\n@register_dataframe_method\n@deprecated_alias(\n warning_msg=(\n \"{func_name}'s parameter '{old_alias}' is deprecated and will be removed in \"\n \"0.0.15. Please use the parameter '{new_alias}'. \"\n \"(Warning added DToolKit 0.0.14)\"\n ),\n few_as_key=\"ascending\",\n)\ndef values_to_dict(\n df: pd.DataFrame,\n order: list | tuple = None,\n ascending: bool = True,\n to_list: bool = True,\n) -> dict:\n \"\"\"\n Convert :attr:`~pandas.DataFrame.values` to :class:`dict`.\n\n Parameters\n ----------\n order : list or tuple, optional\n The order of keys via given columns. If ``order`` is set, ``ascending``\n will not work.\n\n ascending : bool, default True\n If True the key would use the few unique of column values first.\n\n to_list : bool, default True\n If True one element value will return :keyword:`list`.\n\n Returns\n -------\n dict\n\n See Also\n --------\n dtoolkit.accessor.series.values_to_dict\n\n Notes\n -----\n The same key of values would be merged into :class:`list`.\n\n Examples\n --------\n >>> import json\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> df = pd.DataFrame(\n ... {\n ... \"x\" : [\"A\", \"A\", \"B\", \"B\", \"B\"],\n ... \"y\" : [\"a\", \"b\", \"c\", \"d\", \"d\"],\n ... \"z\" : [\"1\", \"2\", \"3\", \"3\", \"4\"],\n ... }\n ... )\n >>> df\n x y z\n 0 A a 1\n 1 A b 2\n 2 B c 3\n 3 B d 3\n 4 B d 4\n\n Use few unique of column values as key first. The order of column unique values\n number is `x` < `y` < `z`. 
So the result will be ``{x: {y: [z]} }``.\n\n >>> print(json.dumps(df.values_to_dict(), indent=4))\n {\n \"A\": {\n \"a\": [\n \"1\"\n ],\n \"b\": [\n \"2\"\n ]\n },\n \"B\": {\n \"c\": [\n \"3\"\n ],\n \"d\": [\n \"3\",\n \"4\"\n ]\n }\n }\n\n Use many unique of column values as key first, the result will be\n ``{y: {z: [x]} }``.\n\n >>> print(json.dumps(df.values_to_dict(ascending=False), indent=4))\n {\n \"a\": {\n \"1\": [\n \"A\"\n ]\n },\n \"b\": {\n \"2\": [\n \"A\"\n ]\n },\n \"c\": {\n \"3\": [\n \"B\"\n ]\n },\n \"d\": {\n \"3\": [\n \"B\"\n ],\n \"4\": [\n \"B\"\n ]\n }\n }\n\n Output the arbitrary order like ``{z: x} or ``{x: {z: [y]} }``,\n via ``order`` argument.\n\n >>> print(json.dumps(df.values_to_dict(order=[\"x\", \"z\"]), indent=4))\n {\n \"A\": [\n \"1\",\n \"2\"\n ],\n \"B\": [\n \"3\",\n \"3\",\n \"4\"\n ]\n }\n >>> print(json.dumps(df.values_to_dict(order=[\"x\", \"z\", \"y\"]), indent=4))\n {\n \"A\": {\n \"1\": [\n \"a\"\n ],\n \"2\": [\n \"b\"\n ]\n },\n \"B\": {\n \"3\": [\n \"c\",\n \"d\"\n ],\n \"4\": [\n \"d\"\n ]\n }\n }\n\n It also could convert one column DataFrame. But ``ascending`` wouldn' work.\n The result would be ``{index: [values]}``.\n\n >>> print(json.dumps(df[[\"x\"]].values_to_dict(), indent=4))\n {\n \"0\": [\n \"A\"\n ],\n \"1\": [\n \"A\"\n ],\n \"2\": [\n \"B\"\n ],\n \"3\": [\n \"B\"\n ],\n \"4\": [\n \"B\"\n ]\n }\n\n Unpack one element value list.\n\n >>> print(json.dumps(df.values_to_dict(to_list=False), indent=4))\n {\n \"A\": {\n \"a\": \"1\",\n \"b\": \"2\"\n },\n \"B\": {\n \"c\": \"3\",\n \"d\": [\n \"3\",\n \"4\"\n ]\n }\n }\n \"\"\"\n\n if df.columns.__len__() == 1: # one columns DataFrame\n return df.to_series().values_to_dict(to_list=to_list)\n\n columns = order or (\n df.nunique()\n .sort_values(\n ascending=ascending,\n )\n .index\n )\n return _dict(df[columns], to_list=to_list)\n\n\ndef _dict(df: pd.DataFrame, to_list: bool) -> dict:\n key_column, *value_column = df.columns\n\n if df.columns.__len__() == 2: # two column DataFrame\n return df.to_series(\n index_column=key_column,\n value_column=value_column[0],\n ).values_to_dict(to_list=to_list)\n\n return {\n key: _dict(\n df.loc[df[key_column] == key, value_column],\n to_list=to_list,\n )\n for key in df[key_column].unique()\n }\n", "path": "dtoolkit/accessor/dataframe/values_to_dict.py"}]}
| 2,665 | 216 |
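
A note on the dtoolkit record above: the patch finishes a deprecation cycle by deleting the `@deprecated_alias(few_as_key="ascending")` shim and its import. For context, an old-to-new keyword alias shim of this kind typically looks like the sketch below; this is an assumed reconstruction for illustration only, not the actual `dtoolkit.util._decorator` implementation.

```python
import functools
import warnings


def deprecated_alias(warning_msg, **aliases):
    """Map old keyword names to new ones and emit a DeprecationWarning."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for old, new in aliases.items():
                if old in kwargs:
                    warnings.warn(
                        warning_msg.format(func_name=func.__name__,
                                           old_alias=old, new_alias=new),
                        DeprecationWarning, stacklevel=2)
                    kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)
        return wrapper
    return decorator
```
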
gh_patches_debug_38360
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1234
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'ls -al' incorrectly shows '..' files as duplicates of '.'
When using ls inside a cowrie instance the '..' entry is just a duplicate of the '.' entry. The group and user information is often wrong. This is a very easy to check fingerprint of cowrie.
**To Reproduce**
Steps to reproduce the behavior:
1. SSH into a cowrie instance.
2. `cd /home/richard`
3. `ls -al`
4. The '..' entry has ownership 'richard richard'
**Expected behavior**
The information for the parent folder should be retrieved. In the case of '/home' from '/home/richard' the owner of '/home' should read as 'root root'
'ls -al' incorrectly shows '..' files as duplicates of '.'
When using ls inside a cowrie instance the '..' entry is just a duplicate of the '.' entry. The group and user information is often wrong. This is a very easy to check fingerprint of cowrie.
**To Reproduce**
Steps to reproduce the behavior:
1. SSH into a cowrie instance.
2. `cd /home/richard`
3. `ls -al`
4. The '..' entry has ownership 'richard richard'
**Expected behavior**
The information for the parent folder should be retrieved. In the case of '/home' from '/home/richard' the owner of '/home' should read as 'root root'
</issue>
<code>
[start of src/cowrie/commands/ls.py]
1 # Copyright (c) 2009 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 from __future__ import absolute_import, division
5
6 import getopt
7 import stat
8 import time
9
10 import cowrie.shell.fs as fs
11 from cowrie.shell.command import HoneyPotCommand
12 from cowrie.shell.pwd import Group, Passwd
13
14 commands = {}
15
16
17 class command_ls(HoneyPotCommand):
18
19 def uid2name(self, uid):
20 try:
21 return Passwd().getpwuid(uid)["pw_name"]
22 except Exception:
23 return str(uid)
24
25 def gid2name(self, gid):
26 try:
27 return Group().getgrgid(gid)["gr_name"]
28 except Exception:
29 return str(gid)
30
31 def call(self):
32 path = self.protocol.cwd
33 paths = []
34 self.showHidden = False
35 self.showDirectories = False
36 func = self.do_ls_normal
37
38 # Parse options or display no files
39 try:
40 opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',
41 ['help', 'version', 'param'])
42 except getopt.GetoptError as err:
43 self.write("ls: {}\n".format(err))
44 self.write("Try 'ls --help' for more information.\n")
45 return
46
47 for x, a in opts:
48 if x in ('-l'):
49 func = self.do_ls_l
50 if x in ('-a'):
51 self.showHidden = True
52 if x in ('-d'):
53 self.showDirectories = True
54
55 for arg in args:
56 paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))
57
58 if not paths:
59 func(path)
60 else:
61 for path in paths:
62 func(path)
63
64 def do_ls_normal(self, path):
65 try:
66 if self.protocol.fs.isdir(path) and not self.showDirectories:
67 files = self.protocol.fs.get_path(path)[:]
68 if self.showHidden:
69 dot = self.protocol.fs.getfile(path)[:]
70 dot[fs.A_NAME] = '.'
71 files.append(dot)
72 # FIXME: should grab dotdot off the parent instead
73 dotdot = self.protocol.fs.getfile(path)[:]
74 dotdot[fs.A_NAME] = '..'
75 files.append(dotdot)
76 else:
77 files = [x for x in files if not x[fs.A_NAME].startswith('.')]
78 files.sort()
79 else:
80 files = (self.protocol.fs.getfile(path)[:],)
81 except Exception:
82 self.write(
83 'ls: cannot access %s: No such file or directory\n' % (path,))
84 return
85
86 line = [x[fs.A_NAME] for x in files]
87 if not line:
88 return
89 count = 0
90 maxlen = max([len(x) for x in line])
91
92 try:
93 wincols = self.protocol.user.windowSize[1]
94 except AttributeError:
95 wincols = 80
96
97 perline = int(wincols / (maxlen + 1))
98 for f in line:
99 if count == perline:
100 count = 0
101 self.write('\n')
102 self.write(f.ljust(maxlen + 1))
103 count += 1
104 self.write('\n')
105
106 def do_ls_l(self, path):
107 try:
108 if self.protocol.fs.isdir(path) and not self.showDirectories:
109 files = self.protocol.fs.get_path(path)[:]
110 if self.showHidden:
111 dot = self.protocol.fs.getfile(path)[:]
112 dot[fs.A_NAME] = '.'
113 files.append(dot)
114 # FIXME: should grab dotdot off the parent instead
115 dotdot = self.protocol.fs.getfile(path)[:]
116 dotdot[fs.A_NAME] = '..'
117 files.append(dotdot)
118 else:
119 files = [x for x in files if not x[fs.A_NAME].startswith('.')]
120 files.sort()
121 else:
122 files = (self.protocol.fs.getfile(path)[:],)
123 except Exception:
124 self.write(
125 'ls: cannot access %s: No such file or directory\n' % (path,))
126 return
127
128 largest = 0
129 if len(files):
130 largest = max([x[fs.A_SIZE] for x in files])
131
132 for file in files:
133 if file[fs.A_NAME].startswith('.') and not self.showHidden:
134 continue
135
136 perms = ['-'] * 10
137 if file[fs.A_MODE] & stat.S_IRUSR:
138 perms[1] = 'r'
139 if file[fs.A_MODE] & stat.S_IWUSR:
140 perms[2] = 'w'
141 if file[fs.A_MODE] & stat.S_IXUSR:
142 perms[3] = 'x'
143 if file[fs.A_MODE] & stat.S_ISUID:
144 perms[3] = 'S'
145 if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:
146 perms[3] = 's'
147
148 if file[fs.A_MODE] & stat.S_IRGRP:
149 perms[4] = 'r'
150 if file[fs.A_MODE] & stat.S_IWGRP:
151 perms[5] = 'w'
152 if file[fs.A_MODE] & stat.S_IXGRP:
153 perms[6] = 'x'
154 if file[fs.A_MODE] & stat.S_ISGID:
155 perms[6] = 'S'
156 if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:
157 perms[6] = 's'
158
159 if file[fs.A_MODE] & stat.S_IROTH:
160 perms[7] = 'r'
161 if file[fs.A_MODE] & stat.S_IWOTH:
162 perms[8] = 'w'
163 if file[fs.A_MODE] & stat.S_IXOTH:
164 perms[9] = 'x'
165 if file[fs.A_MODE] & stat.S_ISVTX:
166 perms[9] = 'T'
167 if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:
168 perms[9] = 't'
169
170 linktarget = ''
171
172 if file[fs.A_TYPE] == fs.T_DIR:
173 perms[0] = 'd'
174 elif file[fs.A_TYPE] == fs.T_LINK:
175 perms[0] = 'l'
176 linktarget = ' -> %s' % (file[fs.A_TARGET],)
177
178 perms = ''.join(perms)
179 ctime = time.localtime(file[fs.A_CTIME])
180
181 line = '%s 1 %s %s %s %s %s%s' % \
182 (perms,
183 self.uid2name(file[fs.A_UID]),
184 self.gid2name(file[fs.A_GID]),
185 str(file[fs.A_SIZE]).rjust(len(str(largest))),
186 time.strftime('%Y-%m-%d %H:%M', ctime),
187 file[fs.A_NAME],
188 linktarget)
189
190 self.write('{0}\n'.format(line))
191
192
193 commands['/bin/ls'] = command_ls
194 commands['ls'] = command_ls
195 commands['/bin/dir'] = command_ls
196 commands['dir'] = command_ls
197
[end of src/cowrie/commands/ls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/commands/ls.py b/src/cowrie/commands/ls.py
--- a/src/cowrie/commands/ls.py
+++ b/src/cowrie/commands/ls.py
@@ -4,6 +4,7 @@
from __future__ import absolute_import, division
import getopt
+import os.path
import stat
import time
@@ -61,7 +62,7 @@
for path in paths:
func(path)
- def do_ls_normal(self, path):
+ def get_dir_files(self, path):
try:
if self.protocol.fs.isdir(path) and not self.showDirectories:
files = self.protocol.fs.get_path(path)[:]
@@ -69,8 +70,9 @@
dot = self.protocol.fs.getfile(path)[:]
dot[fs.A_NAME] = '.'
files.append(dot)
- # FIXME: should grab dotdot off the parent instead
- dotdot = self.protocol.fs.getfile(path)[:]
+ dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]
+ if not dotdot:
+ dotdot = self.protocol.fs.getfile(path)[:]
dotdot[fs.A_NAME] = '..'
files.append(dotdot)
else:
@@ -82,6 +84,10 @@
self.write(
'ls: cannot access %s: No such file or directory\n' % (path,))
return
+ return files
+
+ def do_ls_normal(self, path):
+ files = self.get_dir_files(path)
line = [x[fs.A_NAME] for x in files]
if not line:
@@ -104,26 +110,7 @@
self.write('\n')
def do_ls_l(self, path):
- try:
- if self.protocol.fs.isdir(path) and not self.showDirectories:
- files = self.protocol.fs.get_path(path)[:]
- if self.showHidden:
- dot = self.protocol.fs.getfile(path)[:]
- dot[fs.A_NAME] = '.'
- files.append(dot)
- # FIXME: should grab dotdot off the parent instead
- dotdot = self.protocol.fs.getfile(path)[:]
- dotdot[fs.A_NAME] = '..'
- files.append(dotdot)
- else:
- files = [x for x in files if not x[fs.A_NAME].startswith('.')]
- files.sort()
- else:
- files = (self.protocol.fs.getfile(path)[:],)
- except Exception:
- self.write(
- 'ls: cannot access %s: No such file or directory\n' % (path,))
- return
+ files = self.get_dir_files(path)
largest = 0
if len(files):
|
{"golden_diff": "diff --git a/src/cowrie/commands/ls.py b/src/cowrie/commands/ls.py\n--- a/src/cowrie/commands/ls.py\n+++ b/src/cowrie/commands/ls.py\n@@ -4,6 +4,7 @@\n from __future__ import absolute_import, division\n \n import getopt\n+import os.path\n import stat\n import time\n \n@@ -61,7 +62,7 @@\n for path in paths:\n func(path)\n \n- def do_ls_normal(self, path):\n+ def get_dir_files(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n@@ -69,8 +70,9 @@\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n- # FIXME: should grab dotdot off the parent instead\n- dotdot = self.protocol.fs.getfile(path)[:]\n+ dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]\n+ if not dotdot:\n+ dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n@@ -82,6 +84,10 @@\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n+ return files\n+\n+ def do_ls_normal(self, path):\n+ files = self.get_dir_files(path)\n \n line = [x[fs.A_NAME] for x in files]\n if not line:\n@@ -104,26 +110,7 @@\n self.write('\\n')\n \n def do_ls_l(self, path):\n- try:\n- if self.protocol.fs.isdir(path) and not self.showDirectories:\n- files = self.protocol.fs.get_path(path)[:]\n- if self.showHidden:\n- dot = self.protocol.fs.getfile(path)[:]\n- dot[fs.A_NAME] = '.'\n- files.append(dot)\n- # FIXME: should grab dotdot off the parent instead\n- dotdot = self.protocol.fs.getfile(path)[:]\n- dotdot[fs.A_NAME] = '..'\n- files.append(dotdot)\n- else:\n- files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n- files.sort()\n- else:\n- files = (self.protocol.fs.getfile(path)[:],)\n- except Exception:\n- self.write(\n- 'ls: cannot access %s: No such file or directory\\n' % (path,))\n- return\n+ files = self.get_dir_files(path)\n \n largest = 0\n if len(files):\n", "issue": "'ls -al' incorrectly shows '..' files as duplicates of '.'\nWhen using ls inside a cowrie instance the '..' entry is just a duplicate of the '.' entry. The group and user information is often wrong. This is a very easy to check fingerprint of cowrie. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. SSH into a cowrie instance.\r\n2. `cd /home/richard`\r\n3. `ls -al`\r\n4. The '..' entry has ownership 'richard richard'\r\n\r\n**Expected behavior**\r\nThe information for the parent folder should be retrieved. In the case of '/home' from '/home/richard' the owner of '/home' should read as 'root root'\r\n\n'ls -al' incorrectly shows '..' files as duplicates of '.'\nWhen using ls inside a cowrie instance the '..' entry is just a duplicate of the '.' entry. The group and user information is often wrong. This is a very easy to check fingerprint of cowrie. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. SSH into a cowrie instance.\r\n2. `cd /home/richard`\r\n3. `ls -al`\r\n4. The '..' entry has ownership 'richard richard'\r\n\r\n**Expected behavior**\r\nThe information for the parent folder should be retrieved. 
In the case of '/home' from '/home/richard' the owner of '/home' should read as 'root root'\r\n\n", "before_files": [{"content": "# Copyright (c) 2009 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\nfrom __future__ import absolute_import, division\n\nimport getopt\nimport stat\nimport time\n\nimport cowrie.shell.fs as fs\nfrom cowrie.shell.command import HoneyPotCommand\nfrom cowrie.shell.pwd import Group, Passwd\n\ncommands = {}\n\n\nclass command_ls(HoneyPotCommand):\n\n def uid2name(self, uid):\n try:\n return Passwd().getpwuid(uid)[\"pw_name\"]\n except Exception:\n return str(uid)\n\n def gid2name(self, gid):\n try:\n return Group().getgrgid(gid)[\"gr_name\"]\n except Exception:\n return str(gid)\n\n def call(self):\n path = self.protocol.cwd\n paths = []\n self.showHidden = False\n self.showDirectories = False\n func = self.do_ls_normal\n\n # Parse options or display no files\n try:\n opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',\n ['help', 'version', 'param'])\n except getopt.GetoptError as err:\n self.write(\"ls: {}\\n\".format(err))\n self.write(\"Try 'ls --help' for more information.\\n\")\n return\n\n for x, a in opts:\n if x in ('-l'):\n func = self.do_ls_l\n if x in ('-a'):\n self.showHidden = True\n if x in ('-d'):\n self.showDirectories = True\n\n for arg in args:\n paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))\n\n if not paths:\n func(path)\n else:\n for path in paths:\n func(path)\n\n def do_ls_normal(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n if self.showHidden:\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n # FIXME: should grab dotdot off the parent instead\n dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n files.sort()\n else:\n files = (self.protocol.fs.getfile(path)[:],)\n except Exception:\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n\n line = [x[fs.A_NAME] for x in files]\n if not line:\n return\n count = 0\n maxlen = max([len(x) for x in line])\n\n try:\n wincols = self.protocol.user.windowSize[1]\n except AttributeError:\n wincols = 80\n\n perline = int(wincols / (maxlen + 1))\n for f in line:\n if count == perline:\n count = 0\n self.write('\\n')\n self.write(f.ljust(maxlen + 1))\n count += 1\n self.write('\\n')\n\n def do_ls_l(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n if self.showHidden:\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n # FIXME: should grab dotdot off the parent instead\n dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n files.sort()\n else:\n files = (self.protocol.fs.getfile(path)[:],)\n except Exception:\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n\n largest = 0\n if len(files):\n largest = max([x[fs.A_SIZE] for x in files])\n\n for file in files:\n if file[fs.A_NAME].startswith('.') and not self.showHidden:\n continue\n\n perms = ['-'] * 10\n if file[fs.A_MODE] & stat.S_IRUSR:\n perms[1] = 'r'\n if file[fs.A_MODE] & stat.S_IWUSR:\n perms[2] = 'w'\n if file[fs.A_MODE] & 
stat.S_IXUSR:\n perms[3] = 'x'\n if file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 'S'\n if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 's'\n\n if file[fs.A_MODE] & stat.S_IRGRP:\n perms[4] = 'r'\n if file[fs.A_MODE] & stat.S_IWGRP:\n perms[5] = 'w'\n if file[fs.A_MODE] & stat.S_IXGRP:\n perms[6] = 'x'\n if file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 'S'\n if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 's'\n\n if file[fs.A_MODE] & stat.S_IROTH:\n perms[7] = 'r'\n if file[fs.A_MODE] & stat.S_IWOTH:\n perms[8] = 'w'\n if file[fs.A_MODE] & stat.S_IXOTH:\n perms[9] = 'x'\n if file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 'T'\n if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 't'\n\n linktarget = ''\n\n if file[fs.A_TYPE] == fs.T_DIR:\n perms[0] = 'd'\n elif file[fs.A_TYPE] == fs.T_LINK:\n perms[0] = 'l'\n linktarget = ' -> %s' % (file[fs.A_TARGET],)\n\n perms = ''.join(perms)\n ctime = time.localtime(file[fs.A_CTIME])\n\n line = '%s 1 %s %s %s %s %s%s' % \\\n (perms,\n self.uid2name(file[fs.A_UID]),\n self.gid2name(file[fs.A_GID]),\n str(file[fs.A_SIZE]).rjust(len(str(largest))),\n time.strftime('%Y-%m-%d %H:%M', ctime),\n file[fs.A_NAME],\n linktarget)\n\n self.write('{0}\\n'.format(line))\n\n\ncommands['/bin/ls'] = command_ls\ncommands['ls'] = command_ls\ncommands['/bin/dir'] = command_ls\ncommands['dir'] = command_ls\n", "path": "src/cowrie/commands/ls.py"}]}
| 2,930 | 612 |
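
A note on the cowrie record above: the fix derives the `..` entry from the parent directory's metadata (`os.path.split(path)[0]`) and only falls back to the directory itself when no parent entry exists, so `ls -al` reports the parent's real owner. Below is a simplified, runnable sketch of the path half of that logic; the real patch's fallback checks the honeypot filesystem lookup result rather than comparing paths.

```python
import os.path


def parent_listing_path(path):
    """Path whose metadata should back the '..' entry in a directory listing."""
    parent = os.path.split(path)[0]
    return parent if parent and parent != path else path


assert parent_listing_path("/home/richard") == "/home"  # '..' shows /home's owner
assert parent_listing_path("/") == "/"                  # root is its own parent
```
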
gh_patches_debug_25470
|
rasdani/github-patches
|
git_diff
|
hylang__hy-2188
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Monkey-patching `py.path.local.pyimport` should no longer be necessary
Hi
I noticed **py** is used in conftest.py but is not declared in any configuration files.
In addition, the py library itself is deprecated; its [documentation](https://pypi.org/project/py/) says: "py.path: uniform local and svn path objects -> please use pathlib/pathlib2 instead".
It may therefore be necessary to migrate to the new dependency, pathlib/pathlib2, and add it to the configuration files.
</issue>
<code>
[start of conftest.py]
1 import sys
2 import os
3 import importlib
4 from operator import or_
5 from functools import reduce
6
7 import py
8 import pytest
9 import hy
10 from hy._compat import PY3_8, PY3_10
11
12 NATIVE_TESTS = os.path.join("", "tests", "native_tests", "")
13
14 _fspath_pyimport = py.path.local.pyimport
15
16 # https://github.com/hylang/hy/issues/2029
17 os.environ.pop("HYSTARTUP", None)
18
19
20 def pytest_ignore_collect(path, config):
21 versions = [
22 (sys.version_info < (3, 8), "sub_py3_7_only"),
23 (PY3_8, "py3_8_only"),
24 (PY3_10, "py3_10_only"),
25 ]
26
27 return reduce(
28 or_,
29 (name in path.basename and not condition for condition, name in versions),
30 ) or None
31
32
33 def pyimport_patch_mismatch(self, **kwargs):
34 """Lame fix for https://github.com/pytest-dev/py/issues/195"""
35 try:
36 return _fspath_pyimport(self, **kwargs)
37 except py.path.local.ImportMismatchError:
38 pkgpath = self.pypkgpath()
39 if pkgpath is None:
40 pkgroot = self.dirpath()
41 modname = self.purebasename
42 else:
43 pkgroot = pkgpath.dirpath()
44 names = self.new(ext="").relto(pkgroot).split(self.sep)
45 if names[-1] == "__init__":
46 names.pop()
47 modname = ".".join(names)
48
49 res = importlib.import_module(modname)
50
51 return res
52
53
54 py.path.local.pyimport = pyimport_patch_mismatch
55
56
57 def pytest_collect_file(parent, path):
58 if (path.ext == ".hy"
59 and NATIVE_TESTS in path.dirname + os.sep
60 and path.basename != "__init__.hy"):
61
62 if hasattr(pytest.Module, "from_parent"):
63 pytest_mod = pytest.Module.from_parent(parent, fspath=path)
64 else:
65 pytest_mod = pytest.Module(path, parent)
66 return pytest_mod
67
[end of conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -4,15 +4,12 @@
from operator import or_
from functools import reduce
-import py
import pytest
import hy
from hy._compat import PY3_8, PY3_10
NATIVE_TESTS = os.path.join("", "tests", "native_tests", "")
-_fspath_pyimport = py.path.local.pyimport
-
# https://github.com/hylang/hy/issues/2029
os.environ.pop("HYSTARTUP", None)
@@ -30,30 +27,6 @@
) or None
-def pyimport_patch_mismatch(self, **kwargs):
- """Lame fix for https://github.com/pytest-dev/py/issues/195"""
- try:
- return _fspath_pyimport(self, **kwargs)
- except py.path.local.ImportMismatchError:
- pkgpath = self.pypkgpath()
- if pkgpath is None:
- pkgroot = self.dirpath()
- modname = self.purebasename
- else:
- pkgroot = pkgpath.dirpath()
- names = self.new(ext="").relto(pkgroot).split(self.sep)
- if names[-1] == "__init__":
- names.pop()
- modname = ".".join(names)
-
- res = importlib.import_module(modname)
-
- return res
-
-
-py.path.local.pyimport = pyimport_patch_mismatch
-
-
def pytest_collect_file(parent, path):
if (path.ext == ".hy"
and NATIVE_TESTS in path.dirname + os.sep
|
{"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -4,15 +4,12 @@\n from operator import or_\n from functools import reduce\n \n-import py\n import pytest\n import hy\n from hy._compat import PY3_8, PY3_10\n \n NATIVE_TESTS = os.path.join(\"\", \"tests\", \"native_tests\", \"\")\n \n-_fspath_pyimport = py.path.local.pyimport\n-\n # https://github.com/hylang/hy/issues/2029\n os.environ.pop(\"HYSTARTUP\", None)\n \n@@ -30,30 +27,6 @@\n ) or None\n \n \n-def pyimport_patch_mismatch(self, **kwargs):\n- \"\"\"Lame fix for https://github.com/pytest-dev/py/issues/195\"\"\"\n- try:\n- return _fspath_pyimport(self, **kwargs)\n- except py.path.local.ImportMismatchError:\n- pkgpath = self.pypkgpath()\n- if pkgpath is None:\n- pkgroot = self.dirpath()\n- modname = self.purebasename\n- else:\n- pkgroot = pkgpath.dirpath()\n- names = self.new(ext=\"\").relto(pkgroot).split(self.sep)\n- if names[-1] == \"__init__\":\n- names.pop()\n- modname = \".\".join(names)\n-\n- res = importlib.import_module(modname)\n-\n- return res\n-\n-\n-py.path.local.pyimport = pyimport_patch_mismatch\n-\n-\n def pytest_collect_file(parent, path):\n if (path.ext == \".hy\"\n and NATIVE_TESTS in path.dirname + os.sep\n", "issue": "Monkey-patching `py.path.local.pyimport` should no longer be necessary\nHi\r\nI noticed **py** is used in conftest.py but not declared in any configuration files .\r\nIn addition, py as a Python library is deprecated as its [documentation](https://pypi.org/project/py/) \"py.path: uniform local and svn path objects -> please use pathlib/pathlib2 instead\"\r\n\r\nMaybe it is necessary to migrate to new dependency-pathlib2 and add it to configuration files.\n", "before_files": [{"content": "import sys\nimport os\nimport importlib\nfrom operator import or_\nfrom functools import reduce\n\nimport py\nimport pytest\nimport hy\nfrom hy._compat import PY3_8, PY3_10\n\nNATIVE_TESTS = os.path.join(\"\", \"tests\", \"native_tests\", \"\")\n\n_fspath_pyimport = py.path.local.pyimport\n\n# https://github.com/hylang/hy/issues/2029\nos.environ.pop(\"HYSTARTUP\", None)\n\n\ndef pytest_ignore_collect(path, config):\n versions = [\n (sys.version_info < (3, 8), \"sub_py3_7_only\"),\n (PY3_8, \"py3_8_only\"),\n (PY3_10, \"py3_10_only\"),\n ]\n\n return reduce(\n or_,\n (name in path.basename and not condition for condition, name in versions),\n ) or None\n\n\ndef pyimport_patch_mismatch(self, **kwargs):\n \"\"\"Lame fix for https://github.com/pytest-dev/py/issues/195\"\"\"\n try:\n return _fspath_pyimport(self, **kwargs)\n except py.path.local.ImportMismatchError:\n pkgpath = self.pypkgpath()\n if pkgpath is None:\n pkgroot = self.dirpath()\n modname = self.purebasename\n else:\n pkgroot = pkgpath.dirpath()\n names = self.new(ext=\"\").relto(pkgroot).split(self.sep)\n if names[-1] == \"__init__\":\n names.pop()\n modname = \".\".join(names)\n\n res = importlib.import_module(modname)\n\n return res\n\n\npy.path.local.pyimport = pyimport_patch_mismatch\n\n\ndef pytest_collect_file(parent, path):\n if (path.ext == \".hy\"\n and NATIVE_TESTS in path.dirname + os.sep\n and path.basename != \"__init__.hy\"):\n\n if hasattr(pytest.Module, \"from_parent\"):\n pytest_mod = pytest.Module.from_parent(parent, fspath=path)\n else:\n pytest_mod = pytest.Module(path, parent)\n return pytest_mod\n", "path": "conftest.py"}]}
| 1,214 | 371 |
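
A note on the hylang record above: the patch simply removes the `py.path.local.pyimport` monkey-patch and the `py` import from conftest.py, since the pytest bug it worked around no longer applies. If a path-based import is ever needed without `py`, the standard-library recipe below does the equivalent job; it is illustrative only and is not part of the merged patch.

```python
import importlib.util
import sys
from pathlib import Path


def import_from_path(path, module_name=None):
    """Import a module directly from a file path using importlib."""
    path = Path(path)
    name = module_name or path.stem
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register before exec, as importlib itself does
    spec.loader.exec_module(module)
    return module
```
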
gh_patches_debug_36132
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-1823
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeprecationWarning: the imp module is deprecated in favour of importlib
When running a Django project using ddtrace with [warnings enabled](https://docs.python.org/3/using/cmdline.html#cmdoption-w), this warning is emitted:
## Issue
> `/usr/local/lib/python3.7/dist-packages/ddtrace/bootstrap/sitecustomize.py:7`: `DeprecationWarning`: the `imp` module is deprecated in favour of `importlib`; see the [module's documentation](https://docs.python.org/3/library/imp.html) for alternative uses
## Details
The line in question:
https://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L7
How it's used:
https://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L103-L120
Documentation note for [`imp.find_module()`](https://docs.python.org/3/library/imp.html#imp.find_module):
> Deprecated since version 3.3: Use `importlib.util.find_spec()` instead unless Python 3.3 compatibility is required, in which case use `importlib.find_loader()`. For example usage of the former case, see the Examples section of the `importlib` documentation.
Documentation note for [`imp.load_module()`](https://docs.python.org/3/library/imp.html#imp.load_module):
> Deprecated since version 3.3: If previously used in conjunction with `imp.find_module()` then consider using `importlib.import_module()`, otherwise use the loader returned by the replacement you chose for `imp.find_module()`. If you called `imp.load_module()` and related functions directly with file path arguments then use a combination of `importlib.util.spec_from_file_location()` and `importlib.util.module_from_spec()`. See the Examples section of the `importlib` documentation for details of the various approaches.
## Resolution
I suspect [this example](https://docs.python.org/3/library/importlib.html#approximating-importlib-import-module) could be worth building off of to do the necessary path customization.
</issue>
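The resolution hinted at above maps onto a small, documented `importlib` recipe: locate a spec restricted to an explicit search path, then build and execute the module from it. The sketch below is only an approximation of what such a replacement could look like (the helper name is invented here); it is not presented as the change that ddtrace ultimately made.

```python
import importlib.machinery
import importlib.util


def load_from_dirs(name, dirs):
    """Sketch of an imp.find_module()/imp.load_module() stand-in: search only
    the given directories for ``name`` and execute the module found there."""
    spec = importlib.machinery.PathFinder.find_spec(name, path=dirs)
    if spec is None:
        raise ImportError(f"{name!r} not found in {dirs!r}")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```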
<code>
[start of ddtrace/bootstrap/sitecustomize.py]
1 """
2 Bootstrapping code that is run when using the `ddtrace-run` Python entrypoint
3 Add all monkey-patching that needs to run by default here
4 """
5 import logging
6 import os
7 import imp
8 import sys
9
10 from ddtrace.utils.formats import asbool, get_env, parse_tags_str
11 from ddtrace.internal.logger import get_logger
12 from ddtrace import config, constants
13 from ddtrace.tracer import debug_mode, DD_LOG_FORMAT
14
15
16 if config.logs_injection:
17 # immediately patch logging if trace id injected
18 from ddtrace import patch
19
20 patch(logging=True)
21
22
23 # DEV: Once basicConfig is called here, future calls to it cannot be used to
24 # change the formatter since it applies the formatter to the root handler only
25 # upon initializing it the first time.
26 # See https://github.com/python/cpython/blob/112e4afd582515fcdcc0cde5012a4866e5cfda12/Lib/logging/__init__.py#L1550
27 # Debug mode from the tracer will do a basicConfig so only need to do this otherwise
28 if not debug_mode:
29 if config.logs_injection:
30 logging.basicConfig(format=DD_LOG_FORMAT)
31 else:
32 logging.basicConfig()
33
34 log = get_logger(__name__)
35
36 EXTRA_PATCHED_MODULES = {
37 "bottle": True,
38 "django": True,
39 "falcon": True,
40 "flask": True,
41 "pylons": True,
42 "pyramid": True,
43 }
44
45
46 def update_patched_modules():
47 modules_to_patch = os.environ.get("DATADOG_PATCH_MODULES")
48 if not modules_to_patch:
49 return
50
51 modules = parse_tags_str(modules_to_patch)
52 for module, should_patch in modules.items():
53 EXTRA_PATCHED_MODULES[module] = asbool(should_patch)
54
55
56 try:
57 from ddtrace import tracer
58
59 # Respect DATADOG_* environment variables in global tracer configuration
60 # TODO: these variables are deprecated; use utils method and update our documentation
61 # correct prefix should be DD_*
62 hostname = os.environ.get("DD_AGENT_HOST", os.environ.get("DATADOG_TRACE_AGENT_HOSTNAME"))
63 port = os.environ.get("DATADOG_TRACE_AGENT_PORT")
64 priority_sampling = os.environ.get("DATADOG_PRIORITY_SAMPLING")
65 profiling = asbool(os.environ.get("DD_PROFILING_ENABLED", False))
66
67 if profiling:
68 import ddtrace.profiling.auto # noqa: F401
69
70 opts = {}
71
72 if asbool(os.environ.get("DATADOG_TRACE_ENABLED", True)):
73 patch = True
74 else:
75 patch = False
76 opts["enabled"] = False
77
78 if hostname:
79 opts["hostname"] = hostname
80 if port:
81 opts["port"] = int(port)
82 if priority_sampling:
83 opts["priority_sampling"] = asbool(priority_sampling)
84
85 opts["collect_metrics"] = asbool(get_env("runtime_metrics", "enabled"))
86
87 if opts:
88 tracer.configure(**opts)
89
90 if patch:
91 update_patched_modules()
92 from ddtrace import patch_all
93
94 patch_all(**EXTRA_PATCHED_MODULES)
95
96 if "DATADOG_ENV" in os.environ:
97 tracer.set_tags({constants.ENV_KEY: os.environ["DATADOG_ENV"]})
98
99 if "DD_TRACE_GLOBAL_TAGS" in os.environ:
100 env_tags = os.getenv("DD_TRACE_GLOBAL_TAGS")
101 tracer.set_tags(parse_tags_str(env_tags))
102
103 # Ensure sitecustomize.py is properly called if available in application directories:
104 # * exclude `bootstrap_dir` from the search
105 # * find a user `sitecustomize.py` module
106 # * import that module via `imp`
107 bootstrap_dir = os.path.dirname(__file__)
108 path = list(sys.path)
109
110 if bootstrap_dir in path:
111 path.remove(bootstrap_dir)
112
113 try:
114 (f, path, description) = imp.find_module("sitecustomize", path)
115 except ImportError:
116 pass
117 else:
118 # `sitecustomize.py` found, load it
119 log.debug("sitecustomize from user found in: %s", path)
120 imp.load_module("sitecustomize", f, path, description)
121
122 # Loading status used in tests to detect if the `sitecustomize` has been
123 # properly loaded without exceptions. This must be the last action in the module
124 # when the execution ends with a success.
125 loaded = True
126 except Exception:
127 loaded = False
128 log.warning("error configuring Datadog tracing", exc_info=True)
129
[end of ddtrace/bootstrap/sitecustomize.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/bootstrap/sitecustomize.py b/ddtrace/bootstrap/sitecustomize.py
--- a/ddtrace/bootstrap/sitecustomize.py
+++ b/ddtrace/bootstrap/sitecustomize.py
@@ -4,7 +4,6 @@
"""
import logging
import os
-import imp
import sys
from ddtrace.utils.formats import asbool, get_env, parse_tags_str
@@ -100,24 +99,40 @@
env_tags = os.getenv("DD_TRACE_GLOBAL_TAGS")
tracer.set_tags(parse_tags_str(env_tags))
- # Ensure sitecustomize.py is properly called if available in application directories:
- # * exclude `bootstrap_dir` from the search
- # * find a user `sitecustomize.py` module
- # * import that module via `imp`
+ # Check for and import any sitecustomize that would have normally been used
+ # had ddtrace-run not been used.
bootstrap_dir = os.path.dirname(__file__)
- path = list(sys.path)
-
- if bootstrap_dir in path:
- path.remove(bootstrap_dir)
-
- try:
- (f, path, description) = imp.find_module("sitecustomize", path)
- except ImportError:
- pass
+ if bootstrap_dir in sys.path:
+ index = sys.path.index(bootstrap_dir)
+ del sys.path[index]
+
+ # NOTE: this reference to the module is crucial in Python 2.
+ # Without it the current module gets gc'd and all subsequent references
+ # will be `None`.
+ ddtrace_sitecustomize = sys.modules["sitecustomize"]
+ del sys.modules["sitecustomize"]
+ try:
+ import sitecustomize # noqa
+ except ImportError:
+ # If an additional sitecustomize is not found then put the ddtrace
+ # sitecustomize back.
+ log.debug("additional sitecustomize not found")
+ sys.modules["sitecustomize"] = ddtrace_sitecustomize
+ else:
+ log.debug("additional sitecustomize found in: %s", sys.path)
+ finally:
+ # Always reinsert the ddtrace bootstrap directory to the path so
+ # that introspection and debugging the application makes sense.
+ # Note that this does not interfere with imports since a user
+ # sitecustomize, if it exists, will be imported.
+ sys.path.insert(index, bootstrap_dir)
else:
- # `sitecustomize.py` found, load it
- log.debug("sitecustomize from user found in: %s", path)
- imp.load_module("sitecustomize", f, path, description)
+ try:
+ import sitecustomize # noqa
+ except ImportError:
+ log.debug("additional sitecustomize not found")
+ else:
+ log.debug("additional sitecustomize found in: %s", sys.path)
# Loading status used in tests to detect if the `sitecustomize` has been
# properly loaded without exceptions. This must be the last action in the module
|
{"golden_diff": "diff --git a/ddtrace/bootstrap/sitecustomize.py b/ddtrace/bootstrap/sitecustomize.py\n--- a/ddtrace/bootstrap/sitecustomize.py\n+++ b/ddtrace/bootstrap/sitecustomize.py\n@@ -4,7 +4,6 @@\n \"\"\"\n import logging\n import os\n-import imp\n import sys\n \n from ddtrace.utils.formats import asbool, get_env, parse_tags_str\n@@ -100,24 +99,40 @@\n env_tags = os.getenv(\"DD_TRACE_GLOBAL_TAGS\")\n tracer.set_tags(parse_tags_str(env_tags))\n \n- # Ensure sitecustomize.py is properly called if available in application directories:\n- # * exclude `bootstrap_dir` from the search\n- # * find a user `sitecustomize.py` module\n- # * import that module via `imp`\n+ # Check for and import any sitecustomize that would have normally been used\n+ # had ddtrace-run not been used.\n bootstrap_dir = os.path.dirname(__file__)\n- path = list(sys.path)\n-\n- if bootstrap_dir in path:\n- path.remove(bootstrap_dir)\n-\n- try:\n- (f, path, description) = imp.find_module(\"sitecustomize\", path)\n- except ImportError:\n- pass\n+ if bootstrap_dir in sys.path:\n+ index = sys.path.index(bootstrap_dir)\n+ del sys.path[index]\n+\n+ # NOTE: this reference to the module is crucial in Python 2.\n+ # Without it the current module gets gc'd and all subsequent references\n+ # will be `None`.\n+ ddtrace_sitecustomize = sys.modules[\"sitecustomize\"]\n+ del sys.modules[\"sitecustomize\"]\n+ try:\n+ import sitecustomize # noqa\n+ except ImportError:\n+ # If an additional sitecustomize is not found then put the ddtrace\n+ # sitecustomize back.\n+ log.debug(\"additional sitecustomize not found\")\n+ sys.modules[\"sitecustomize\"] = ddtrace_sitecustomize\n+ else:\n+ log.debug(\"additional sitecustomize found in: %s\", sys.path)\n+ finally:\n+ # Always reinsert the ddtrace bootstrap directory to the path so\n+ # that introspection and debugging the application makes sense.\n+ # Note that this does not interfere with imports since a user\n+ # sitecustomize, if it exists, will be imported.\n+ sys.path.insert(index, bootstrap_dir)\n else:\n- # `sitecustomize.py` found, load it\n- log.debug(\"sitecustomize from user found in: %s\", path)\n- imp.load_module(\"sitecustomize\", f, path, description)\n+ try:\n+ import sitecustomize # noqa\n+ except ImportError:\n+ log.debug(\"additional sitecustomize not found\")\n+ else:\n+ log.debug(\"additional sitecustomize found in: %s\", sys.path)\n \n # Loading status used in tests to detect if the `sitecustomize` has been\n # properly loaded without exceptions. 
This must be the last action in the module\n", "issue": "DeprecationWarning: the imp module is deprecated in favour of importlib\nWhen running a Django project using ddtrace with [warnings enabled](https://docs.python.org/3/using/cmdline.html#cmdoption-w), this warning is emitted:\r\n\r\n## Issue\r\n\r\n> `/usr/local/lib/python3.7/dist-packages/ddtrace/bootstrap/sitecustomize.py:7`: `DeprecationWarning`: the `imp` module is deprecated in favour of `importlib`; see the [module's documentation](https://docs.python.org/3/library/imp.html) for alternative uses\r\n\r\n## Details\r\n\r\nThe line in question:\r\n\r\nhttps://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L7\r\n\r\nHow it's used: \r\n\r\nhttps://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L103-L120\r\n\r\nDocumentation note for [`imp.find_module()`](https://docs.python.org/3/library/imp.html#imp.find_module):\r\n\r\n> Deprecated since version 3.3: Use `importlib.util.find_spec()` instead unless Python 3.3 compatibility is required, in which case use `importlib.find_loader()`. For example usage of the former case, see the Examples section of the `importlib` documentation.\r\n\r\nDocumentation note for [`imp.load_module()`](https://docs.python.org/3/library/imp.html#imp.load_module):\r\n\r\n> Deprecated since version 3.3: If previously used in conjunction with `imp.find_module()` then consider using `importlib.import_module()`, otherwise use the loader returned by the replacement you chose for `imp.find_module()`. If you called `imp.load_module()` and related functions directly with file path arguments then use a combination of `importlib.util.spec_from_file_location()` and `importlib.util.module_from_spec()`. 
See the Examples section of the `importlib` documentation for details of the various approaches.\r\n\r\n## Resolution\r\n\r\nI suspect [this example](https://docs.python.org/3/library/importlib.html#approximating-importlib-import-module) could be worth building off of to do the necessary path customization.\n", "before_files": [{"content": "\"\"\"\nBootstrapping code that is run when using the `ddtrace-run` Python entrypoint\nAdd all monkey-patching that needs to run by default here\n\"\"\"\nimport logging\nimport os\nimport imp\nimport sys\n\nfrom ddtrace.utils.formats import asbool, get_env, parse_tags_str\nfrom ddtrace.internal.logger import get_logger\nfrom ddtrace import config, constants\nfrom ddtrace.tracer import debug_mode, DD_LOG_FORMAT\n\n\nif config.logs_injection:\n # immediately patch logging if trace id injected\n from ddtrace import patch\n\n patch(logging=True)\n\n\n# DEV: Once basicConfig is called here, future calls to it cannot be used to\n# change the formatter since it applies the formatter to the root handler only\n# upon initializing it the first time.\n# See https://github.com/python/cpython/blob/112e4afd582515fcdcc0cde5012a4866e5cfda12/Lib/logging/__init__.py#L1550\n# Debug mode from the tracer will do a basicConfig so only need to do this otherwise\nif not debug_mode:\n if config.logs_injection:\n logging.basicConfig(format=DD_LOG_FORMAT)\n else:\n logging.basicConfig()\n\nlog = get_logger(__name__)\n\nEXTRA_PATCHED_MODULES = {\n \"bottle\": True,\n \"django\": True,\n \"falcon\": True,\n \"flask\": True,\n \"pylons\": True,\n \"pyramid\": True,\n}\n\n\ndef update_patched_modules():\n modules_to_patch = os.environ.get(\"DATADOG_PATCH_MODULES\")\n if not modules_to_patch:\n return\n\n modules = parse_tags_str(modules_to_patch)\n for module, should_patch in modules.items():\n EXTRA_PATCHED_MODULES[module] = asbool(should_patch)\n\n\ntry:\n from ddtrace import tracer\n\n # Respect DATADOG_* environment variables in global tracer configuration\n # TODO: these variables are deprecated; use utils method and update our documentation\n # correct prefix should be DD_*\n hostname = os.environ.get(\"DD_AGENT_HOST\", os.environ.get(\"DATADOG_TRACE_AGENT_HOSTNAME\"))\n port = os.environ.get(\"DATADOG_TRACE_AGENT_PORT\")\n priority_sampling = os.environ.get(\"DATADOG_PRIORITY_SAMPLING\")\n profiling = asbool(os.environ.get(\"DD_PROFILING_ENABLED\", False))\n\n if profiling:\n import ddtrace.profiling.auto # noqa: F401\n\n opts = {}\n\n if asbool(os.environ.get(\"DATADOG_TRACE_ENABLED\", True)):\n patch = True\n else:\n patch = False\n opts[\"enabled\"] = False\n\n if hostname:\n opts[\"hostname\"] = hostname\n if port:\n opts[\"port\"] = int(port)\n if priority_sampling:\n opts[\"priority_sampling\"] = asbool(priority_sampling)\n\n opts[\"collect_metrics\"] = asbool(get_env(\"runtime_metrics\", \"enabled\"))\n\n if opts:\n tracer.configure(**opts)\n\n if patch:\n update_patched_modules()\n from ddtrace import patch_all\n\n patch_all(**EXTRA_PATCHED_MODULES)\n\n if \"DATADOG_ENV\" in os.environ:\n tracer.set_tags({constants.ENV_KEY: os.environ[\"DATADOG_ENV\"]})\n\n if \"DD_TRACE_GLOBAL_TAGS\" in os.environ:\n env_tags = os.getenv(\"DD_TRACE_GLOBAL_TAGS\")\n tracer.set_tags(parse_tags_str(env_tags))\n\n # Ensure sitecustomize.py is properly called if available in application directories:\n # * exclude `bootstrap_dir` from the search\n # * find a user `sitecustomize.py` module\n # * import that module via `imp`\n bootstrap_dir = os.path.dirname(__file__)\n path = 
list(sys.path)\n\n if bootstrap_dir in path:\n path.remove(bootstrap_dir)\n\n try:\n (f, path, description) = imp.find_module(\"sitecustomize\", path)\n except ImportError:\n pass\n else:\n # `sitecustomize.py` found, load it\n log.debug(\"sitecustomize from user found in: %s\", path)\n imp.load_module(\"sitecustomize\", f, path, description)\n\n # Loading status used in tests to detect if the `sitecustomize` has been\n # properly loaded without exceptions. This must be the last action in the module\n # when the execution ends with a success.\n loaded = True\nexcept Exception:\n loaded = False\n log.warning(\"error configuring Datadog tracing\", exc_info=True)\n", "path": "ddtrace/bootstrap/sitecustomize.py"}]}
| 2,307 | 649 |
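Note that the golden diff above does not port the `imp` calls to `importlib` at all; instead it chains in a user-level `sitecustomize` by temporarily hiding ddtrace's own module. A condensed sketch of that core idea (not the exact ddtrace code, with error handling trimmed):

```python
import sys


def chain_user_sitecustomize(bootstrap_dir):
    """Condensed sketch: hide this package's sitecustomize so a user-level one,
    if present, gets imported as well, then restore the search path."""
    if bootstrap_dir not in sys.path:
        return
    index = sys.path.index(bootstrap_dir)
    del sys.path[index]
    ours = sys.modules.pop("sitecustomize", None)
    try:
        import sitecustomize  # noqa: F401  (the user's module, if any)
    except ImportError:
        if ours is not None:
            sys.modules["sitecustomize"] = ours  # no user module: put ours back
    finally:
        sys.path.insert(index, bootstrap_dir)  # keep the bootstrap dir visible
```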
gh_patches_debug_20168
|
rasdani/github-patches
|
git_diff
|
stephenmcd__mezzanine-1259
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
overextends tag broken in Django 1.7+1.8
Looks like the changes made to `loader_tags.py` in a50de50699bb6a24bfb5f118449991aa7608b426 either didn't work or both Django versions have since changed.
As reported here: https://groups.google.com/d/msg/mezzanine-users/_QWfFVB3RVc/ZirizEV9t2YJ
Just pinging @AlexHill as you might have a head's up on this one already.
I made a quick attempt by changing `find_template_loader = context.engine.find_template_loader` to `find_template_loader = context.template.engine.find_template_loader`, which appears to work for 1.8, but then other possibly unrelated exceptions came up.
BTW my quick tip for actually running `overextends` is to modify the first line of `core/templates/admin/base_site.html` to use it instead of `extends`
</issue>
<code>
[start of mezzanine/template/loader_tags.py]
1 from __future__ import unicode_literals
2 from future.builtins import map
3
4 import os
5
6 from django.template import Template, TemplateSyntaxError, TemplateDoesNotExist
7 from django.template.loader_tags import ExtendsNode
8
9 from mezzanine import template
10
11
12 register = template.Library()
13
14
15 class OverExtendsNode(ExtendsNode):
16 """
17 Allows the template ``foo/bar.html`` to extend ``foo/bar.html``,
18 given that there is another version of it that can be loaded. This
19 allows templates to be created in a project that extend their app
20 template counterparts, or even app templates that extend other app
21 templates with the same relative name/path.
22
23 We use our own version of ``find_template``, that uses an explict
24 list of template directories to search for the template, based on
25 the directories that the known template loaders
26 (``app_directories`` and ``filesystem``) use. This list gets stored
27 in the template context, and each time a template is found, its
28 absolute path gets removed from the list, so that subsequent
29 searches for the same relative name/path can find parent templates
30 in other directories, which allows circular inheritance to occur.
31
32 Django's ``app_directories``, ``filesystem``, and ``cached``
33 loaders are supported. The ``eggs`` loader, and any loader that
34 implements ``load_template_source`` with a source string returned,
35 should also theoretically work.
36 """
37
38 def find_template(self, name, context, peeking=False):
39 """
40 Replacement for Django's ``find_template`` that uses the current
41 template context to keep track of which template directories it
42 has used when finding a template. This allows multiple templates
43 with the same relative name/path to be discovered, so that
44 circular template inheritance can occur.
45 """
46
47 # These imports want settings, which aren't available when this
48 # module is imported to ``add_to_builtins``, so do them here.
49 import django.template.loaders.app_directories as app_directories
50 try:
51 # Django >= 1.8
52 app_template_dirs = app_directories.get_app_template_dirs
53 except AttributeError:
54 # Django <= 1.7
55 app_template_dirs = app_directories.app_template_dirs
56
57 try:
58 # Django >= 1.8
59 find_template_loader = context.engine.find_template_loader
60 except AttributeError:
61 # Django <= 1.7
62 from django.template.loaders import find_template_loader
63
64 from mezzanine.conf import settings
65
66 # Store a dictionary in the template context mapping template
67 # names to the lists of template directories available to
68 # search for that template. Each time a template is loaded, its
69 # origin directory is removed from its directories list.
70 context_name = "OVEREXTENDS_DIRS"
71 if context_name not in context:
72 context[context_name] = {}
73 if name not in context[context_name]:
74 all_dirs = list(settings.TEMPLATE_DIRS) + list(app_template_dirs)
75 # os.path.abspath is needed under uWSGI, and also ensures we
76 # have consistent path separators across different OSes.
77 context[context_name][name] = list(map(os.path.abspath, all_dirs))
78
79 # Build a list of template loaders to use. For loaders that wrap
80 # other loaders like the ``cached`` template loader, unwind its
81 # internal loaders and add those instead.
82 loaders = []
83 for loader_name in settings.TEMPLATE_LOADERS:
84 loader = find_template_loader(loader_name)
85 loaders.extend(getattr(loader, "loaders", [loader]))
86
87 # Go through the loaders and try to find the template. When
88 # found, removed its absolute path from the context dict so
89 # that it won't be used again when the same relative name/path
90 # is requested.
91 for loader in loaders:
92 dirs = context[context_name][name]
93 try:
94 source, path = loader.load_template_source(name, dirs)
95 except TemplateDoesNotExist:
96 pass
97 else:
98 # Only remove the absolute path for the initial call in
99 # get_parent, and not when we're peeking during the
100 # second call.
101 if not peeking:
102 remove_path = os.path.abspath(path[:-len(name) - 1])
103 context[context_name][name].remove(remove_path)
104 return Template(source)
105 raise TemplateDoesNotExist(name)
106
107 def get_parent(self, context):
108 """
109 Load the parent template using our own ``find_template``, which
110 will cause its absolute path to not be used again. Then peek at
111 the first node, and if its parent arg is the same as the
112 current parent arg, we know circular inheritance is going to
113 occur, in which case we try and find the template again, with
114 the absolute directory removed from the search list.
115 """
116 parent = self.parent_name.resolve(context)
117 # If parent is a template object, just return it.
118 if hasattr(parent, "render"):
119 return parent
120 template = self.find_template(parent, context)
121 for node in template.nodelist:
122 if (isinstance(node, ExtendsNode) and
123 node.parent_name.resolve(context) == parent):
124 return self.find_template(parent, context, peeking=True)
125 return template
126
127
128 @register.tag
129 def overextends(parser, token):
130 """
131 Extended version of Django's ``extends`` tag that allows circular
132 inheritance to occur, eg a template can both be overridden and
133 extended at once.
134 """
135 bits = token.split_contents()
136 if len(bits) != 2:
137 raise TemplateSyntaxError("'%s' takes one argument" % bits[0])
138 parent_name = parser.compile_filter(bits[1])
139 nodelist = parser.parse()
140 if nodelist.get_nodes_by_type(ExtendsNode):
141 raise TemplateSyntaxError("'%s' cannot appear more than once "
142 "in the same template" % bits[0])
143 return OverExtendsNode(nodelist, parent_name, None)
144
[end of mezzanine/template/loader_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mezzanine/template/loader_tags.py b/mezzanine/template/loader_tags.py
--- a/mezzanine/template/loader_tags.py
+++ b/mezzanine/template/loader_tags.py
@@ -49,17 +49,18 @@
import django.template.loaders.app_directories as app_directories
try:
# Django >= 1.8
- app_template_dirs = app_directories.get_app_template_dirs
+ get_app_template_dirs = app_directories.get_app_template_dirs
+ app_template_dirs = get_app_template_dirs('templates')
except AttributeError:
# Django <= 1.7
app_template_dirs = app_directories.app_template_dirs
try:
# Django >= 1.8
- find_template_loader = context.engine.find_template_loader
+ find_template_loader = context.template.engine.find_template_loader
except AttributeError:
# Django <= 1.7
- from django.template.loaders import find_template_loader
+ from django.template.loader import find_template_loader
from mezzanine.conf import settings
|
{"golden_diff": "diff --git a/mezzanine/template/loader_tags.py b/mezzanine/template/loader_tags.py\n--- a/mezzanine/template/loader_tags.py\n+++ b/mezzanine/template/loader_tags.py\n@@ -49,17 +49,18 @@\n import django.template.loaders.app_directories as app_directories\n try:\n # Django >= 1.8\n- app_template_dirs = app_directories.get_app_template_dirs\n+ get_app_template_dirs = app_directories.get_app_template_dirs\n+ app_template_dirs = get_app_template_dirs('templates')\n except AttributeError:\n # Django <= 1.7\n app_template_dirs = app_directories.app_template_dirs\n \n try:\n # Django >= 1.8\n- find_template_loader = context.engine.find_template_loader\n+ find_template_loader = context.template.engine.find_template_loader\n except AttributeError:\n # Django <= 1.7\n- from django.template.loaders import find_template_loader\n+ from django.template.loader import find_template_loader\n \n from mezzanine.conf import settings\n", "issue": "overextends tag broken in Django 1.7+1.8\nLooks like the changes made to `loader_tags.py` in a50de50699bb6a24bfb5f118449991aa7608b426 either didn't work or both Django versions have since changed.\n\nAs reported here: https://groups.google.com/d/msg/mezzanine-users/_QWfFVB3RVc/ZirizEV9t2YJ\n\nJust pinging @AlexHill as you might have a head's up on this one already. \n\nI made a quick attempt by changing `find_template_loader = context.engine.find_template_loader` to `find_template_loader = context.engine.find_template_loader` which appears to work for 1.8, but then other possibly unrelated exceptions came up.\n\nBTW my quick tip for actually running `overextends` is to modify the first line of `core/templates/admin/base_site.html` to use it instead of `extends`\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom future.builtins import map\n\nimport os\n\nfrom django.template import Template, TemplateSyntaxError, TemplateDoesNotExist\nfrom django.template.loader_tags import ExtendsNode\n\nfrom mezzanine import template\n\n\nregister = template.Library()\n\n\nclass OverExtendsNode(ExtendsNode):\n \"\"\"\n Allows the template ``foo/bar.html`` to extend ``foo/bar.html``,\n given that there is another version of it that can be loaded. This\n allows templates to be created in a project that extend their app\n template counterparts, or even app templates that extend other app\n templates with the same relative name/path.\n\n We use our own version of ``find_template``, that uses an explict\n list of template directories to search for the template, based on\n the directories that the known template loaders\n (``app_directories`` and ``filesystem``) use. This list gets stored\n in the template context, and each time a template is found, its\n absolute path gets removed from the list, so that subsequent\n searches for the same relative name/path can find parent templates\n in other directories, which allows circular inheritance to occur.\n\n Django's ``app_directories``, ``filesystem``, and ``cached``\n loaders are supported. The ``eggs`` loader, and any loader that\n implements ``load_template_source`` with a source string returned,\n should also theoretically work.\n \"\"\"\n\n def find_template(self, name, context, peeking=False):\n \"\"\"\n Replacement for Django's ``find_template`` that uses the current\n template context to keep track of which template directories it\n has used when finding a template. 
This allows multiple templates\n with the same relative name/path to be discovered, so that\n circular template inheritance can occur.\n \"\"\"\n\n # These imports want settings, which aren't available when this\n # module is imported to ``add_to_builtins``, so do them here.\n import django.template.loaders.app_directories as app_directories\n try:\n # Django >= 1.8\n app_template_dirs = app_directories.get_app_template_dirs\n except AttributeError:\n # Django <= 1.7\n app_template_dirs = app_directories.app_template_dirs\n\n try:\n # Django >= 1.8\n find_template_loader = context.engine.find_template_loader\n except AttributeError:\n # Django <= 1.7\n from django.template.loaders import find_template_loader\n\n from mezzanine.conf import settings\n\n # Store a dictionary in the template context mapping template\n # names to the lists of template directories available to\n # search for that template. Each time a template is loaded, its\n # origin directory is removed from its directories list.\n context_name = \"OVEREXTENDS_DIRS\"\n if context_name not in context:\n context[context_name] = {}\n if name not in context[context_name]:\n all_dirs = list(settings.TEMPLATE_DIRS) + list(app_template_dirs)\n # os.path.abspath is needed under uWSGI, and also ensures we\n # have consistent path separators across different OSes.\n context[context_name][name] = list(map(os.path.abspath, all_dirs))\n\n # Build a list of template loaders to use. For loaders that wrap\n # other loaders like the ``cached`` template loader, unwind its\n # internal loaders and add those instead.\n loaders = []\n for loader_name in settings.TEMPLATE_LOADERS:\n loader = find_template_loader(loader_name)\n loaders.extend(getattr(loader, \"loaders\", [loader]))\n\n # Go through the loaders and try to find the template. When\n # found, removed its absolute path from the context dict so\n # that it won't be used again when the same relative name/path\n # is requested.\n for loader in loaders:\n dirs = context[context_name][name]\n try:\n source, path = loader.load_template_source(name, dirs)\n except TemplateDoesNotExist:\n pass\n else:\n # Only remove the absolute path for the initial call in\n # get_parent, and not when we're peeking during the\n # second call.\n if not peeking:\n remove_path = os.path.abspath(path[:-len(name) - 1])\n context[context_name][name].remove(remove_path)\n return Template(source)\n raise TemplateDoesNotExist(name)\n\n def get_parent(self, context):\n \"\"\"\n Load the parent template using our own ``find_template``, which\n will cause its absolute path to not be used again. 
Then peek at\n the first node, and if its parent arg is the same as the\n current parent arg, we know circular inheritance is going to\n occur, in which case we try and find the template again, with\n the absolute directory removed from the search list.\n \"\"\"\n parent = self.parent_name.resolve(context)\n # If parent is a template object, just return it.\n if hasattr(parent, \"render\"):\n return parent\n template = self.find_template(parent, context)\n for node in template.nodelist:\n if (isinstance(node, ExtendsNode) and\n node.parent_name.resolve(context) == parent):\n return self.find_template(parent, context, peeking=True)\n return template\n\n\[email protected]\ndef overextends(parser, token):\n \"\"\"\n Extended version of Django's ``extends`` tag that allows circular\n inheritance to occur, eg a template can both be overridden and\n extended at once.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes one argument\" % bits[0])\n parent_name = parser.compile_filter(bits[1])\n nodelist = parser.parse()\n if nodelist.get_nodes_by_type(ExtendsNode):\n raise TemplateSyntaxError(\"'%s' cannot appear more than once \"\n \"in the same template\" % bits[0])\n return OverExtendsNode(nodelist, parent_name, None)\n", "path": "mezzanine/template/loader_tags.py"}]}
| 2,373 | 229 |
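The golden diff in the entry above boils down to two version shims plus a corrected import path (`django.template.loader`, not `django.template.loaders`). Restated as a sketch that follows the diff rather than an independent check against Django:

```python
def resolve_template_helpers(context):
    """Sketch of the Django 1.7/1.8 compatibility shims used in the patch above."""
    import django.template.loaders.app_directories as app_directories

    try:
        # Django >= 1.8: a function that takes the template subdirectory name
        app_template_dirs = app_directories.get_app_template_dirs("templates")
    except AttributeError:
        # Django <= 1.7: a precomputed tuple of directories
        app_template_dirs = app_directories.app_template_dirs

    try:
        # Django >= 1.8: the engine is reached through context.template
        find_template_loader = context.template.engine.find_template_loader
    except AttributeError:
        # Django <= 1.7: module-level helper
        from django.template.loader import find_template_loader

    return app_template_dirs, find_template_loader
```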
gh_patches_debug_27770
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-2081
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation Updates Megaissue
While working on #1915 I've run into some areas where the documentation is lacking. This issue is a list of topics that I think need work. It may change over time.
- [x] `public` and `constant` as methods applied to storage variables
- [x] `self`
- [x] assignment
- [x] statements, expressions, control structure
- [x] scoping rules
- [x] `for` loops
- [x] tuples
- [x] contract objects
- [ ] memory layout of data types
- [ ] pass-by-reference / pass-by-value
- [ ] abi format
- [x] arithmetic functions (should be moved from types to builtin functions)
- [x] allowable literals for each type
- [x] examples for each of the builtin functions
- [x] `__init__` method
</issue>
<code>
[start of docs/conf.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Vyper documentation build configuration file, created by
5 # sphinx-quickstart on Wed Jul 26 11:18:29 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23 from recommonmark.parser import CommonMarkParser
24
25 # TO DO - Create and Implement Vyper Lexer
26 # def setup(sphinx):
27 # sys.path.insert(0, os.path.abspath('./utils'))
28 # from SolidityLexer import SolidityLexer
29 # sphinx.add_lexer('Python', SolidityLexer())
30
31
32 # -- General configuration ------------------------------------------------
33
34 # If your documentation needs a minimal Sphinx version, state it here.
35 #
36 # needs_sphinx = '1.0'
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 "sphinx.ext.autodoc",
43 "sphinx.ext.intersphinx",
44 ]
45
46 # Add any paths that contain templates here, relative to this directory.
47 templates_path = ["_templates"]
48
49 # The suffix(es) of source filenames.
50 # You can specify multiple suffix as a list of string:
51 #
52 # source_suffix = ['.rst', '.md']
53 source_suffix = ".rst"
54
55 # The master toctree document.
56 master_doc = "index"
57
58 # General information about the project.
59 project = "Vyper"
60 copyright = "2017-2020 CC-BY-4.0 Vyper Team"
61 author = "Vyper Team (originally created by Vitalik Buterin)"
62
63 # The version info for the project you're documenting, acts as replacement for
64 # |version| and |release|, also used in various other places throughout the
65 # built documents.
66 #
67 # The short X.Y version.
68 version = ""
69 # The full version, including alpha/beta/rc tags.
70 release = ""
71
72 # The language for content autogenerated by Sphinx. Refer to documentation
73 # for a list of supported languages.
74 #
75 # This is also used if you do content translation via gettext catalogs.
76 # Usually you set "language" from the command line for these cases.
77 language = "python"
78
79 # List of patterns, relative to source directory, that match files and
80 # directories to ignore when looking for source files.
81 # This patterns also effect to html_static_path and html_extra_path
82 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
83
84 # The name of the Pygments (syntax highlighting) style to use.
85 pygments_style = "sphinx"
86
87 # If true, `todo` and `todoList` produce output, else they produce nothing.
88 todo_include_todos = False
89
90
91 # -- Options for HTML output ----------------------------------------------
92
93 # The theme to use for HTML and HTML Help pages. See the documentation for
94 # a list of builtin themes.
95 #
96 html_theme = "sphinx_rtd_theme"
97
98 # Theme options are theme-specific and customize the look and feel of a theme
99 # further. For a list of options available for each theme, see the
100 # documentation.
101 #
102 # html_theme_options = {}
103
104 # Add any paths that contain custom static files (such as style sheets) here,
105 # relative to this directory. They are copied after the builtin static files,
106 # so a file named "default.css" will overwrite the builtin "default.css".
107 html_static_path = ["_static"]
108
109 html_css_files = ["css/toggle.css", "css/dark.css"]
110
111 html_js_files = ["js/toggle.js"]
112
113 # Custom sidebar templates, must be a dictionary that maps document names
114 # to template names.
115 #
116 # The default sidebars (for documents that don't match any pattern) are
117 # defined by theme itself. Builtin themes are using these templates by
118 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
119 # 'searchbox.html']``.
120 #
121 # html_sidebars = {}
122
123
124 # -- Options for HTMLHelp output ------------------------------------------
125
126 # Output file base name for HTML help builder.
127 htmlhelp_basename = "Vyperdoc"
128
129
130 # -- Options for LaTeX output ---------------------------------------------
131
132 latex_elements = {
133 # The paper size ('letterpaper' or 'a4paper').
134 #
135 # 'papersize': 'letterpaper',
136 # The font size ('10pt', '11pt' or '12pt').
137 #
138 # 'pointsize': '10pt',
139 # Additional stuff for the LaTeX preamble.
140 #
141 # 'preamble': '',
142 # Latex figure (float) alignment
143 #
144 # 'figure_align': 'htbp',
145 }
146
147 # Grouping the document tree into LaTeX files. List of tuples
148 # (source start file, target name, title,
149 # author, documentclass [howto, manual, or own class]).
150 latex_documents = [
151 (
152 master_doc,
153 "Vyper.tex",
154 "Vyper Documentation",
155 "Vyper Team (originally created by Vitalik Buterin)",
156 "manual",
157 ),
158 ]
159
160
161 # -- Options for manual page output ---------------------------------------
162
163 # One entry per manual page. List of tuples
164 # (source start file, name, description, authors, manual section).
165 man_pages = [(master_doc, "vyper", "Vyper Documentation", [author], 1)]
166
167
168 # -- Options for Texinfo output -------------------------------------------
169
170 # Grouping the document tree into Texinfo files. List of tuples
171 # (source start file, target name, title, author,
172 # dir menu entry, description, category)
173 texinfo_documents = [
174 (
175 master_doc,
176 "Vyper",
177 "Vyper Documentation",
178 author,
179 "Vyper",
180 "One line description of project.",
181 "Miscellaneous",
182 ),
183 ]
184
185 source_parsers = {
186 ".md": CommonMarkParser,
187 }
188
189 source_suffix = [".rst", ".md"]
190
191 intersphinx_mapping = {
192 "brownie": ("https://eth-brownie.readthedocs.io/en/stable", None),
193 "pytest": ("https://docs.pytest.org/en/latest/", None),
194 "python": ("https://docs.python.org/3.8/", None),
195 }
196
[end of docs/conf.py]
</code>
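One detail of the listing above worth flagging: `source_suffix` is assigned twice (as `".rst"` near the top and as `[".rst", ".md"]` near the bottom), so only the later list actually takes effect. A consolidated sketch of the apparent intent, keeping the Markdown parser mapping next to it:

```python
# Sketch: declare the recognised suffixes once, beside the parser mapping.
source_parsers = {".md": CommonMarkParser}
source_suffix = [".rst", ".md"]
```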
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -49,11 +49,10 @@
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
-# source_suffix = ['.rst', '.md']
-source_suffix = ".rst"
+source_suffix = [".rst", ".md"]
# The master toctree document.
-master_doc = "index"
+master_doc = "toctree"
# General information about the project.
project = "Vyper"
@@ -110,6 +109,8 @@
html_js_files = ["js/toggle.js"]
+html_logo = "vyper-logo-transparent.svg"
+
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
@@ -129,7 +130,7 @@
# -- Options for LaTeX output ---------------------------------------------
-latex_elements = {
+latex_elements: dict = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
@@ -186,8 +187,6 @@
".md": CommonMarkParser,
}
-source_suffix = [".rst", ".md"]
-
intersphinx_mapping = {
"brownie": ("https://eth-brownie.readthedocs.io/en/stable", None),
"pytest": ("https://docs.pytest.org/en/latest/", None),
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -49,11 +49,10 @@\n # The suffix(es) of source filenames.\n # You can specify multiple suffix as a list of string:\n #\n-# source_suffix = ['.rst', '.md']\n-source_suffix = \".rst\"\n+source_suffix = [\".rst\", \".md\"]\n \n # The master toctree document.\n-master_doc = \"index\"\n+master_doc = \"toctree\"\n \n # General information about the project.\n project = \"Vyper\"\n@@ -110,6 +109,8 @@\n \n html_js_files = [\"js/toggle.js\"]\n \n+html_logo = \"vyper-logo-transparent.svg\"\n+\n # Custom sidebar templates, must be a dictionary that maps document names\n # to template names.\n #\n@@ -129,7 +130,7 @@\n \n # -- Options for LaTeX output ---------------------------------------------\n \n-latex_elements = {\n+latex_elements: dict = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n@@ -186,8 +187,6 @@\n \".md\": CommonMarkParser,\n }\n \n-source_suffix = [\".rst\", \".md\"]\n-\n intersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n", "issue": "Documentation Updates Megaissue\nWhile working on #1915 I've run into some areas where the documentation is lacking. This issue is a list of topics that I think need work. It may change over time.\r\n\r\n- [x] `public` and `constant` as methods applied to storage variables\r\n- [x] `self`\r\n- [x] assignment\r\n- [x] statements, expressions, control structure\r\n- [x] scoping rules\r\n- [x] `for` loops\r\n- [x] tuples\r\n- [x] contract objects\r\n- [ ] memory layout of data types\r\n- [ ] pass-by-reference / pass-by-value\r\n- [ ] abi format\r\n- [x] arithmetic functions (should be moved from types to builtin functions)\r\n- [x] allowable literals for each type\r\n- [x] examples for each of the builtin functions\r\n- [x] `__init__` method\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Vyper documentation build configuration file, created by\n# sphinx-quickstart on Wed Jul 26 11:18:29 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nfrom recommonmark.parser import CommonMarkParser\n\n# TO DO - Create and Implement Vyper Lexer\n# def setup(sphinx):\n# sys.path.insert(0, os.path.abspath('./utils'))\n# from SolidityLexer import SolidityLexer\n# sphinx.add_lexer('Python', SolidityLexer())\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"Vyper\"\ncopyright = \"2017-2020 CC-BY-4.0 Vyper Team\"\nauthor = \"Vyper Team (originally created by Vitalik Buterin)\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_css_files = [\"css/toggle.css\", \"css/dark.css\"]\n\nhtml_js_files = [\"js/toggle.js\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Vyperdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Vyper.tex\",\n \"Vyper Documentation\",\n \"Vyper Team (originally created by Vitalik Buterin)\",\n \"manual\",\n ),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"vyper\", \"Vyper Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Vyper\",\n \"Vyper Documentation\",\n author,\n \"Vyper\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\nsource_parsers = {\n \".md\": CommonMarkParser,\n}\n\nsource_suffix = [\".rst\", \".md\"]\n\nintersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n \"python\": (\"https://docs.python.org/3.8/\", None),\n}\n", "path": "docs/conf.py"}]}
| 2,656 | 325 |
gh_patches_debug_27499
|
rasdani/github-patches
|
git_diff
|
gammapy__gammapy-4863
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Just importing gammapy raises a warning about ray support being experimental
**Gammapy version**
```
Gammapy support for parallelisation with ray is still a prototype and is not fully functional.
System:
python_executable : /home/maxnoe/.local/conda/envs/gammapy-dev/bin/python3.9
python_version : 3.9.16
machine : x86_64
system : Linux
Gammapy package:
version : 1.2.dev201+g514451881.d20230627
path : /home/maxnoe/Projects/gammapy/gammapy
Other packages:
numpy : 1.25.0
scipy : 1.11.0
astropy : 5.3
regions : 0.7
click : 8.1.3
yaml : 6.0
IPython : 8.14.0
jupyterlab : 3.5.3
matplotlib : 3.7.1
pandas : 2.0.2
healpy : 1.16.2
iminuit : 2.22.0
sherpa : 4.15.1
naima : 0.10.0
emcee : 3.1.4
corner : 2.2.2
ray : 2.5.1
Gammapy environment variables:
GAMMAPY_DATA : /home/maxnoe/Projects/gammapy/gammapy-datasets/dev
```
**Bug description**
Just importing a subpackage of gammapy, without doing anything else, raises a warning about ray support being experimental.
I am not doing anything with ray; I just set up the dev environment and imported things:
```
❯ python -c 'import gammapy.datasets'
Gammapy support for parallelisation with ray is still a prototype and is not fully functional.
❯ python -c 'import gammapy.makers'
Gammapy support for parallelisation with ray is still a prototype and is not fully functional.
```
**Expected behavior**
No warnings about things I don't actually use.
**To Reproduce**
See above, dev environment and the imports.
**Other information**
</issue>
<code>
[start of gammapy/utils/parallel.py]
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 """Multiprocessing and multithreading setup"""
3 import importlib
4 import logging
5 import multiprocessing
6 from enum import Enum
7 from gammapy.utils.pbar import progress_bar
8
9 log = logging.getLogger(__name__)
10
11
12 class ParallelBackendEnum(Enum):
13 """Enum for parallel backend"""
14
15 multiprocessing = "multiprocessing"
16 ray = "ray"
17
18 @classmethod
19 def from_str(cls, value):
20 """Get enum from string"""
21 if value is None:
22 value = BACKEND_DEFAULT
23
24 if value == "ray" and not is_ray_available():
25 log.warning("Ray is not installed, falling back to multiprocessing backend")
26 value = "multiprocessing"
27
28 return cls(value)
29
30
31 class PoolMethodEnum(Enum):
32 """Enum for pool method"""
33
34 starmap = "starmap"
35 apply_async = "apply_async"
36
37
38 BACKEND_DEFAULT = ParallelBackendEnum.multiprocessing
39 N_JOBS_DEFAULT = 1
40
41
42 def get_multiprocessing_ray():
43 """Get multiprocessing module for ray backend"""
44 import ray.util.multiprocessing as multiprocessing
45
46 log.warning(
47 "Gammapy support for parallelisation with ray is still a prototype and is not fully functional."
48 )
49 return multiprocessing
50
51
52 def is_ray_initialized():
53 """Check if ray is initialized"""
54 try:
55 from ray import is_initialized
56
57 return is_initialized()
58 except ModuleNotFoundError:
59 return False
60
61
62 def is_ray_available():
63 """Check if ray is available"""
64 try:
65 importlib.import_module("ray")
66 return True
67 except ModuleNotFoundError:
68 return False
69
70
71 class ParallelMixin:
72 """Mixin class to handle parallel processing"""
73
74 @property
75 def n_jobs(self):
76 """Number of jobs (int)"""
77 # TODO: this is somewhat unusual behaviour. It deviates from a normal default value handling
78 if self._n_jobs is None:
79 return N_JOBS_DEFAULT
80
81 return self._n_jobs
82
83 @n_jobs.setter
84 def n_jobs(self, value):
85 """Number of jobs setter (int)"""
86 if not isinstance(value, (int, type(None))):
87 raise ValueError(
88 f"Invalid type: {value!r}, and integer or None is expected."
89 )
90
91 self._n_jobs = value
92
93 @property
94 def parallel_backend(self):
95 """Parallel backend (str)"""
96 if self._parallel_backend is None:
97 return BACKEND_DEFAULT
98
99 return self._parallel_backend
100
101 @parallel_backend.setter
102 def parallel_backend(self, value):
103 """Parallel backend setter (str)"""
104 self._parallel_backend = ParallelBackendEnum.from_str(value).value
105
106
107 def run_multiprocessing(
108 func,
109 inputs,
110 backend=None,
111 pool_kwargs=None,
112 method="starmap",
113 method_kwargs=None,
114 task_name="",
115 ):
116 """Run function in a loop or in Parallel
117
118 Notes
119 -----
120 The progress bar can be displayed for this function.
121
122 Parameters
123 ----------
124 func : function
125 Function to run
126 inputs : list
127 List of arguments to pass to the function
128 backend : {'multiprocessing', 'ray'}
129 Backend to use.
130 pool_kwargs : dict
131 Keyword arguments passed to the pool. The number of processes is limited
132 to the number of physical CPUs.
133 method : {'starmap', 'apply_async'}
134 Pool method to use.
135 method_kwargs : dict
136 Keyword arguments passed to the method
137 task_name : str
138 Name of the task to display in the progress bar
139 """
140 backend = ParallelBackendEnum.from_str(backend)
141
142 if method_kwargs is None:
143 method_kwargs = {}
144
145 if pool_kwargs is None:
146 pool_kwargs = {}
147
148 processes = pool_kwargs.get("processes", N_JOBS_DEFAULT)
149
150 multiprocessing = PARALLEL_BACKEND_MODULES[backend]
151
152 if backend == ParallelBackendEnum.multiprocessing:
153 cpu_count = multiprocessing.cpu_count()
154
155 if processes > cpu_count:
156 log.info(f"Limiting number of processes from {processes} to {cpu_count}")
157 processes = cpu_count
158
159 if multiprocessing.current_process().name != "MainProcess":
 160             # subprocesses cannot have children
161 processes = 1
162 # TODO: check for ray
163
164 if processes == 1:
165 return run_loop(
166 func=func, inputs=inputs, method_kwargs=method_kwargs, task_name=task_name
167 )
168
169 if backend == ParallelBackendEnum.ray:
170 address = "auto" if is_ray_initialized() else None
171 pool_kwargs.setdefault("ray_address", address)
172
173 log.info(f"Using {processes} processes to compute {task_name}")
174
175 with multiprocessing.Pool(**pool_kwargs) as pool:
176 pool_func = POOL_METHODS[PoolMethodEnum(method)]
177 results = pool_func(
178 pool=pool,
179 func=func,
180 inputs=inputs,
181 method_kwargs=method_kwargs,
182 task_name=task_name,
183 )
184
185 return results
186
187
188 def run_loop(func, inputs, method_kwargs=None, task_name=""):
 189     """Loop over inputs and run the function"""
190 results = []
191
192 callback = method_kwargs.get("callback", None)
193
194 for arguments in progress_bar(inputs, desc=task_name):
195 result = func(*arguments)
196
197 if callback is not None:
198 result = callback(result)
199
200 results.append(result)
201
202 return results
203
204
205 def run_pool_star_map(pool, func, inputs, method_kwargs=None, task_name=""):
206 """Run function in parallel"""
207 return pool.starmap(func, progress_bar(inputs, desc=task_name), **method_kwargs)
208
209
210 def run_pool_async(pool, func, inputs, method_kwargs=None, task_name=""):
211 """Run function in parallel async"""
212 results = []
213
214 for arguments in progress_bar(inputs, desc=task_name):
215 result = pool.apply_async(func, arguments, **method_kwargs)
216 results.append(result)
217 # wait async run is done
218 [result.wait() for result in results]
219 return results
220
221
222 POOL_METHODS = {
223 PoolMethodEnum.starmap: run_pool_star_map,
224 PoolMethodEnum.apply_async: run_pool_async,
225 }
226
227 PARALLEL_BACKEND_MODULES = {
228 ParallelBackendEnum.multiprocessing: multiprocessing,
229 }
230
231 if is_ray_available():
232 PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()
233
[end of gammapy/utils/parallel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gammapy/utils/parallel.py b/gammapy/utils/parallel.py
--- a/gammapy/utils/parallel.py
+++ b/gammapy/utils/parallel.py
@@ -2,7 +2,6 @@
"""Multiprocessing and multithreading setup"""
import importlib
import logging
-import multiprocessing
from enum import Enum
from gammapy.utils.pbar import progress_bar
@@ -39,6 +38,13 @@
N_JOBS_DEFAULT = 1
+def get_multiprocessing():
+ """Get multiprocessing module"""
+ import multiprocessing
+
+ return multiprocessing
+
+
def get_multiprocessing_ray():
"""Get multiprocessing module for ray backend"""
import ray.util.multiprocessing as multiprocessing
@@ -147,7 +153,7 @@
processes = pool_kwargs.get("processes", N_JOBS_DEFAULT)
- multiprocessing = PARALLEL_BACKEND_MODULES[backend]
+ multiprocessing = PARALLEL_BACKEND_MODULES[backend]()
if backend == ParallelBackendEnum.multiprocessing:
cpu_count = multiprocessing.cpu_count()
@@ -225,8 +231,6 @@
}
PARALLEL_BACKEND_MODULES = {
- ParallelBackendEnum.multiprocessing: multiprocessing,
+ ParallelBackendEnum.multiprocessing: get_multiprocessing,
+ ParallelBackendEnum.ray: get_multiprocessing_ray,
}
-
-if is_ray_available():
- PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()
|
{"golden_diff": "diff --git a/gammapy/utils/parallel.py b/gammapy/utils/parallel.py\n--- a/gammapy/utils/parallel.py\n+++ b/gammapy/utils/parallel.py\n@@ -2,7 +2,6 @@\n \"\"\"Multiprocessing and multithreading setup\"\"\"\n import importlib\n import logging\n-import multiprocessing\n from enum import Enum\n from gammapy.utils.pbar import progress_bar\n \n@@ -39,6 +38,13 @@\n N_JOBS_DEFAULT = 1\n \n \n+def get_multiprocessing():\n+ \"\"\"Get multiprocessing module\"\"\"\n+ import multiprocessing\n+\n+ return multiprocessing\n+\n+\n def get_multiprocessing_ray():\n \"\"\"Get multiprocessing module for ray backend\"\"\"\n import ray.util.multiprocessing as multiprocessing\n@@ -147,7 +153,7 @@\n \n processes = pool_kwargs.get(\"processes\", N_JOBS_DEFAULT)\n \n- multiprocessing = PARALLEL_BACKEND_MODULES[backend]\n+ multiprocessing = PARALLEL_BACKEND_MODULES[backend]()\n \n if backend == ParallelBackendEnum.multiprocessing:\n cpu_count = multiprocessing.cpu_count()\n@@ -225,8 +231,6 @@\n }\n \n PARALLEL_BACKEND_MODULES = {\n- ParallelBackendEnum.multiprocessing: multiprocessing,\n+ ParallelBackendEnum.multiprocessing: get_multiprocessing,\n+ ParallelBackendEnum.ray: get_multiprocessing_ray,\n }\n-\n-if is_ray_available():\n- PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()\n", "issue": "Just importing gammapy raises a warning about ray support being experimental\n**Gammapy version**\r\n\r\n```\r\nGammapy support for parallelisation with ray is still a prototype and is not fully functional.\r\n\r\nSystem:\r\n\r\n\tpython_executable : /home/maxnoe/.local/conda/envs/gammapy-dev/bin/python3.9 \r\n\tpython_version : 3.9.16 \r\n\tmachine : x86_64 \r\n\tsystem : Linux \r\n\r\n\r\nGammapy package:\r\n\r\n\tversion : 1.2.dev201+g514451881.d20230627 \r\n\tpath : /home/maxnoe/Projects/gammapy/gammapy \r\n\r\n\r\nOther packages:\r\n\r\n\tnumpy : 1.25.0 \r\n\tscipy : 1.11.0 \r\n\tastropy : 5.3 \r\n\tregions : 0.7 \r\n\tclick : 8.1.3 \r\n\tyaml : 6.0 \r\n\tIPython : 8.14.0 \r\n\tjupyterlab : 3.5.3 \r\n\tmatplotlib : 3.7.1 \r\n\tpandas : 2.0.2 \r\n\thealpy : 1.16.2 \r\n\timinuit : 2.22.0 \r\n\tsherpa : 4.15.1 \r\n\tnaima : 0.10.0 \r\n\temcee : 3.1.4 \r\n\tcorner : 2.2.2 \r\n\tray : 2.5.1 \r\n\r\n\r\nGammapy environment variables:\r\n\r\n\tGAMMAPY_DATA : /home/maxnoe/Projects/gammapy/gammapy-datasets/dev \r\n```\r\n\r\n**Bug description**\r\n\r\nJust importing a subpackage of gammapy, without doing anything else, raises a warning about ray support being experimental.\r\n\r\nI am not doing anything with ray, I just setup the dev environment and imported things:\r\n\r\n```\r\n\u276f python -c 'import gammapy.datasets'\r\nGammapy support for parallelisation with ray is still a prototype and is not fully functional.\r\n\u276f python -c 'import gammapy.makers'\r\nGammapy support for parallelisation with ray is still a prototype and is not fully functional.\r\n```\r\n\r\n**Expected behavior**\r\n\r\nNo warnings about things I don't actually use.\r\n\r\n\r\n**To Reproduce**\r\nSee above, dev environment and the imports.\r\n\r\n**Other information**\r\n\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Multiprocessing and multithreading setup\"\"\"\nimport importlib\nimport logging\nimport multiprocessing\nfrom enum import Enum\nfrom gammapy.utils.pbar import progress_bar\n\nlog = logging.getLogger(__name__)\n\n\nclass ParallelBackendEnum(Enum):\n \"\"\"Enum for parallel backend\"\"\"\n\n multiprocessing = \"multiprocessing\"\n 
ray = \"ray\"\n\n @classmethod\n def from_str(cls, value):\n \"\"\"Get enum from string\"\"\"\n if value is None:\n value = BACKEND_DEFAULT\n\n if value == \"ray\" and not is_ray_available():\n log.warning(\"Ray is not installed, falling back to multiprocessing backend\")\n value = \"multiprocessing\"\n\n return cls(value)\n\n\nclass PoolMethodEnum(Enum):\n \"\"\"Enum for pool method\"\"\"\n\n starmap = \"starmap\"\n apply_async = \"apply_async\"\n\n\nBACKEND_DEFAULT = ParallelBackendEnum.multiprocessing\nN_JOBS_DEFAULT = 1\n\n\ndef get_multiprocessing_ray():\n \"\"\"Get multiprocessing module for ray backend\"\"\"\n import ray.util.multiprocessing as multiprocessing\n\n log.warning(\n \"Gammapy support for parallelisation with ray is still a prototype and is not fully functional.\"\n )\n return multiprocessing\n\n\ndef is_ray_initialized():\n \"\"\"Check if ray is initialized\"\"\"\n try:\n from ray import is_initialized\n\n return is_initialized()\n except ModuleNotFoundError:\n return False\n\n\ndef is_ray_available():\n \"\"\"Check if ray is available\"\"\"\n try:\n importlib.import_module(\"ray\")\n return True\n except ModuleNotFoundError:\n return False\n\n\nclass ParallelMixin:\n \"\"\"Mixin class to handle parallel processing\"\"\"\n\n @property\n def n_jobs(self):\n \"\"\"Number of jobs (int)\"\"\"\n # TODO: this is somewhat unusual behaviour. It deviates from a normal default value handling\n if self._n_jobs is None:\n return N_JOBS_DEFAULT\n\n return self._n_jobs\n\n @n_jobs.setter\n def n_jobs(self, value):\n \"\"\"Number of jobs setter (int)\"\"\"\n if not isinstance(value, (int, type(None))):\n raise ValueError(\n f\"Invalid type: {value!r}, and integer or None is expected.\"\n )\n\n self._n_jobs = value\n\n @property\n def parallel_backend(self):\n \"\"\"Parallel backend (str)\"\"\"\n if self._parallel_backend is None:\n return BACKEND_DEFAULT\n\n return self._parallel_backend\n\n @parallel_backend.setter\n def parallel_backend(self, value):\n \"\"\"Parallel backend setter (str)\"\"\"\n self._parallel_backend = ParallelBackendEnum.from_str(value).value\n\n\ndef run_multiprocessing(\n func,\n inputs,\n backend=None,\n pool_kwargs=None,\n method=\"starmap\",\n method_kwargs=None,\n task_name=\"\",\n):\n \"\"\"Run function in a loop or in Parallel\n\n Notes\n -----\n The progress bar can be displayed for this function.\n\n Parameters\n ----------\n func : function\n Function to run\n inputs : list\n List of arguments to pass to the function\n backend : {'multiprocessing', 'ray'}\n Backend to use.\n pool_kwargs : dict\n Keyword arguments passed to the pool. 
The number of processes is limited\n to the number of physical CPUs.\n method : {'starmap', 'apply_async'}\n Pool method to use.\n method_kwargs : dict\n Keyword arguments passed to the method\n task_name : str\n Name of the task to display in the progress bar\n \"\"\"\n backend = ParallelBackendEnum.from_str(backend)\n\n if method_kwargs is None:\n method_kwargs = {}\n\n if pool_kwargs is None:\n pool_kwargs = {}\n\n processes = pool_kwargs.get(\"processes\", N_JOBS_DEFAULT)\n\n multiprocessing = PARALLEL_BACKEND_MODULES[backend]\n\n if backend == ParallelBackendEnum.multiprocessing:\n cpu_count = multiprocessing.cpu_count()\n\n if processes > cpu_count:\n log.info(f\"Limiting number of processes from {processes} to {cpu_count}\")\n processes = cpu_count\n\n if multiprocessing.current_process().name != \"MainProcess\":\n # subprocesses cannot have childs\n processes = 1\n # TODO: check for ray\n\n if processes == 1:\n return run_loop(\n func=func, inputs=inputs, method_kwargs=method_kwargs, task_name=task_name\n )\n\n if backend == ParallelBackendEnum.ray:\n address = \"auto\" if is_ray_initialized() else None\n pool_kwargs.setdefault(\"ray_address\", address)\n\n log.info(f\"Using {processes} processes to compute {task_name}\")\n\n with multiprocessing.Pool(**pool_kwargs) as pool:\n pool_func = POOL_METHODS[PoolMethodEnum(method)]\n results = pool_func(\n pool=pool,\n func=func,\n inputs=inputs,\n method_kwargs=method_kwargs,\n task_name=task_name,\n )\n\n return results\n\n\ndef run_loop(func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Loop over inputs an run function\"\"\"\n results = []\n\n callback = method_kwargs.get(\"callback\", None)\n\n for arguments in progress_bar(inputs, desc=task_name):\n result = func(*arguments)\n\n if callback is not None:\n result = callback(result)\n\n results.append(result)\n\n return results\n\n\ndef run_pool_star_map(pool, func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Run function in parallel\"\"\"\n return pool.starmap(func, progress_bar(inputs, desc=task_name), **method_kwargs)\n\n\ndef run_pool_async(pool, func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Run function in parallel async\"\"\"\n results = []\n\n for arguments in progress_bar(inputs, desc=task_name):\n result = pool.apply_async(func, arguments, **method_kwargs)\n results.append(result)\n # wait async run is done\n [result.wait() for result in results]\n return results\n\n\nPOOL_METHODS = {\n PoolMethodEnum.starmap: run_pool_star_map,\n PoolMethodEnum.apply_async: run_pool_async,\n}\n\nPARALLEL_BACKEND_MODULES = {\n ParallelBackendEnum.multiprocessing: multiprocessing,\n}\n\nif is_ray_available():\n PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()\n", "path": "gammapy/utils/parallel.py"}]}
| 3,061 | 320 |
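The patch above works by deferring every backend import to call time, so that importing gammapy itself never touches ray. A condensed sketch of that pattern, reusing the names from the patch (the enum, the module-level `log` and the `backend` variable are assumed to exist as in the file listing):

```python
def get_multiprocessing():
    """Import the stdlib backend only when a pool is actually requested."""
    import multiprocessing

    return multiprocessing


def get_multiprocessing_ray():
    """Import the ray backend lazily, so the prototype warning fires on use, not on import."""
    import ray.util.multiprocessing as multiprocessing

    log.warning(
        "Gammapy support for parallelisation with ray is still a prototype and is not fully functional."
    )
    return multiprocessing


# The registry stores factories instead of already-imported modules ...
PARALLEL_BACKEND_MODULES = {
    ParallelBackendEnum.multiprocessing: get_multiprocessing,
    ParallelBackendEnum.ray: get_multiprocessing_ray,
}

# ... and run_multiprocessing resolves the module at use time:
multiprocessing = PARALLEL_BACKEND_MODULES[backend]()
```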
gh_patches_debug_35887
|
rasdani/github-patches
|
git_diff
|
facebookresearch__xformers-308
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot pickle nystrom based model
When I `torch.save` my model with `nystrom` attention I get the following pickle error:
```py
AttributeError: Can't pickle local object 'get_avg_pool.<locals>.avg_pool'
```
I believe it comes from this function:
https://github.com/facebookresearch/xformers/blob/9232b2d27a775a43173e2fd86c03251ab64f7ede/xformers/components/attention/nystrom.py#L60
</issue>
<code>
[start of xformers/components/attention/nystrom.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 import logging
8 from dataclasses import dataclass
9 from typing import Optional
10
11 import torch
12 import torch.nn as nn
13
14 from xformers.components.attention import Attention, AttentionConfig, register_attention
15 from xformers.components.attention.core import (
16 scaled_dot_product_attention,
17 scaled_query_key_softmax,
18 )
19 from xformers.components.attention.utils import (
20 bool_mask_to_additive,
21 iterative_pinv,
22 reshape_key_padding_mask,
23 )
24
25
26 @dataclass
27 class NystromSelfAttentionConfig(AttentionConfig):
28 """
29 num_heads Number of heads.
30 num_landmarks Number of landmarks to use for softmax approximation. 64 often sufficient for a good
31 approximation according to https://arxiv.org/pdf/2102.03902.pdf.
32 causal Apply a causal mask, in that the attention cannot be applied to the future.
33 use_razavi_pinverse If true, use iterative method from (Razavi et al. 2014) to approximate the Moore-Penrose
34 inverse, otherwise use standard torch inverse.
35 pinverse_original_init True if using original initialization when calculating Moore-Penrose pseudo inverse using
36 method from (Razavi et al. 2014).
37 False if using exact coefficient computation (leads to faster convergence).
38 inv_iterations Number of iterations for calculating the Moore-Penrose pseudo inverse.
39 v_skip_connection A module that will take V as input and will be added as a skip connection to the
40 softmax approximation. A skip connection is added in the paper to help with training.
41 conv_kernel_size Kernel size for convolution optionally added to help in training.
42 If v_skip_connection is not specified, this will be used to define the default
43 depth wise convolution used as a skip connection.
44 If both conv_kernel_size and v_skip_connection are None, no skip connection will
45 be added.
46 landmark_pooling Which module to use when computing landmarks. Default is AdaptiveAvgPool2d.
47 """
48
49 num_heads: int
50 num_landmarks: Optional[int]
51 landmark_pooling: Optional[nn.Module]
52 causal: Optional[bool]
53 pinverse_original_init: Optional[bool]
54 inv_iterations: Optional[int]
55 v_skip_connection: Optional[nn.Module]
56 conv_kernel_size: Optional[int]
57 use_razavi_pinverse: Optional[bool]
58
59
60 def get_avg_pool(n: int):
61 def avg_pool(x: torch.Tensor):
62 # Average independently for every segment in the sequence dimension
63 seq_len = x.shape[1]
64 head_dim = x.shape[2]
65 segments = seq_len // n
66
67 # Dimensions are a match
68 if seq_len % n == 0:
69 return x.reshape(
70 -1,
71 n,
72 segments,
73 head_dim,
74 ).mean(dim=-2)
75
76 # Handle the last segment boundary being off
77 n_round = n - seq_len % n
78
79 x_avg_round = (
80 x[:, : n_round * segments, :]
81 .reshape(-1, n_round, segments, head_dim)
82 .mean(dim=-2)
83 )
84 x_avg_off = (
85 x[:, n_round * segments :, :]
86 .reshape(-1, n - n_round, segments + 1, head_dim)
87 .mean(dim=-2)
88 )
89 return torch.cat((x_avg_round, x_avg_off), dim=-2)
90
91 return avg_pool
92
93
94 @register_attention("nystrom", NystromSelfAttentionConfig)
95 class NystromAttention(Attention):
96 # TODO: update defaults for use_razavi_pinverse and inv_iterations
97 def __init__(
98 self,
99 dropout: float,
100 num_heads: int,
101 num_landmarks: int = 64,
102 landmark_pooling: Optional[nn.Module] = None,
103 causal: bool = False,
104 use_razavi_pinverse: bool = True,
105 pinverse_original_init: bool = False,
106 inv_iterations: int = 6, # recommended default in paper was 6.
107 v_skip_connection: Optional[nn.Module] = None,
108 conv_kernel_size: Optional[int] = None,
109 *args,
110 **kwargs,
111 ):
112 """
113 Nystrom attention mechanism, from Nystromformer_.
114 ::
115
116 "A Nystrom-based Algorithm for Approximating Self-Attention."
117 Xiong, Y., Zeng, Z., Chakraborty, R., Tan, M., Fung, G., Li, Y., Singh, V. (2021)
118
119 Reference codebase: https://github.com/mlpen/Nystromformer
120
121 .. _Nystromformer: https://arxiv.org/pdf/2102.03902.pdf
122
123 """
124 super().__init__()
125 # merged key padding mask and attention mask is not accepted
126 self.requires_separate_masks = True
127 self.num_landmarks = num_landmarks
128 # TODO: should be able to not have to pass in num_heads
129 self.num_heads = num_heads
130 self.use_razavi_pinverse = use_razavi_pinverse
131 self.pinverse_original_init = pinverse_original_init
132 self.inv_iterations = inv_iterations
133 self.attn_drop = nn.Dropout(dropout)
134 self.skip_connection = v_skip_connection
135 self.causal = causal
136
137 if self.skip_connection is None and conv_kernel_size is not None:
138 self.skip_connection = nn.Conv2d(
139 in_channels=self.num_heads,
140 out_channels=self.num_heads,
141 kernel_size=(conv_kernel_size, 1),
142 padding=(conv_kernel_size // 2, 0),
143 bias=False,
144 groups=self.num_heads,
145 )
146
147 if landmark_pooling is not None:
148 self.landmark_pooling = landmark_pooling
149 else:
150 self.landmark_pooling = get_avg_pool(self.num_landmarks)
151
152 # Optional lower triangular masks for causal attention
153 self.causal_mask_1: Optional[torch.Tensor] = None
154 self.causal_mask_2: Optional[torch.Tensor] = None
155 self.causal_mask_3: Optional[torch.Tensor] = None
156
157 # This attention does not support attention masks
158 self.supports_attention_mask = False
159 self.supports_key_padding_mask = True
160
161 def forward(
162 self,
163 q: torch.Tensor,
164 k: torch.Tensor,
165 v: torch.Tensor,
166 key_padding_mask: Optional[torch.Tensor] = None,
167 *args,
168 **kwargs,
169 ):
170 r"""
171 key_padding_mask Only a key padding mask is accepted here. The size must be (batch size, sequence length) or
172 (batch size * num_heads, 1, sequence length). If dimensions are not correct, the mask will
173 be ignored. An additive mask is expected, meaning float values using "-inf" to mask values
174 """
175
176 batched_dim = k.size(0)
177 seq_len = k.size(-2)
178 tt = {"dtype": q.dtype, "device": q.device}
179
180 if key_padding_mask is not None:
181 if key_padding_mask.dtype == torch.bool:
182 logging.warning(
183 "Bool mask found, but an additive mask is expected. Converting but this is slow"
184 )
185 key_padding_mask = bool_mask_to_additive(key_padding_mask)
186
187 if key_padding_mask.ndim == 2:
188 key_padding_mask = reshape_key_padding_mask(
189 key_padding_mask, batched_dim
190 )
191
192 assert key_padding_mask.size() == (batched_dim, 1, seq_len), (
193 f"key_padding_mask has invalid dimensions {key_padding_mask.size()}."
194 f" Must have dimensions {batched_dim, 1, seq_len} or (batch_size, {seq_len})."
195 )
196
197 if self.num_landmarks >= seq_len:
198 mask: Optional[torch.Tensor] = None
199
200 if self.causal:
201 mask = self._triu_mask(batched_dim, seq_len, seq_len, **tt)
202
203 if key_padding_mask is not None:
204 mask = key_padding_mask if mask is None else mask + key_padding_mask
205
206 x = scaled_dot_product_attention(q=q, k=k, v=v, att_mask=mask)
207
208 else:
209 q_landmarks = self.landmark_pooling(q)
210 k_landmarks = self.landmark_pooling(k)
211
212 if self.causal and (
213 self.causal_mask_1 is None
214 or (batched_dim, seq_len, self.num_landmarks)
215 != self.causal_mask_1.size()
216 ):
217 self.causal_mask_1 = self._triu_mask(
218 batched_dim, seq_len, self.num_landmarks, **tt
219 )
220 self.causal_mask_2 = self._triu_mask(
221 batched_dim, self.num_landmarks, self.num_landmarks, **tt
222 )
223 self.causal_mask_3 = self._triu_mask(
224 batched_dim, self.num_landmarks, seq_len, **tt
225 )
226
227 mask_1: Optional[torch.Tensor] = self.causal_mask_1
228 mask_2: Optional[torch.Tensor] = self.causal_mask_2
229 mask_3: Optional[torch.Tensor] = self.causal_mask_3
230 if key_padding_mask is not None:
231 mask_1 = (
232 key_padding_mask.transpose(-2, -1)
233 if mask_1 is None
234 else mask_1 + key_padding_mask.transpose(-2, -1)
235 )
236 mask_3 = (
237 key_padding_mask if mask_3 is None else mask_3 + key_padding_mask
238 )
239
240 kernel_1 = scaled_query_key_softmax(q=q, k=k_landmarks, att_mask=mask_1)
241 kernel_2 = scaled_query_key_softmax(
242 q=q_landmarks, k=k_landmarks, att_mask=mask_2
243 )
244 kernel_3 = scaled_dot_product_attention(
245 q=q_landmarks, k=k, v=v, att_mask=mask_3
246 )
247
248 kernel_2_inv = (
249 iterative_pinv(
250 kernel_2, self.inv_iterations, self.pinverse_original_init
251 )
252 if self.use_razavi_pinverse
253 else torch.linalg.pinv(kernel_2)
254 )
255
256 x = torch.matmul(
257 torch.matmul(
258 kernel_1,
259 kernel_2_inv,
260 ),
261 kernel_3,
262 )
263
264 if self.skip_connection:
265 # Assumption here is that v is 3D.
266 v_conv = self.skip_connection(
267 v.reshape(-1, self.num_heads, v.size(-2), v.size(-1))
268 )
269 x += v_conv.reshape(-1, v_conv.size(-2), v_conv.size(-1))
270 x = self.attn_drop(x)
271 return x
272
273 def _triu_mask(self, dim_1: int, dim_2: int, dim_3: int, **kwargs) -> torch.Tensor:
274 device = kwargs["device"]
275 dtype = kwargs["dtype"]
276
277 return torch.triu(
278 torch.ones(dim_2, dim_3, dtype=dtype, device=device) * float("-inf"),
279 diagonal=1,
280 ).expand(
281 dim_1, -1, -1
282 ) # micro optim, save memory on the batch dimension
283
[end of xformers/components/attention/nystrom.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xformers/components/attention/nystrom.py b/xformers/components/attention/nystrom.py
--- a/xformers/components/attention/nystrom.py
+++ b/xformers/components/attention/nystrom.py
@@ -57,24 +57,29 @@
use_razavi_pinverse: Optional[bool]
-def get_avg_pool(n: int):
- def avg_pool(x: torch.Tensor):
+class AvgPool(nn.Module):
+ def __init__(self, n: int):
+ super().__init__()
+ self.n = n
+
+ def forward(self, x: torch.Tensor):
# Average independently for every segment in the sequence dimension
seq_len = x.shape[1]
head_dim = x.shape[2]
- segments = seq_len // n
+ segments = seq_len // self.n
+ assert segments > 0, "num_landmarks should be smaller than the sequence length"
# Dimensions are a match
- if seq_len % n == 0:
+ if seq_len % self.n == 0:
return x.reshape(
-1,
- n,
+ self.n,
segments,
head_dim,
).mean(dim=-2)
# Handle the last segment boundary being off
- n_round = n - seq_len % n
+ n_round = self.n - seq_len % self.n
x_avg_round = (
x[:, : n_round * segments, :]
@@ -83,13 +88,11 @@
)
x_avg_off = (
x[:, n_round * segments :, :]
- .reshape(-1, n - n_round, segments + 1, head_dim)
+ .reshape(-1, self.n - n_round, segments + 1, head_dim)
.mean(dim=-2)
)
return torch.cat((x_avg_round, x_avg_off), dim=-2)
- return avg_pool
-
@register_attention("nystrom", NystromSelfAttentionConfig)
class NystromAttention(Attention):
@@ -147,7 +150,7 @@
if landmark_pooling is not None:
self.landmark_pooling = landmark_pooling
else:
- self.landmark_pooling = get_avg_pool(self.num_landmarks)
+ self.landmark_pooling = AvgPool(n=self.num_landmarks)
# Optional lower triangular masks for causal attention
self.causal_mask_1: Optional[torch.Tensor] = None
|
{"golden_diff": "diff --git a/xformers/components/attention/nystrom.py b/xformers/components/attention/nystrom.py\n--- a/xformers/components/attention/nystrom.py\n+++ b/xformers/components/attention/nystrom.py\n@@ -57,24 +57,29 @@\n use_razavi_pinverse: Optional[bool]\n \n \n-def get_avg_pool(n: int):\n- def avg_pool(x: torch.Tensor):\n+class AvgPool(nn.Module):\n+ def __init__(self, n: int):\n+ super().__init__()\n+ self.n = n\n+\n+ def forward(self, x: torch.Tensor):\n # Average independently for every segment in the sequence dimension\n seq_len = x.shape[1]\n head_dim = x.shape[2]\n- segments = seq_len // n\n+ segments = seq_len // self.n\n+ assert segments > 0, \"num_landmarks should be smaller than the sequence length\"\n \n # Dimensions are a match\n- if seq_len % n == 0:\n+ if seq_len % self.n == 0:\n return x.reshape(\n -1,\n- n,\n+ self.n,\n segments,\n head_dim,\n ).mean(dim=-2)\n \n # Handle the last segment boundary being off\n- n_round = n - seq_len % n\n+ n_round = self.n - seq_len % self.n\n \n x_avg_round = (\n x[:, : n_round * segments, :]\n@@ -83,13 +88,11 @@\n )\n x_avg_off = (\n x[:, n_round * segments :, :]\n- .reshape(-1, n - n_round, segments + 1, head_dim)\n+ .reshape(-1, self.n - n_round, segments + 1, head_dim)\n .mean(dim=-2)\n )\n return torch.cat((x_avg_round, x_avg_off), dim=-2)\n \n- return avg_pool\n-\n \n @register_attention(\"nystrom\", NystromSelfAttentionConfig)\n class NystromAttention(Attention):\n@@ -147,7 +150,7 @@\n if landmark_pooling is not None:\n self.landmark_pooling = landmark_pooling\n else:\n- self.landmark_pooling = get_avg_pool(self.num_landmarks)\n+ self.landmark_pooling = AvgPool(n=self.num_landmarks)\n \n # Optional lower triangular masks for causal attention\n self.causal_mask_1: Optional[torch.Tensor] = None\n", "issue": "Cannot pickle nystrom based model\nWhen I `torch.save` my model with `nystrom` attention I get the following pickle error:\r\n\r\n```py\r\nAttributeError: Can't pickle local object 'get_avg_pool.<locals>.avg_pool'\r\n```\r\n\r\nI believe coming from this function:\r\n\r\nhttps://github.com/facebookresearch/xformers/blob/9232b2d27a775a43173e2fd86c03251ab64f7ede/xformers/components/attention/nystrom.py#L60\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport logging\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom xformers.components.attention import Attention, AttentionConfig, register_attention\nfrom xformers.components.attention.core import (\n scaled_dot_product_attention,\n scaled_query_key_softmax,\n)\nfrom xformers.components.attention.utils import (\n bool_mask_to_additive,\n iterative_pinv,\n reshape_key_padding_mask,\n)\n\n\n@dataclass\nclass NystromSelfAttentionConfig(AttentionConfig):\n \"\"\"\n num_heads Number of heads.\n num_landmarks Number of landmarks to use for softmax approximation. 64 often sufficient for a good\n approximation according to https://arxiv.org/pdf/2102.03902.pdf.\n causal Apply a causal mask, in that the attention cannot be applied to the future.\n use_razavi_pinverse If true, use iterative method from (Razavi et al. 
2014) to approximate the Moore-Penrose\n inverse, otherwise use standard torch inverse.\n pinverse_original_init True if using original initialization when calculating Moore-Penrose pseudo inverse using\n method from (Razavi et al. 2014).\n False if using exact coefficient computation (leads to faster convergence).\n inv_iterations Number of iterations for calculating the Moore-Penrose pseudo inverse.\n v_skip_connection A module that will take V as input and will be added as a skip connection to the\n softmax approximation. A skip connection is added in the paper to help with training.\n conv_kernel_size Kernel size for convolution optionally added to help in training.\n If v_skip_connection is not specified, this will be used to define the default\n depth wise convolution used as a skip connection.\n If both conv_kernel_size and v_skip_connection are None, no skip connection will\n be added.\n landmark_pooling Which module to use when computing landmarks. Default is AdaptiveAvgPool2d.\n \"\"\"\n\n num_heads: int\n num_landmarks: Optional[int]\n landmark_pooling: Optional[nn.Module]\n causal: Optional[bool]\n pinverse_original_init: Optional[bool]\n inv_iterations: Optional[int]\n v_skip_connection: Optional[nn.Module]\n conv_kernel_size: Optional[int]\n use_razavi_pinverse: Optional[bool]\n\n\ndef get_avg_pool(n: int):\n def avg_pool(x: torch.Tensor):\n # Average independently for every segment in the sequence dimension\n seq_len = x.shape[1]\n head_dim = x.shape[2]\n segments = seq_len // n\n\n # Dimensions are a match\n if seq_len % n == 0:\n return x.reshape(\n -1,\n n,\n segments,\n head_dim,\n ).mean(dim=-2)\n\n # Handle the last segment boundary being off\n n_round = n - seq_len % n\n\n x_avg_round = (\n x[:, : n_round * segments, :]\n .reshape(-1, n_round, segments, head_dim)\n .mean(dim=-2)\n )\n x_avg_off = (\n x[:, n_round * segments :, :]\n .reshape(-1, n - n_round, segments + 1, head_dim)\n .mean(dim=-2)\n )\n return torch.cat((x_avg_round, x_avg_off), dim=-2)\n\n return avg_pool\n\n\n@register_attention(\"nystrom\", NystromSelfAttentionConfig)\nclass NystromAttention(Attention):\n # TODO: update defaults for use_razavi_pinverse and inv_iterations\n def __init__(\n self,\n dropout: float,\n num_heads: int,\n num_landmarks: int = 64,\n landmark_pooling: Optional[nn.Module] = None,\n causal: bool = False,\n use_razavi_pinverse: bool = True,\n pinverse_original_init: bool = False,\n inv_iterations: int = 6, # recommended default in paper was 6.\n v_skip_connection: Optional[nn.Module] = None,\n conv_kernel_size: Optional[int] = None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Nystrom attention mechanism, from Nystromformer_.\n ::\n\n \"A Nystrom-based Algorithm for Approximating Self-Attention.\"\n Xiong, Y., Zeng, Z., Chakraborty, R., Tan, M., Fung, G., Li, Y., Singh, V. (2021)\n\n Reference codebase: https://github.com/mlpen/Nystromformer\n\n .. 
_Nystromformer: https://arxiv.org/pdf/2102.03902.pdf\n\n \"\"\"\n super().__init__()\n # merged key padding mask and attention mask is not accepted\n self.requires_separate_masks = True\n self.num_landmarks = num_landmarks\n # TODO: should be able to not have to pass in num_heads\n self.num_heads = num_heads\n self.use_razavi_pinverse = use_razavi_pinverse\n self.pinverse_original_init = pinverse_original_init\n self.inv_iterations = inv_iterations\n self.attn_drop = nn.Dropout(dropout)\n self.skip_connection = v_skip_connection\n self.causal = causal\n\n if self.skip_connection is None and conv_kernel_size is not None:\n self.skip_connection = nn.Conv2d(\n in_channels=self.num_heads,\n out_channels=self.num_heads,\n kernel_size=(conv_kernel_size, 1),\n padding=(conv_kernel_size // 2, 0),\n bias=False,\n groups=self.num_heads,\n )\n\n if landmark_pooling is not None:\n self.landmark_pooling = landmark_pooling\n else:\n self.landmark_pooling = get_avg_pool(self.num_landmarks)\n\n # Optional lower triangular masks for causal attention\n self.causal_mask_1: Optional[torch.Tensor] = None\n self.causal_mask_2: Optional[torch.Tensor] = None\n self.causal_mask_3: Optional[torch.Tensor] = None\n\n # This attention does not support attention masks\n self.supports_attention_mask = False\n self.supports_key_padding_mask = True\n\n def forward(\n self,\n q: torch.Tensor,\n k: torch.Tensor,\n v: torch.Tensor,\n key_padding_mask: Optional[torch.Tensor] = None,\n *args,\n **kwargs,\n ):\n r\"\"\"\n key_padding_mask Only a key padding mask is accepted here. The size must be (batch size, sequence length) or\n (batch size * num_heads, 1, sequence length). If dimensions are not correct, the mask will\n be ignored. An additive mask is expected, meaning float values using \"-inf\" to mask values\n \"\"\"\n\n batched_dim = k.size(0)\n seq_len = k.size(-2)\n tt = {\"dtype\": q.dtype, \"device\": q.device}\n\n if key_padding_mask is not None:\n if key_padding_mask.dtype == torch.bool:\n logging.warning(\n \"Bool mask found, but an additive mask is expected. 
Converting but this is slow\"\n )\n key_padding_mask = bool_mask_to_additive(key_padding_mask)\n\n if key_padding_mask.ndim == 2:\n key_padding_mask = reshape_key_padding_mask(\n key_padding_mask, batched_dim\n )\n\n assert key_padding_mask.size() == (batched_dim, 1, seq_len), (\n f\"key_padding_mask has invalid dimensions {key_padding_mask.size()}.\"\n f\" Must have dimensions {batched_dim, 1, seq_len} or (batch_size, {seq_len}).\"\n )\n\n if self.num_landmarks >= seq_len:\n mask: Optional[torch.Tensor] = None\n\n if self.causal:\n mask = self._triu_mask(batched_dim, seq_len, seq_len, **tt)\n\n if key_padding_mask is not None:\n mask = key_padding_mask if mask is None else mask + key_padding_mask\n\n x = scaled_dot_product_attention(q=q, k=k, v=v, att_mask=mask)\n\n else:\n q_landmarks = self.landmark_pooling(q)\n k_landmarks = self.landmark_pooling(k)\n\n if self.causal and (\n self.causal_mask_1 is None\n or (batched_dim, seq_len, self.num_landmarks)\n != self.causal_mask_1.size()\n ):\n self.causal_mask_1 = self._triu_mask(\n batched_dim, seq_len, self.num_landmarks, **tt\n )\n self.causal_mask_2 = self._triu_mask(\n batched_dim, self.num_landmarks, self.num_landmarks, **tt\n )\n self.causal_mask_3 = self._triu_mask(\n batched_dim, self.num_landmarks, seq_len, **tt\n )\n\n mask_1: Optional[torch.Tensor] = self.causal_mask_1\n mask_2: Optional[torch.Tensor] = self.causal_mask_2\n mask_3: Optional[torch.Tensor] = self.causal_mask_3\n if key_padding_mask is not None:\n mask_1 = (\n key_padding_mask.transpose(-2, -1)\n if mask_1 is None\n else mask_1 + key_padding_mask.transpose(-2, -1)\n )\n mask_3 = (\n key_padding_mask if mask_3 is None else mask_3 + key_padding_mask\n )\n\n kernel_1 = scaled_query_key_softmax(q=q, k=k_landmarks, att_mask=mask_1)\n kernel_2 = scaled_query_key_softmax(\n q=q_landmarks, k=k_landmarks, att_mask=mask_2\n )\n kernel_3 = scaled_dot_product_attention(\n q=q_landmarks, k=k, v=v, att_mask=mask_3\n )\n\n kernel_2_inv = (\n iterative_pinv(\n kernel_2, self.inv_iterations, self.pinverse_original_init\n )\n if self.use_razavi_pinverse\n else torch.linalg.pinv(kernel_2)\n )\n\n x = torch.matmul(\n torch.matmul(\n kernel_1,\n kernel_2_inv,\n ),\n kernel_3,\n )\n\n if self.skip_connection:\n # Assumption here is that v is 3D.\n v_conv = self.skip_connection(\n v.reshape(-1, self.num_heads, v.size(-2), v.size(-1))\n )\n x += v_conv.reshape(-1, v_conv.size(-2), v_conv.size(-1))\n x = self.attn_drop(x)\n return x\n\n def _triu_mask(self, dim_1: int, dim_2: int, dim_3: int, **kwargs) -> torch.Tensor:\n device = kwargs[\"device\"]\n dtype = kwargs[\"dtype\"]\n\n return torch.triu(\n torch.ones(dim_2, dim_3, dtype=dtype, device=device) * float(\"-inf\"),\n diagonal=1,\n ).expand(\n dim_1, -1, -1\n ) # micro optim, save memory on the batch dimension\n", "path": "xformers/components/attention/nystrom.py"}]}
| 3,899 | 552 |
gh_patches_debug_9777
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-1397
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
simplelistadapter should accept objects inheriting from list or tuple
I would find it useful if it were possible to extend the list object that I pass to the SimpleListAdapter, but an exception is raised.
Reproduce:
``` python
from kivy.adapters.simplelistadapter import SimpleListAdapter
class ExtendedList(list):
pass
list_adapter = SimpleListAdapter(data=ExtendedList())
```
A solution:
In kivy/adapters/simplelistadapter.py
``` python
47 if type(kwargs['data']) not in (tuple, list):
48 raise Exception('list adapter: data must be a tuple or list')
```
May be replaced by:
``` python
if not isinstance(kwargs['data'], list) and not isinstance(kwargs['data'], tuple)
```
</issue>
<code>
[start of kivy/adapters/simplelistadapter.py]
1 '''
2 SimpleListAdapter
3 =================
4
5 .. versionadded:: 1.5
6
7 .. warning::
8
9 This code is still experimental, and its API is subject to change in a
10 future version.
11
12 The :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is used for
13 basic lists. For example, it can be used for displaying a list of read-only
14 strings that do not require user interaction.
15
16 '''
17
18 __all__ = ('SimpleListAdapter', )
19
20 from kivy.adapters.adapter import Adapter
21 from kivy.properties import ListProperty
22 from kivy.lang import Builder
23
24
25 class SimpleListAdapter(Adapter):
26 '''A :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is an
27 adapter around a Python list.
28
29 From :class:`~kivy.adapters.adapter.Adapter`, the
30 :class:`~kivy.adapters.simplelistadapter.ListAdapter` gets cls, template,
31 and args_converter properties.
32 '''
33
34 data = ListProperty([])
35 '''The data list property contains a list of objects (which can be strings)
36 that will be used directly if no args_converter function is provided. If
37 there is an args_converter, the data objects will be passed to it for
38 instantiating the item view class instances.
39
40 :data:`data` is a :class:`~kivy.properties.ListProperty` and
41 defaults to [].
42 '''
43
44 def __init__(self, **kwargs):
45 if 'data' not in kwargs:
46 raise Exception('list adapter: input must include data argument')
47 if type(kwargs['data']) not in (tuple, list):
48 raise Exception('list adapter: data must be a tuple or list')
49 super(SimpleListAdapter, self).__init__(**kwargs)
50
51 def get_count(self):
52 return len(self.data)
53
54 def get_data_item(self, index):
55 if index < 0 or index >= len(self.data):
56 return None
57 return self.data[index]
58
59 # Returns a view instance for an item.
60 def get_view(self, index):
61 item = self.get_data_item(index)
62
63 if item is None:
64 return None
65
66 item_args = self.args_converter(index, item)
67
68 if self.cls:
69 instance = self.cls(**item_args)
70 return instance
71 else:
72 return Builder.template(self.template, **item_args)
73
[end of kivy/adapters/simplelistadapter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kivy/adapters/simplelistadapter.py b/kivy/adapters/simplelistadapter.py
--- a/kivy/adapters/simplelistadapter.py
+++ b/kivy/adapters/simplelistadapter.py
@@ -44,7 +44,8 @@
def __init__(self, **kwargs):
if 'data' not in kwargs:
raise Exception('list adapter: input must include data argument')
- if type(kwargs['data']) not in (tuple, list):
+ if not isinstance(kwargs['data'], list) and \
+ not isinstance(kwargs['data'], tuple):
raise Exception('list adapter: data must be a tuple or list')
super(SimpleListAdapter, self).__init__(**kwargs)
|
{"golden_diff": "diff --git a/kivy/adapters/simplelistadapter.py b/kivy/adapters/simplelistadapter.py\n--- a/kivy/adapters/simplelistadapter.py\n+++ b/kivy/adapters/simplelistadapter.py\n@@ -44,7 +44,8 @@\n def __init__(self, **kwargs):\n if 'data' not in kwargs:\n raise Exception('list adapter: input must include data argument')\n- if type(kwargs['data']) not in (tuple, list):\n+ if not isinstance(kwargs['data'], list) and \\\n+ not isinstance(kwargs['data'], tuple):\n raise Exception('list adapter: data must be a tuple or list')\n super(SimpleListAdapter, self).__init__(**kwargs)\n", "issue": "simplelistadapter should accept objects inheriting from list or tuple\nI'll found it usefull if it was possible to extend the list object that I pass to the simplelistadapter, but an exception is raised.\n\nReproduce :\n\n``` python\nfrom kivy.adapters.simplelistadapter import SimpleListAdapter\nclass ExtendedList(list):\n pass\n\nlist_adapter = SimpleListAdapter(data=ExtendedList())\n```\n\nA solution :\nIn kivy/adapters/simplelistadapter.py\n\n``` python\n 47 if type(kwargs['data']) not in (tuple, list): \n 48 raise Exception('list adapter: data must be a tuple or list') \n```\n\nMay be replaced by:\n\n``` python\nif not isinstance(kwargs['data'], list) and not isinstance(kwargs['data'], tuple)\n```\n\n", "before_files": [{"content": "'''\nSimpleListAdapter\n=================\n\n.. versionadded:: 1.5\n\n.. warning::\n\n This code is still experimental, and its API is subject to change in a\n future version.\n\nThe :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is used for\nbasic lists. For example, it can be used for displaying a list of read-only\nstrings that do not require user interaction.\n\n'''\n\n__all__ = ('SimpleListAdapter', )\n\nfrom kivy.adapters.adapter import Adapter\nfrom kivy.properties import ListProperty\nfrom kivy.lang import Builder\n\n\nclass SimpleListAdapter(Adapter):\n '''A :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is an\n adapter around a Python list.\n\n From :class:`~kivy.adapters.adapter.Adapter`, the\n :class:`~kivy.adapters.simplelistadapter.ListAdapter` gets cls, template,\n and args_converter properties.\n '''\n\n data = ListProperty([])\n '''The data list property contains a list of objects (which can be strings)\n that will be used directly if no args_converter function is provided. If\n there is an args_converter, the data objects will be passed to it for\n instantiating the item view class instances.\n\n :data:`data` is a :class:`~kivy.properties.ListProperty` and\n defaults to [].\n '''\n\n def __init__(self, **kwargs):\n if 'data' not in kwargs:\n raise Exception('list adapter: input must include data argument')\n if type(kwargs['data']) not in (tuple, list):\n raise Exception('list adapter: data must be a tuple or list')\n super(SimpleListAdapter, self).__init__(**kwargs)\n\n def get_count(self):\n return len(self.data)\n\n def get_data_item(self, index):\n if index < 0 or index >= len(self.data):\n return None\n return self.data[index]\n\n # Returns a view instance for an item.\n def get_view(self, index):\n item = self.get_data_item(index)\n\n if item is None:\n return None\n\n item_args = self.args_converter(index, item)\n\n if self.cls:\n instance = self.cls(**item_args)\n return instance\n else:\n return Builder.template(self.template, **item_args)\n", "path": "kivy/adapters/simplelistadapter.py"}]}
| 1,333 | 154 |
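The behavioural difference between the rejected check and the proposed one is easy to show in isolation; `ExtendedList` is the same toy subclass used in the issue:

```python
class ExtendedList(list):
    pass


data = ExtendedList()

print(type(data) in (tuple, list))      # False: the exact-type check rejects subclasses
print(isinstance(data, (list, tuple)))  # True: accepts list/tuple and anything derived from them
```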
gh_patches_debug_26995
|
rasdani/github-patches
|
git_diff
|
WordPress__openverse-api-411
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Sentry to API
## Problem
<!-- Describe a problem solved by this feature; or delete the section entirely. -->
We don't have any visibility into the API service. Sentry would be a good and easy first step.
## Description
<!-- Describe the feature and how it solves the problem. -->
Let's add Sentry. Long term we have goals of adding other monitoring but Sentry is a good and easy first step.
## Additional context
<!-- Add any other context about the feature here; or delete the section entirely. -->
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in implementing this feature.
</issue>
<code>
[start of openverse_api/catalog/settings.py]
1 """
2 Django settings for catalog project.
3
4 Generated by 'django-admin startproject' using Django 2.0.5.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/2.0/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/2.0/ref/settings/
11 """
12
13 from pathlib import Path
14 from socket import gethostbyname, gethostname
15
16 from decouple import config
17
18
19 # Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)
20 BASE_DIR = Path(__file__).resolve().parent.parent
21
22 # Where to collect static files in production/development deployments
23 STATIC_ROOT = "/var/api_static_content/static"
24
25 # Logo uploads
26 MEDIA_ROOT = "/var/api_media/"
27 MEDIA_URL = "/media/"
28
29 # Quick-start development settings - unsuitable for production
30 # See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
31
32 # SECURITY WARNING: keep the secret key used in production secret!
33 SECRET_KEY = config("DJANGO_SECRET_KEY") # required
34
35 # SECURITY WARNING: don't run with debug turned on in production!
36 DEBUG = config("DJANGO_DEBUG_ENABLED", default=False, cast=bool)
37
38 ALLOWED_HOSTS = [
39 "api-dev.openverse.engineering",
40 "api.openverse.engineering",
41 gethostname(),
42 gethostbyname(gethostname()),
43 ]
44
45 if lb_url := config("LOAD_BALANCER_URL", default=""):
46 ALLOWED_HOSTS.append(lb_url)
47
48 if DEBUG:
49 ALLOWED_HOSTS += [
50 "localhost",
51 "127.0.0.1",
52 "0.0.0.0",
53 ]
54
55 # Domains that shortened links may point to
56 SHORT_URL_WHITELIST = {
57 "api-dev.openverse.engineering",
58 "api.openverse.engineering",
59 "localhost:8000",
60 }
61 SHORT_URL_PATH_WHITELIST = ["/v1/list", "/v1/images/"]
62
63 USE_S3 = config("USE_S3", default=False, cast=bool)
64
65 # Application definition
66
67 INSTALLED_APPS = [
68 "catalog",
69 "catalog.api",
70 "drf_yasg",
71 "django.contrib.admin",
72 "django.contrib.auth",
73 "django.contrib.contenttypes",
74 "django.contrib.sessions",
75 "django.contrib.messages",
76 "django.contrib.staticfiles",
77 "oauth2_provider",
78 "rest_framework",
79 "corsheaders",
80 "sslserver",
81 ]
82
83 if USE_S3:
84 DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
85 AWS_STORAGE_BUCKET_NAME = config("LOGOS_BUCKET", default="openverse_api-logos-prod")
86 AWS_S3_SIGNATURE_VERSION = "s3v4"
87 INSTALLED_APPS.append("storages")
88
89 MIDDLEWARE = [
90 "django.middleware.security.SecurityMiddleware",
91 "django.contrib.sessions.middleware.SessionMiddleware",
92 "corsheaders.middleware.CorsMiddleware",
93 "django.middleware.common.CommonMiddleware",
94 "django.middleware.csrf.CsrfViewMiddleware",
95 "django.contrib.auth.middleware.AuthenticationMiddleware",
96 "django.contrib.messages.middleware.MessageMiddleware",
97 "django.middleware.clickjacking.XFrameOptionsMiddleware",
98 "oauth2_provider.middleware.OAuth2TokenMiddleware",
99 ]
100
101 SWAGGER_SETTINGS = {"SECURITY_DEFINITIONS": {}}
102
103 OAUTH2_PROVIDER = {
104 "SCOPES": {
105 "read": "Read scope",
106 "write": "Write scope",
107 }
108 }
109
110 OAUTH2_PROVIDER_APPLICATION_MODEL = "api.ThrottledApplication"
111
112 REST_FRAMEWORK = {
113 "DEFAULT_AUTHENTICATION_CLASSES": (
114 "oauth2_provider.contrib.rest_framework.OAuth2Authentication",
115 ),
116 "DEFAULT_VERSIONING_CLASS": "rest_framework.versioning.URLPathVersioning",
117 "DEFAULT_RENDERER_CLASSES": (
118 "rest_framework.renderers.JSONRenderer",
119 "rest_framework.renderers.BrowsableAPIRenderer",
120 "rest_framework_xml.renderers.XMLRenderer",
121 ),
122 "DEFAULT_THROTTLE_CLASSES": (
123 "catalog.api.utils.throttle.BurstRateThrottle",
124 "catalog.api.utils.throttle.SustainedRateThrottle",
125 "catalog.api.utils.throttle.OAuth2IdThrottleSustainedRate",
126 "catalog.api.utils.throttle.OAuth2IdThrottleBurstRate",
127 "catalog.api.utils.throttle.EnhancedOAuth2IdThrottleSustainedRate",
128 "catalog.api.utils.throttle.EnhancedOAuth2IdThrottleBurstRate",
129 ),
130 "DEFAULT_THROTTLE_RATES": {
131 "anon_burst": "60/min",
132 "anon_sustained": "5000/day",
133 "oauth2_client_credentials_sustained": "10000/day",
134 "oauth2_client_credentials_burst": "100/min",
135 "enhanced_oauth2_client_credentials_sustained": "20000/day",
136 "enhanced_oauth2_client_credentials_burst": "200/min",
137 },
138 "EXCEPTION_HANDLER": "catalog.api.utils.exceptions.exception_handler",
139 }
140
141 if config("DISABLE_GLOBAL_THROTTLING", default=True, cast=bool):
142 del REST_FRAMEWORK["DEFAULT_THROTTLE_RATES"]
143 del REST_FRAMEWORK["DEFAULT_THROTTLE_CLASSES"]
144
145 REDIS_HOST = config("REDIS_HOST", default="localhost")
146 REDIS_PORT = config("REDIS_PORT", default=6379, cast=int)
147 REDIS_PASSWORD = config("REDIS_PASSWORD", default="")
148 CACHES = {
149 # Site cache writes to 'default'
150 "default": {
151 "BACKEND": "django_redis.cache.RedisCache",
152 "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/0",
153 "OPTIONS": {
154 "CLIENT_CLASS": "django_redis.client.DefaultClient",
155 },
156 },
157 # For rapidly changing stats that we don't want to hammer the database with
158 "traffic_stats": {
159 "BACKEND": "django_redis.cache.RedisCache",
160 "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/1",
161 "OPTIONS": {
162 "CLIENT_CLASS": "django_redis.client.DefaultClient",
163 },
164 },
165 # For ensuring consistency among multiple Django workers and servers.
166 # Used by Redlock.
167 "locks": {
168 "BACKEND": "django_redis.cache.RedisCache",
169 "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/2",
170 "OPTIONS": {
171 "CLIENT_CLASS": "django_redis.client.DefaultClient",
172 },
173 },
174 }
175
176 # Produce CC-hosted thumbnails dynamically through a proxy.
177 THUMBNAIL_PROXY_URL = config("THUMBNAIL_PROXY_URL", default="http://localhost:8222")
178
179 THUMBNAIL_WIDTH_PX = 600
180
181 AUTHENTICATION_BACKENDS = (
182 "oauth2_provider.backends.OAuth2Backend",
183 "django.contrib.auth.backends.ModelBackend",
184 )
185
186 ROOT_URLCONF = "catalog.urls"
187
188 TEMPLATES = [
189 {
190 "BACKEND": "django.template.backends.django.DjangoTemplates",
191 "DIRS": [BASE_DIR.joinpath("catalog", "templates")],
192 "APP_DIRS": True,
193 "OPTIONS": {
194 "context_processors": [
195 "django.template.context_processors.debug",
196 "django.template.context_processors.request",
197 "django.contrib.auth.context_processors.auth",
198 "django.contrib.messages.context_processors.messages",
199 ],
200 },
201 },
202 ]
203
204 WSGI_APPLICATION = "catalog.wsgi.application"
205
206 # Database
207 # https://docs.djangoproject.com/en/2.0/ref/settings/#databases
208
209 DATABASES = {
210 "default": {
211 "ENGINE": "django.db.backends.postgresql",
212 "HOST": config("DJANGO_DATABASE_HOST", default="localhost"),
213 "PORT": config("DJANGO_DATABASE_PORT", default=5432, cast=int),
214 "USER": config("DJANGO_DATABASE_USER", default="deploy"),
215 "PASSWORD": config("DJANGO_DATABASE_PASSWORD", default="deploy"),
216 "NAME": config("DJANGO_DATABASE_NAME", default="openledger"),
217 },
218 "upstream": {
219 "ENGINE": "django.db.backends.postgresql",
220 "HOST": config("UPSTREAM_DATABASE_HOST", default="localhost"),
221 "PORT": config("UPSTREAM_DATABASE_PORT", default=5433, cast=int),
222 "USER": config("UPSTREAM_DATABASE_USER", default="deploy"),
223 "PASSWORD": config("UPSTREAM_DATABASE_PASSWORD", default="deploy"),
224 "NAME": config("UPSTREAM_DATABASE_NAME", default="openledger"),
225 },
226 }
227
228 # Password validation
229 # https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
230
231 AUTH_PASSWORD_VALIDATORS = [
232 {
233 "NAME": "django.contrib.auth.password_validation"
234 ".UserAttributeSimilarityValidator",
235 },
236 {
237 "NAME": "django.contrib.auth.password_validation" ".MinimumLengthValidator",
238 },
239 {
240 "NAME": "django.contrib.auth.password_validation" ".CommonPasswordValidator",
241 },
242 {
243 "NAME": "django.contrib.auth.password_validation" ".NumericPasswordValidator",
244 },
245 ]
246
247 LOGGING = {
248 "version": 1,
249 "disable_existing_loggers": False,
250 "handlers": {
251 "console": {
252 "level": "INFO",
253 "class": "logging.StreamHandler",
254 },
255 },
256 "loggers": {
257 "django": {
258 "handlers": ["console"],
259 "level": "INFO",
260 "propagate": True,
261 },
262 # root logger
263 "": {
264 "level": "INFO",
265 "handlers": ["console"],
266 },
267 },
268 }
269
270 # Internationalization
271 # https://docs.djangoproject.com/en/2.0/topics/i18n/
272
273 LANGUAGE_CODE = "en-us"
274
275 TIME_ZONE = "UTC"
276
277 USE_I18N = True
278
279 USE_L10N = True
280
281 USE_TZ = True
282
283 # Static files (CSS, JavaScript, Images)
284 # https://docs.djangoproject.com/en/2.0/howto/static-files/
285
286 STATIC_URL = "/static/"
287
288 # Allow anybody to access the API from any domain
289 CORS_ORIGIN_ALLOW_ALL = True
290
291 # The version of the API. We follow the semantic version specification.
292 API_VERSION = config("SEMANTIC_VERSION", default="Version not specified")
293
294 # The contact email of the Openverse team
295 CONTACT_EMAIL = config("CONTACT_EMAIL", default="[email protected]")
296
297 WATERMARK_ENABLED = config("WATERMARK_ENABLED", default=False, cast=bool)
298
299 ELASTICSEARCH_URL = config("ELASTICSEARCH_URL", default="localhost")
300 ELASTICSEARCH_PORT = config("ELASTICSEARCH_PORT", default=9200, cast=int)
301 ELASTICSEARCH_AWS_REGION = config("ELASTICSEARCH_AWS_REGION", default="us-east-1")
302
303 # Additional settings for dev/prod environments
304 AWS_ACCESS_KEY_ID = config("AWS_ACCESS_KEY_ID", default="")
305 AWS_SECRET_ACCESS_KEY = config("AWS_SECRET_ACCESS_KEY", default="")
306
307 EMAIL_SENDER = config("EMAIL_SENDER", default="")
308 EMAIL_HOST = config("EMAIL_HOST", default="")
309 EMAIL_PORT = config("EMAIL_PORT", default=25, cast=int)
310 EMAIL_HOST_USER = config("EMAIL_HOST_USER", default="")
311 EMAIL_HOST_PASSWORD = config("EMAIL_HOST_PASSWORD", default="")
312 EMAIL_SUBJECT_PREFIX = "[noreply]"
313 EMAIL_USE_TLS = True
314
315 if EMAIL_HOST_USER or EMAIL_HOST_PASSWORD:
316 EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
317 else:
318 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
319
320 # Log full Elasticsearch response
321 VERBOSE_ES_RESPONSE = config("DEBUG_SCORES", default=False, cast=bool)
322
323 # Whether to boost results by authority and popularity
324 USE_RANK_FEATURES = config("USE_RANK_FEATURES", default=True, cast=bool)
325
[end of openverse_api/catalog/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openverse_api/catalog/settings.py b/openverse_api/catalog/settings.py
--- a/openverse_api/catalog/settings.py
+++ b/openverse_api/catalog/settings.py
@@ -13,7 +13,9 @@
from pathlib import Path
from socket import gethostbyname, gethostname
+import sentry_sdk
from decouple import config
+from sentry_sdk.integrations.django import DjangoIntegration
# Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)
@@ -35,6 +37,8 @@
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config("DJANGO_DEBUG_ENABLED", default=False, cast=bool)
+PYTHON_ENV = config("PYTHON_ENV", default="production")
+
ALLOWED_HOSTS = [
"api-dev.openverse.engineering",
"api.openverse.engineering",
@@ -322,3 +326,18 @@
# Whether to boost results by authority and popularity
USE_RANK_FEATURES = config("USE_RANK_FEATURES", default=True, cast=bool)
+
+SENTRY_DSN = config(
+ "SENTRY_DSN",
+ default="https://[email protected]/6107216",
+)
+SENTRY_SAMPLE_RATE = config("SENTRY_SAMPLE_RATE", default=1.0, cast=float)
+
+if not DEBUG:
+ sentry_sdk.init(
+ dsn=SENTRY_DSN,
+ integrations=[DjangoIntegration()],
+ traces_sample_rate=SENTRY_SAMPLE_RATE,
+ send_default_pii=False,
+ environment=PYTHON_ENV,
+ )
|
{"golden_diff": "diff --git a/openverse_api/catalog/settings.py b/openverse_api/catalog/settings.py\n--- a/openverse_api/catalog/settings.py\n+++ b/openverse_api/catalog/settings.py\n@@ -13,7 +13,9 @@\n from pathlib import Path\n from socket import gethostbyname, gethostname\n \n+import sentry_sdk\n from decouple import config\n+from sentry_sdk.integrations.django import DjangoIntegration\n \n \n # Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)\n@@ -35,6 +37,8 @@\n # SECURITY WARNING: don't run with debug turned on in production!\n DEBUG = config(\"DJANGO_DEBUG_ENABLED\", default=False, cast=bool)\n \n+PYTHON_ENV = config(\"PYTHON_ENV\", default=\"production\")\n+\n ALLOWED_HOSTS = [\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n@@ -322,3 +326,18 @@\n \n # Whether to boost results by authority and popularity\n USE_RANK_FEATURES = config(\"USE_RANK_FEATURES\", default=True, cast=bool)\n+\n+SENTRY_DSN = config(\n+ \"SENTRY_DSN\",\n+ default=\"https://[email protected]/6107216\",\n+)\n+SENTRY_SAMPLE_RATE = config(\"SENTRY_SAMPLE_RATE\", default=1.0, cast=float)\n+\n+if not DEBUG:\n+ sentry_sdk.init(\n+ dsn=SENTRY_DSN,\n+ integrations=[DjangoIntegration()],\n+ traces_sample_rate=SENTRY_SAMPLE_RATE,\n+ send_default_pii=False,\n+ environment=PYTHON_ENV,\n+ )\n", "issue": "Add Sentry to API\n## Problem\r\n<!-- Describe a problem solved by this feature; or delete the section entirely. -->\r\nWe don't have any visibility into the API service. Sentry would be a good and easy first step.\r\n\r\n## Description\r\n<!-- Describe the feature and how it solves the problem. -->\r\nLet's add Sentry. Long term we have goals of adding other monitoring but Sentry is a good and easy first step.\r\n\r\n## Additional context\r\n<!-- Add any other context about the feature here; or delete the section entirely. -->\r\n\r\n## Implementation\r\n<!-- Replace the [ ] with [x] to check the box. 
-->\r\n- [x] \ud83d\ude4b I would be interested in implementing this feature.\r\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for catalog project.\n\nGenerated by 'django-admin startproject' using Django 2.0.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/2.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/2.0/ref/settings/\n\"\"\"\n\nfrom pathlib import Path\nfrom socket import gethostbyname, gethostname\n\nfrom decouple import config\n\n\n# Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Where to collect static files in production/development deployments\nSTATIC_ROOT = \"/var/api_static_content/static\"\n\n# Logo uploads\nMEDIA_ROOT = \"/var/api_media/\"\nMEDIA_URL = \"/media/\"\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = config(\"DJANGO_SECRET_KEY\") # required\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = config(\"DJANGO_DEBUG_ENABLED\", default=False, cast=bool)\n\nALLOWED_HOSTS = [\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n gethostname(),\n gethostbyname(gethostname()),\n]\n\nif lb_url := config(\"LOAD_BALANCER_URL\", default=\"\"):\n ALLOWED_HOSTS.append(lb_url)\n\nif DEBUG:\n ALLOWED_HOSTS += [\n \"localhost\",\n \"127.0.0.1\",\n \"0.0.0.0\",\n ]\n\n# Domains that shortened links may point to\nSHORT_URL_WHITELIST = {\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n \"localhost:8000\",\n}\nSHORT_URL_PATH_WHITELIST = [\"/v1/list\", \"/v1/images/\"]\n\nUSE_S3 = config(\"USE_S3\", default=False, cast=bool)\n\n# Application definition\n\nINSTALLED_APPS = [\n \"catalog\",\n \"catalog.api\",\n \"drf_yasg\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"oauth2_provider\",\n \"rest_framework\",\n \"corsheaders\",\n \"sslserver\",\n]\n\nif USE_S3:\n DEFAULT_FILE_STORAGE = \"storages.backends.s3boto3.S3Boto3Storage\"\n AWS_STORAGE_BUCKET_NAME = config(\"LOGOS_BUCKET\", default=\"openverse_api-logos-prod\")\n AWS_S3_SIGNATURE_VERSION = \"s3v4\"\n INSTALLED_APPS.append(\"storages\")\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"oauth2_provider.middleware.OAuth2TokenMiddleware\",\n]\n\nSWAGGER_SETTINGS = {\"SECURITY_DEFINITIONS\": {}}\n\nOAUTH2_PROVIDER = {\n \"SCOPES\": {\n \"read\": \"Read scope\",\n \"write\": \"Write scope\",\n }\n}\n\nOAUTH2_PROVIDER_APPLICATION_MODEL = \"api.ThrottledApplication\"\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n \"DEFAULT_RENDERER_CLASSES\": (\n \"rest_framework.renderers.JSONRenderer\",\n 
\"rest_framework.renderers.BrowsableAPIRenderer\",\n \"rest_framework_xml.renderers.XMLRenderer\",\n ),\n \"DEFAULT_THROTTLE_CLASSES\": (\n \"catalog.api.utils.throttle.BurstRateThrottle\",\n \"catalog.api.utils.throttle.SustainedRateThrottle\",\n \"catalog.api.utils.throttle.OAuth2IdThrottleSustainedRate\",\n \"catalog.api.utils.throttle.OAuth2IdThrottleBurstRate\",\n \"catalog.api.utils.throttle.EnhancedOAuth2IdThrottleSustainedRate\",\n \"catalog.api.utils.throttle.EnhancedOAuth2IdThrottleBurstRate\",\n ),\n \"DEFAULT_THROTTLE_RATES\": {\n \"anon_burst\": \"60/min\",\n \"anon_sustained\": \"5000/day\",\n \"oauth2_client_credentials_sustained\": \"10000/day\",\n \"oauth2_client_credentials_burst\": \"100/min\",\n \"enhanced_oauth2_client_credentials_sustained\": \"20000/day\",\n \"enhanced_oauth2_client_credentials_burst\": \"200/min\",\n },\n \"EXCEPTION_HANDLER\": \"catalog.api.utils.exceptions.exception_handler\",\n}\n\nif config(\"DISABLE_GLOBAL_THROTTLING\", default=True, cast=bool):\n del REST_FRAMEWORK[\"DEFAULT_THROTTLE_RATES\"]\n del REST_FRAMEWORK[\"DEFAULT_THROTTLE_CLASSES\"]\n\nREDIS_HOST = config(\"REDIS_HOST\", default=\"localhost\")\nREDIS_PORT = config(\"REDIS_PORT\", default=6379, cast=int)\nREDIS_PASSWORD = config(\"REDIS_PASSWORD\", default=\"\")\nCACHES = {\n # Site cache writes to 'default'\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/0\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n # For rapidly changing stats that we don't want to hammer the database with\n \"traffic_stats\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/1\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n # For ensuring consistency among multiple Django workers and servers.\n # Used by Redlock.\n \"locks\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/2\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n}\n\n# Produce CC-hosted thumbnails dynamically through a proxy.\nTHUMBNAIL_PROXY_URL = config(\"THUMBNAIL_PROXY_URL\", default=\"http://localhost:8222\")\n\nTHUMBNAIL_WIDTH_PX = 600\n\nAUTHENTICATION_BACKENDS = (\n \"oauth2_provider.backends.OAuth2Backend\",\n \"django.contrib.auth.backends.ModelBackend\",\n)\n\nROOT_URLCONF = \"catalog.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [BASE_DIR.joinpath(\"catalog\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"catalog.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/2.0/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"HOST\": config(\"DJANGO_DATABASE_HOST\", default=\"localhost\"),\n \"PORT\": config(\"DJANGO_DATABASE_PORT\", default=5432, cast=int),\n \"USER\": config(\"DJANGO_DATABASE_USER\", default=\"deploy\"),\n \"PASSWORD\": config(\"DJANGO_DATABASE_PASSWORD\", default=\"deploy\"),\n \"NAME\": config(\"DJANGO_DATABASE_NAME\", default=\"openledger\"),\n },\n \"upstream\": {\n \"ENGINE\": 
\"django.db.backends.postgresql\",\n \"HOST\": config(\"UPSTREAM_DATABASE_HOST\", default=\"localhost\"),\n \"PORT\": config(\"UPSTREAM_DATABASE_PORT\", default=5433, cast=int),\n \"USER\": config(\"UPSTREAM_DATABASE_USER\", default=\"deploy\"),\n \"PASSWORD\": config(\"UPSTREAM_DATABASE_PASSWORD\", default=\"deploy\"),\n \"NAME\": config(\"UPSTREAM_DATABASE_NAME\", default=\"openledger\"),\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation\"\n \".UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".NumericPasswordValidator\",\n },\n]\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n },\n },\n \"loggers\": {\n \"django\": {\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n \"propagate\": True,\n },\n # root logger\n \"\": {\n \"level\": \"INFO\",\n \"handlers\": [\"console\"],\n },\n },\n}\n\n# Internationalization\n# https://docs.djangoproject.com/en/2.0/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/2.0/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Allow anybody to access the API from any domain\nCORS_ORIGIN_ALLOW_ALL = True\n\n# The version of the API. We follow the semantic version specification.\nAPI_VERSION = config(\"SEMANTIC_VERSION\", default=\"Version not specified\")\n\n# The contact email of the Openverse team\nCONTACT_EMAIL = config(\"CONTACT_EMAIL\", default=\"[email protected]\")\n\nWATERMARK_ENABLED = config(\"WATERMARK_ENABLED\", default=False, cast=bool)\n\nELASTICSEARCH_URL = config(\"ELASTICSEARCH_URL\", default=\"localhost\")\nELASTICSEARCH_PORT = config(\"ELASTICSEARCH_PORT\", default=9200, cast=int)\nELASTICSEARCH_AWS_REGION = config(\"ELASTICSEARCH_AWS_REGION\", default=\"us-east-1\")\n\n# Additional settings for dev/prod environments\nAWS_ACCESS_KEY_ID = config(\"AWS_ACCESS_KEY_ID\", default=\"\")\nAWS_SECRET_ACCESS_KEY = config(\"AWS_SECRET_ACCESS_KEY\", default=\"\")\n\nEMAIL_SENDER = config(\"EMAIL_SENDER\", default=\"\")\nEMAIL_HOST = config(\"EMAIL_HOST\", default=\"\")\nEMAIL_PORT = config(\"EMAIL_PORT\", default=25, cast=int)\nEMAIL_HOST_USER = config(\"EMAIL_HOST_USER\", default=\"\")\nEMAIL_HOST_PASSWORD = config(\"EMAIL_HOST_PASSWORD\", default=\"\")\nEMAIL_SUBJECT_PREFIX = \"[noreply]\"\nEMAIL_USE_TLS = True\n\nif EMAIL_HOST_USER or EMAIL_HOST_PASSWORD:\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\nelse:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\n# Log full Elasticsearch response\nVERBOSE_ES_RESPONSE = config(\"DEBUG_SCORES\", default=False, cast=bool)\n\n# Whether to boost results by authority and popularity\nUSE_RANK_FEATURES = config(\"USE_RANK_FEATURES\", default=True, cast=bool)\n", "path": "openverse_api/catalog/settings.py"}]}
| 4,051 | 392 |
gh_patches_debug_24859
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-16242
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable translations for hotspots subsystem
There are unused translations in the hotspots subsystem that could be enabled, since finished translations are already available. At the moment there is a mix of English and the configured user language.
Affected file: zerver/lib/hotspots.py
Example (mixed English/German):

</issue>
<code>
[start of zerver/lib/hotspots.py]
1 # See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html
2 # for documentation on this subsystem.
3 from typing import Dict, List
4
5 from django.conf import settings
6 from django.utils.translation import ugettext as _
7
8 from zerver.models import UserHotspot, UserProfile
9
10 ALL_HOTSPOTS: Dict[str, Dict[str, str]] = {
11 'intro_reply': {
12 'title': _('Reply to a message'),
13 'description': _('Click anywhere on a message to reply.'),
14 },
15 'intro_streams': {
16 'title': _('Catch up on a stream'),
17 'description': _('Messages sent to a stream are seen by everyone subscribed '
18 'to that stream. Try clicking on one of the stream links below.'),
19 },
20 'intro_topics': {
21 'title': _('Topics'),
22 'description': _('Every message has a topic. Topics keep conversations '
23 'easy to follow, and make it easy to reply to conversations that start '
24 'while you are offline.'),
25 },
26 'intro_gear': {
27 'title': _('Settings'),
28 'description': _('Go to Settings to configure your '
29 'notifications and display settings.'),
30 },
31 'intro_compose': {
32 'title': _('Compose'),
33 'description': _('Click here to start a new conversation. Pick a topic '
34 '(2-3 words is best), and give it a go!'),
35 },
36 }
37
38 def get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:
39 # For manual testing, it can be convenient to set
40 # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to
41 # make it easy to click on all of the hotspots. Note that
42 # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link
43 # above) for details.
44 if settings.ALWAYS_SEND_ALL_HOTSPOTS:
45 return [{
46 'name': hotspot,
47 'title': ALL_HOTSPOTS[hotspot]['title'],
48 'description': ALL_HOTSPOTS[hotspot]['description'],
49 'delay': 0,
50 } for hotspot in ALL_HOTSPOTS]
51
52 if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:
53 return []
54
55 seen_hotspots = frozenset(UserHotspot.objects.filter(user=user).values_list('hotspot', flat=True))
56 for hotspot in ['intro_reply', 'intro_streams', 'intro_topics', 'intro_gear', 'intro_compose']:
57 if hotspot not in seen_hotspots:
58 return [{
59 'name': hotspot,
60 'title': ALL_HOTSPOTS[hotspot]['title'],
61 'description': ALL_HOTSPOTS[hotspot]['description'],
62 'delay': 0.5,
63 }]
64
65 user.tutorial_status = UserProfile.TUTORIAL_FINISHED
66 user.save(update_fields=['tutorial_status'])
67 return []
68
69 def copy_hotpots(source_profile: UserProfile, target_profile: UserProfile) -> None:
70 for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):
71 UserHotspot.objects.create(user=target_profile, hotspot=userhotspot.hotspot,
72 timestamp=userhotspot.timestamp)
73
74 target_profile.tutorial_status = source_profile.tutorial_status
75 target_profile.onboarding_steps = source_profile.onboarding_steps
76 target_profile.save(update_fields=['tutorial_status', 'onboarding_steps'])
77
[end of zerver/lib/hotspots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py
--- a/zerver/lib/hotspots.py
+++ b/zerver/lib/hotspots.py
@@ -3,7 +3,7 @@
from typing import Dict, List
from django.conf import settings
-from django.utils.translation import ugettext as _
+from django.utils.translation import ugettext_lazy as _
from zerver.models import UserHotspot, UserProfile
@@ -44,8 +44,8 @@
if settings.ALWAYS_SEND_ALL_HOTSPOTS:
return [{
'name': hotspot,
- 'title': ALL_HOTSPOTS[hotspot]['title'],
- 'description': ALL_HOTSPOTS[hotspot]['description'],
+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),
+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),
'delay': 0,
} for hotspot in ALL_HOTSPOTS]
@@ -57,8 +57,8 @@
if hotspot not in seen_hotspots:
return [{
'name': hotspot,
- 'title': ALL_HOTSPOTS[hotspot]['title'],
- 'description': ALL_HOTSPOTS[hotspot]['description'],
+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),
+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),
'delay': 0.5,
}]
|
{"golden_diff": "diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py\n--- a/zerver/lib/hotspots.py\n+++ b/zerver/lib/hotspots.py\n@@ -3,7 +3,7 @@\n from typing import Dict, List\n \n from django.conf import settings\n-from django.utils.translation import ugettext as _\n+from django.utils.translation import ugettext_lazy as _\n \n from zerver.models import UserHotspot, UserProfile\n \n@@ -44,8 +44,8 @@\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [{\n 'name': hotspot,\n- 'title': ALL_HOTSPOTS[hotspot]['title'],\n- 'description': ALL_HOTSPOTS[hotspot]['description'],\n+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),\n+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),\n 'delay': 0,\n } for hotspot in ALL_HOTSPOTS]\n \n@@ -57,8 +57,8 @@\n if hotspot not in seen_hotspots:\n return [{\n 'name': hotspot,\n- 'title': ALL_HOTSPOTS[hotspot]['title'],\n- 'description': ALL_HOTSPOTS[hotspot]['description'],\n+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),\n+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),\n 'delay': 0.5,\n }]\n", "issue": "Enable translations for hotspots subsystem\nThere are unused translations at the hotspots subsystem, which could be enabled due to finished and available translations. At the moment there is a mix of English and the configured user language.\r\n\r\nAffected file: zerver/lib/hotspots.py\r\n\r\nExample (mixed English/German):\r\n\r\n\n", "before_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html\n# for documentation on this subsystem.\nfrom typing import Dict, List\n\nfrom django.conf import settings\nfrom django.utils.translation import ugettext as _\n\nfrom zerver.models import UserHotspot, UserProfile\n\nALL_HOTSPOTS: Dict[str, Dict[str, str]] = {\n 'intro_reply': {\n 'title': _('Reply to a message'),\n 'description': _('Click anywhere on a message to reply.'),\n },\n 'intro_streams': {\n 'title': _('Catch up on a stream'),\n 'description': _('Messages sent to a stream are seen by everyone subscribed '\n 'to that stream. Try clicking on one of the stream links below.'),\n },\n 'intro_topics': {\n 'title': _('Topics'),\n 'description': _('Every message has a topic. Topics keep conversations '\n 'easy to follow, and make it easy to reply to conversations that start '\n 'while you are offline.'),\n },\n 'intro_gear': {\n 'title': _('Settings'),\n 'description': _('Go to Settings to configure your '\n 'notifications and display settings.'),\n },\n 'intro_compose': {\n 'title': _('Compose'),\n 'description': _('Click here to start a new conversation. Pick a topic '\n '(2-3 words is best), and give it a go!'),\n },\n}\n\ndef get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:\n # For manual testing, it can be convenient to set\n # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to\n # make it easy to click on all of the hotspots. 
Note that\n # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link\n # above) for details.\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [{\n 'name': hotspot,\n 'title': ALL_HOTSPOTS[hotspot]['title'],\n 'description': ALL_HOTSPOTS[hotspot]['description'],\n 'delay': 0,\n } for hotspot in ALL_HOTSPOTS]\n\n if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:\n return []\n\n seen_hotspots = frozenset(UserHotspot.objects.filter(user=user).values_list('hotspot', flat=True))\n for hotspot in ['intro_reply', 'intro_streams', 'intro_topics', 'intro_gear', 'intro_compose']:\n if hotspot not in seen_hotspots:\n return [{\n 'name': hotspot,\n 'title': ALL_HOTSPOTS[hotspot]['title'],\n 'description': ALL_HOTSPOTS[hotspot]['description'],\n 'delay': 0.5,\n }]\n\n user.tutorial_status = UserProfile.TUTORIAL_FINISHED\n user.save(update_fields=['tutorial_status'])\n return []\n\ndef copy_hotpots(source_profile: UserProfile, target_profile: UserProfile) -> None:\n for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):\n UserHotspot.objects.create(user=target_profile, hotspot=userhotspot.hotspot,\n timestamp=userhotspot.timestamp)\n\n target_profile.tutorial_status = source_profile.tutorial_status\n target_profile.onboarding_steps = source_profile.onboarding_steps\n target_profile.save(update_fields=['tutorial_status', 'onboarding_steps'])\n", "path": "zerver/lib/hotspots.py"}]}
| 1,525 | 314 |
gh_patches_debug_13021
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-173
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update PyPI description
At the moment I wouldn't be tempted to try it if this page were the first thing I saw.
https://pypi.python.org/pypi/mkdocs
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 name = 'mkdocs'
12 package = 'mkdocs'
13 description = 'In progress.'
14 url = 'http://www.mkdocs.org'
15 author = 'Tom Christie'
16 author_email = '[email protected]'
17 license = 'BSD'
18 install_requires = [
19 'Jinja2>=2.7.1',
20 'Markdown>=2.3.1,<2.5',
21 'PyYAML>=3.10',
22 'watchdog>=0.7.0',
23 'ghp-import>=0.4.1'
24 ]
25
26 long_description = """Work in progress."""
27
28
29 def get_version(package):
30 """
31 Return package version as listed in `__version__` in `init.py`.
32 """
33 init_py = open(os.path.join(package, '__init__.py')).read()
34 return re.search("^__version__ = ['\"]([^'\"]+)['\"]", init_py, re.MULTILINE).group(1)
35
36
37 def get_packages(package):
38 """
39 Return root package and all sub-packages.
40 """
41 return [dirpath
42 for dirpath, dirnames, filenames in os.walk(package)
43 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
44
45
46 def get_package_data(package):
47 """
48 Return all files under the root package, that are not in a
49 package themselves.
50 """
51 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
52 for dirpath, dirnames, filenames in os.walk(package)
53 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
54
55 filepaths = []
56 for base, filenames in walk:
57 filepaths.extend([os.path.join(base, filename)
58 for filename in filenames])
59 return {package: filepaths}
60
61
62 if sys.argv[-1] == 'publish':
63 os.system("python setup.py sdist upload")
64 args = {'version': get_version(package)}
65 print("You probably want to also tag the version now:")
66 print(" git tag -a %(version)s -m 'version %(version)s'" % args)
67 print(" git push --tags")
68 sys.exit()
69
70
71 setup(
72 name=name,
73 version=get_version(package),
74 url=url,
75 license=license,
76 description=description,
77 long_description=long_description,
78 author=author,
79 author_email=author_email,
80 packages=get_packages(package),
81 package_data=get_package_data(package),
82 install_requires=install_requires,
83 entry_points={
84 'console_scripts': [
85 'mkdocs = mkdocs.main:run_main',
86 ],
87 },
88 classifiers=[
89 'Development Status :: 5 - Production/Stable',
90 'Environment :: Console',
91 'Environment :: Web Environment',
92 'Intended Audience :: Developers',
93 'License :: OSI Approved :: BSD License',
94 'Operating System :: OS Independent',
95 'Programming Language :: Python',
96 'Programming Language :: Python :: 2',
97 'Programming Language :: Python :: 2.6',
98 'Programming Language :: Python :: 2.7',
99 'Programming Language :: Python :: 3',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Topic :: Documentation',
103 'Topic :: Text Processing',
104 ]
105 )
106
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
name = 'mkdocs'
package = 'mkdocs'
-description = 'In progress.'
+description = 'Project documentation with Markdown.'
url = 'http://www.mkdocs.org'
author = 'Tom Christie'
author_email = '[email protected]'
@@ -23,7 +23,12 @@
'ghp-import>=0.4.1'
]
-long_description = """Work in progress."""
+long_description = (
+ "MkDocs is a fast, simple and downright gorgeous static site generator "
+ "that's geared towards building project documentation. Documentation "
+ "source files are written in Markdown, and configured with a single YAML "
+ "configuration file."
+)
def get_version(package):
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n \n name = 'mkdocs'\n package = 'mkdocs'\n-description = 'In progress.'\n+description = 'Project documentation with Markdown.'\n url = 'http://www.mkdocs.org'\n author = 'Tom Christie'\n author_email = '[email protected]'\n@@ -23,7 +23,12 @@\n 'ghp-import>=0.4.1'\n ]\n \n-long_description = \"\"\"Work in progress.\"\"\"\n+long_description = (\n+ \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n+ \"that's geared towards building project documentation. Documentation \"\n+ \"source files are written in Markdown, and configured with a single YAML \"\n+ \"configuration file.\"\n+)\n \n \n def get_version(package):\n", "issue": "Update PyPI description\nAt the moment I wouldn't be tempted if I first seen this page.\n\nhttps://pypi.python.org/pypi/mkdocs\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n 'ghp-import>=0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python 
:: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}]}
| 1,507 | 189 |
gh_patches_debug_3389
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-5011
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Please make the rqrequeue service quieter
## Description
The rqrequeue service feels compelled to report that it has nothing to do, resulting in an endless stream of "No interrupted jobs found in started job registry." messages. This is not helpful during normal operations, and annoying during development.
</issue>
<code>
[start of securedrop/worker.py]
1 import logging
2 import os
3 from typing import Optional, List
4
5 from redis import Redis
6 from rq.queue import Queue
7 from rq.worker import Worker, WorkerStatus
8 from rq.exceptions import InvalidJobOperation, NoSuchJobError
9 from rq.registry import StartedJobRegistry
10
11 from sdconfig import config
12
13
14 def create_queue(name=None, timeout=3600):
15 # type: (str, int) -> Queue
16 """
17 Create an rq ``Queue`` named ``name`` with default timeout ``timeout``.
18
19 If ``name`` is omitted, ``config.RQ_WORKER_NAME`` is used.
20 """
21 if name is None:
22 name = config.RQ_WORKER_NAME
23 q = Queue(name=name, connection=Redis(), default_timeout=timeout)
24 return q
25
26
27 def rq_workers(queue=None):
28 # type: (Queue) -> List[Worker]
29 """
30 Returns the list of current rq ``Worker``s.
31 """
32
33 return Worker.all(connection=Redis(), queue=queue)
34
35
36 def worker_for_job(job_id):
37 # type: (str) -> Optional[Worker]
38 """
39 If the job is being run, return its ``Worker``.
40 """
41 for worker in rq_workers():
42 # If the worker process no longer exists, skip it. From "man 2
43 # kill": "If sig is 0, then no signal is sent, but existence
44 # and permission checks are still performed; this can be used
45 # to check for the existence of a process ID or process group
46 # ID that the caller is permitted to signal."
47 try:
48 os.kill(worker.pid, 0)
49 except OSError:
50 continue
51
52 # If it's running and working on the given job, return it.
53 if worker.state == WorkerStatus.BUSY and job_id == worker.get_current_job_id():
54 return worker
55 return None
56
57
58 def requeue_interrupted_jobs(queue_name=None):
59 # type: (str) -> None
60 """
61 Requeues jobs found in the given queue's started job registry.
62
63 Only restarts those that aren't already queued or being run.
64
65 When rq starts a job, it records it in the queue's started job
66 registry. If the server is rebooted before the job completes, the
67 job is not automatically restarted from the information in the
68 registry. For tasks like secure deletion of files, this means that
69 information thought to be deleted is still present in the case of
70 seizure or compromise. We have manage.py tasks to clean such files
71 up, but this utility attempts to reduce the need for manual
72 intervention by automatically resuming interrupted jobs.
73
74 This function is predicated on a risky assumption: that all jobs
75 are idempotent. At time of writing, we use rq for securely
76 deleting submission files and hashing submissions for the ETag
77 header. Both of these can be safely repeated. If we add rq tasks
78 that cannot, this function should be improved to omit those.
79 """
80 queue = create_queue(queue_name)
81 started_job_registry = StartedJobRegistry(queue=queue)
82
83 queued_job_ids = queue.get_job_ids()
84 logging.debug("queued jobs: {}".format(queued_job_ids))
85 started_job_ids = started_job_registry.get_job_ids()
86 logging.debug("started jobs: {}".format(started_job_ids))
87 job_ids = [j for j in started_job_ids if j not in queued_job_ids]
88 logging.debug("candidate job ids: {}".format(job_ids))
89
90 if not job_ids:
91 logging.info("No interrupted jobs found in started job registry.")
92
93 for job_id in job_ids:
94 logging.debug("Considering job %s", job_id)
95 try:
96 job = started_job_registry.job_class.fetch(job_id, started_job_registry.connection)
97 except NoSuchJobError as e:
98 logging.error(
99 "Could not find details for job %s: %s", job_id, e
100 )
101 continue
102
103 logging.debug(
104 "Job %s enqueued at %s, started at %s", job_id, job.enqueued_at, job.started_at
105 )
106
107 worker = worker_for_job(job_id)
108 if worker:
109 logging.info(
110 "Skipping job %s, which is already being run by worker %s", job_id, worker.key
111 )
112 continue
113
114 logging.info("Requeuing job %s", job)
115
116 try:
117 started_job_registry.remove(job)
118 except InvalidJobOperation as e:
119 logging.error("Could not remove job %s from started job registry: %s", job, e)
120 continue
121
122 try:
123 queue.enqueue_job(job)
124 logging.debug("Job now enqueued at %s, started at %s", job.enqueued_at, job.started_at)
125 except Exception as e:
126 logging.error("Could not requeue job %s: %s", job, e)
127 continue
128
[end of securedrop/worker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/securedrop/worker.py b/securedrop/worker.py
--- a/securedrop/worker.py
+++ b/securedrop/worker.py
@@ -88,7 +88,7 @@
logging.debug("candidate job ids: {}".format(job_ids))
if not job_ids:
- logging.info("No interrupted jobs found in started job registry.")
+ logging.debug("No interrupted jobs found in started job registry.")
for job_id in job_ids:
logging.debug("Considering job %s", job_id)
|
{"golden_diff": "diff --git a/securedrop/worker.py b/securedrop/worker.py\n--- a/securedrop/worker.py\n+++ b/securedrop/worker.py\n@@ -88,7 +88,7 @@\n logging.debug(\"candidate job ids: {}\".format(job_ids))\n \n if not job_ids:\n- logging.info(\"No interrupted jobs found in started job registry.\")\n+ logging.debug(\"No interrupted jobs found in started job registry.\")\n \n for job_id in job_ids:\n logging.debug(\"Considering job %s\", job_id)\n", "issue": "Please make the rqrequeue service quieter\n## Description\r\n\r\nThe rqrequeue service feels compelled to report that it has nothing to do, resulting in an endless stream of \"No interrupted jobs found in started job registry.\" messages. This is not helpful during normal operations, and annoying during development.\n", "before_files": [{"content": "import logging\nimport os\nfrom typing import Optional, List\n\nfrom redis import Redis\nfrom rq.queue import Queue\nfrom rq.worker import Worker, WorkerStatus\nfrom rq.exceptions import InvalidJobOperation, NoSuchJobError\nfrom rq.registry import StartedJobRegistry\n\nfrom sdconfig import config\n\n\ndef create_queue(name=None, timeout=3600):\n # type: (str, int) -> Queue\n \"\"\"\n Create an rq ``Queue`` named ``name`` with default timeout ``timeout``.\n\n If ``name`` is omitted, ``config.RQ_WORKER_NAME`` is used.\n \"\"\"\n if name is None:\n name = config.RQ_WORKER_NAME\n q = Queue(name=name, connection=Redis(), default_timeout=timeout)\n return q\n\n\ndef rq_workers(queue=None):\n # type: (Queue) -> List[Worker]\n \"\"\"\n Returns the list of current rq ``Worker``s.\n \"\"\"\n\n return Worker.all(connection=Redis(), queue=queue)\n\n\ndef worker_for_job(job_id):\n # type: (str) -> Optional[Worker]\n \"\"\"\n If the job is being run, return its ``Worker``.\n \"\"\"\n for worker in rq_workers():\n # If the worker process no longer exists, skip it. From \"man 2\n # kill\": \"If sig is 0, then no signal is sent, but existence\n # and permission checks are still performed; this can be used\n # to check for the existence of a process ID or process group\n # ID that the caller is permitted to signal.\"\n try:\n os.kill(worker.pid, 0)\n except OSError:\n continue\n\n # If it's running and working on the given job, return it.\n if worker.state == WorkerStatus.BUSY and job_id == worker.get_current_job_id():\n return worker\n return None\n\n\ndef requeue_interrupted_jobs(queue_name=None):\n # type: (str) -> None\n \"\"\"\n Requeues jobs found in the given queue's started job registry.\n\n Only restarts those that aren't already queued or being run.\n\n When rq starts a job, it records it in the queue's started job\n registry. If the server is rebooted before the job completes, the\n job is not automatically restarted from the information in the\n registry. For tasks like secure deletion of files, this means that\n information thought to be deleted is still present in the case of\n seizure or compromise. We have manage.py tasks to clean such files\n up, but this utility attempts to reduce the need for manual\n intervention by automatically resuming interrupted jobs.\n\n This function is predicated on a risky assumption: that all jobs\n are idempotent. At time of writing, we use rq for securely\n deleting submission files and hashing submissions for the ETag\n header. Both of these can be safely repeated. 
If we add rq tasks\n that cannot, this function should be improved to omit those.\n \"\"\"\n queue = create_queue(queue_name)\n started_job_registry = StartedJobRegistry(queue=queue)\n\n queued_job_ids = queue.get_job_ids()\n logging.debug(\"queued jobs: {}\".format(queued_job_ids))\n started_job_ids = started_job_registry.get_job_ids()\n logging.debug(\"started jobs: {}\".format(started_job_ids))\n job_ids = [j for j in started_job_ids if j not in queued_job_ids]\n logging.debug(\"candidate job ids: {}\".format(job_ids))\n\n if not job_ids:\n logging.info(\"No interrupted jobs found in started job registry.\")\n\n for job_id in job_ids:\n logging.debug(\"Considering job %s\", job_id)\n try:\n job = started_job_registry.job_class.fetch(job_id, started_job_registry.connection)\n except NoSuchJobError as e:\n logging.error(\n \"Could not find details for job %s: %s\", job_id, e\n )\n continue\n\n logging.debug(\n \"Job %s enqueued at %s, started at %s\", job_id, job.enqueued_at, job.started_at\n )\n\n worker = worker_for_job(job_id)\n if worker:\n logging.info(\n \"Skipping job %s, which is already being run by worker %s\", job_id, worker.key\n )\n continue\n\n logging.info(\"Requeuing job %s\", job)\n\n try:\n started_job_registry.remove(job)\n except InvalidJobOperation as e:\n logging.error(\"Could not remove job %s from started job registry: %s\", job, e)\n continue\n\n try:\n queue.enqueue_job(job)\n logging.debug(\"Job now enqueued at %s, started at %s\", job.enqueued_at, job.started_at)\n except Exception as e:\n logging.error(\"Could not requeue job %s: %s\", job, e)\n continue\n", "path": "securedrop/worker.py"}]}
| 1,921 | 117 |
gh_patches_debug_25831
|
rasdani/github-patches
|
git_diff
|
larq__larq-93
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs: Add links to source code
This is really handy if people want to understand what's going on behind the scenes or want to implement more advanced stuff.
</issue>
<code>
[start of generate_api_docs.py]
1 """https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py"""
2
3 import os
4 import sys
5 import yaml
6
7 from pydocmd.document import Index
8 from pydocmd.imp import dir_object
9 from pydocmd.loader import PythonLoader
10 from pydocmd.preprocessor import Preprocessor
11
12
13 with open("apidocs.yml", "r") as stream:
14 api_structure = yaml.safe_load(stream)
15
16 # Build the index and document structure first, we load the actual
17 # docstrings at a later point.
18 print("Building index...")
19 index = Index()
20
21
22 def add_sections(doc, object_names, depth=1):
23 if isinstance(object_names, list):
24 [add_sections(doc, x, depth) for x in object_names]
25 elif isinstance(object_names, dict):
26 for key, subsections in object_names.items():
27 add_sections(doc, key, depth)
28 add_sections(doc, subsections, depth + 1)
29 elif isinstance(object_names, str):
30 # Check how many levels of recursion we should be going.
31 expand_depth = len(object_names)
32 object_names = object_names.rstrip("+")
33 expand_depth -= len(object_names)
34
35 def create_sections(name, level):
36 if level > expand_depth:
37 return
38 index.new_section(doc, name, depth=depth + level, header_type="markdown")
39 for sub in dir_object(name, "line", False):
40 sub = name + "." + sub
41 create_sections(sub, level + 1)
42
43 create_sections(object_names, 0)
44 else:
45 raise RuntimeError(object_names)
46
47
48 # Make sure that we can find modules from the current working directory,
49 # and have them take precedence over installed modules.
50 sys.path.insert(0, ".")
51
52 for pages in api_structure:
53 for fname, object_names in pages.items():
54 doc = index.new_document(fname)
55 add_sections(doc, object_names)
56
57 loader = PythonLoader({})
58 preproc = Preprocessor({})
59
60 preproc.link_lookup = {}
61 for file, doc in index.documents.items():
62 for section in doc.sections:
63 preproc.link_lookup[section.identifier] = file
64 # Load the docstrings and fill the sections.
65 print("Started generating documentation...")
66 for doc in index.documents.values():
67 for section in filter(lambda s: s.identifier, doc.sections):
68 loader.load_section(section)
69 preproc.preprocess_section(section)
70
71 # Write out all the generated documents.
72 os.makedirs(os.path.join("docs", "api"), exist_ok=True)
73 for fname, doc in index.documents.items():
74 with open(os.path.join("docs", "api", fname), "w") as fp:
75 for section in doc.sections:
76 section.render(fp)
77
[end of generate_api_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/generate_api_docs.py b/generate_api_docs.py
--- a/generate_api_docs.py
+++ b/generate_api_docs.py
@@ -1,5 +1,6 @@
"""https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py"""
+import inspect
import os
import sys
import yaml
@@ -10,6 +11,23 @@
from pydocmd.preprocessor import Preprocessor
+def callable_to_source_link(obj, scope):
+ path = scope.__file__.lstrip(".")
+ source = inspect.getsourcelines(obj)
+ line = source[-1] + 1 if source[0][0].startswith("@") else source[-1]
+ link = f"https://github.com/plumerai/larq/blob/master{path}#L{line}"
+ return f'<a class="headerlink code-link" style="float:right;" href="{link}" title="Source Code"></a>'
+
+
+class PythonLoaderWithSource(PythonLoader):
+ def load_section(self, section):
+ super().load_section(section)
+ obj = section.loader_context["obj"]
+ if callable(obj):
+ scope = section.loader_context["scope"]
+ section.title += callable_to_source_link(obj, scope)
+
+
with open("apidocs.yml", "r") as stream:
api_structure = yaml.safe_load(stream)
@@ -54,7 +72,7 @@
doc = index.new_document(fname)
add_sections(doc, object_names)
-loader = PythonLoader({})
+loader = PythonLoaderWithSource({})
preproc = Preprocessor({})
preproc.link_lookup = {}
|
{"golden_diff": "diff --git a/generate_api_docs.py b/generate_api_docs.py\n--- a/generate_api_docs.py\n+++ b/generate_api_docs.py\n@@ -1,5 +1,6 @@\n \"\"\"https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py\"\"\"\n \n+import inspect\n import os\n import sys\n import yaml\n@@ -10,6 +11,23 @@\n from pydocmd.preprocessor import Preprocessor\n \n \n+def callable_to_source_link(obj, scope):\n+ path = scope.__file__.lstrip(\".\")\n+ source = inspect.getsourcelines(obj)\n+ line = source[-1] + 1 if source[0][0].startswith(\"@\") else source[-1]\n+ link = f\"https://github.com/plumerai/larq/blob/master{path}#L{line}\"\n+ return f'<a class=\"headerlink code-link\" style=\"float:right;\" href=\"{link}\" title=\"Source Code\"></a>'\n+\n+\n+class PythonLoaderWithSource(PythonLoader):\n+ def load_section(self, section):\n+ super().load_section(section)\n+ obj = section.loader_context[\"obj\"]\n+ if callable(obj):\n+ scope = section.loader_context[\"scope\"]\n+ section.title += callable_to_source_link(obj, scope)\n+\n+\n with open(\"apidocs.yml\", \"r\") as stream:\n api_structure = yaml.safe_load(stream)\n \n@@ -54,7 +72,7 @@\n doc = index.new_document(fname)\n add_sections(doc, object_names)\n \n-loader = PythonLoader({})\n+loader = PythonLoaderWithSource({})\n preproc = Preprocessor({})\n \n preproc.link_lookup = {}\n", "issue": "Docs: Add links to source code\nThis is really handy if people want to understand what's going on behind the scenes or want to implement more advanced stuff\n", "before_files": [{"content": "\"\"\"https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py\"\"\"\n\nimport os\nimport sys\nimport yaml\n\nfrom pydocmd.document import Index\nfrom pydocmd.imp import dir_object\nfrom pydocmd.loader import PythonLoader\nfrom pydocmd.preprocessor import Preprocessor\n\n\nwith open(\"apidocs.yml\", \"r\") as stream:\n api_structure = yaml.safe_load(stream)\n\n# Build the index and document structure first, we load the actual\n# docstrings at a later point.\nprint(\"Building index...\")\nindex = Index()\n\n\ndef add_sections(doc, object_names, depth=1):\n if isinstance(object_names, list):\n [add_sections(doc, x, depth) for x in object_names]\n elif isinstance(object_names, dict):\n for key, subsections in object_names.items():\n add_sections(doc, key, depth)\n add_sections(doc, subsections, depth + 1)\n elif isinstance(object_names, str):\n # Check how many levels of recursion we should be going.\n expand_depth = len(object_names)\n object_names = object_names.rstrip(\"+\")\n expand_depth -= len(object_names)\n\n def create_sections(name, level):\n if level > expand_depth:\n return\n index.new_section(doc, name, depth=depth + level, header_type=\"markdown\")\n for sub in dir_object(name, \"line\", False):\n sub = name + \".\" + sub\n create_sections(sub, level + 1)\n\n create_sections(object_names, 0)\n else:\n raise RuntimeError(object_names)\n\n\n# Make sure that we can find modules from the current working directory,\n# and have them take precedence over installed modules.\nsys.path.insert(0, \".\")\n\nfor pages in api_structure:\n for fname, object_names in pages.items():\n doc = index.new_document(fname)\n add_sections(doc, object_names)\n\nloader = PythonLoader({})\npreproc = Preprocessor({})\n\npreproc.link_lookup = {}\nfor file, doc in index.documents.items():\n for section in doc.sections:\n preproc.link_lookup[section.identifier] = file\n# Load the docstrings and fill the sections.\nprint(\"Started generating 
documentation...\")\nfor doc in index.documents.values():\n for section in filter(lambda s: s.identifier, doc.sections):\n loader.load_section(section)\n preproc.preprocess_section(section)\n\n# Write out all the generated documents.\nos.makedirs(os.path.join(\"docs\", \"api\"), exist_ok=True)\nfor fname, doc in index.documents.items():\n with open(os.path.join(\"docs\", \"api\", fname), \"w\") as fp:\n for section in doc.sections:\n section.render(fp)\n", "path": "generate_api_docs.py"}]}
| 1,281 | 367 |
gh_patches_debug_39768
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4204
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
checkov skips all K8S standard policies if one or more custom policy is specified in --checks
**Description**
When using checkov to verify a Kubernetes manifest (a single file with several objects: deployments, configmaps, etc.) against a list of checks (i.e. using the --check parameter), checkov verifies only the first check and appears to skip all other checks in the provided list.
**Examples**
The [manifests are available here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s-manifest-yaml)
The [parameters used are available in the log](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log-L33)
**Version (please complete the following information):**
- Checkov Version 2.2.232
**Additional context**
The [full log, LOG_LEVEL=DEBUG, is available here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log)
The custom policy YAML files are available [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_pvc_gov01-yaml) and [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_sts_gov01-yaml)
</issue>
<code>
[start of checkov/kubernetes/checks/resource/base_registry.py]
1 from __future__ import annotations
2
3 from typing import Any, TYPE_CHECKING
4
5 from checkov.common.checks.base_check_registry import BaseCheckRegistry
6
7 if TYPE_CHECKING:
8 from checkov.common.checks.base_check import BaseCheck
9 from checkov.common.typing import _SkippedCheck, _CheckResult
10 from checkov.runner_filter import RunnerFilter
11
12
13 class Registry(BaseCheckRegistry):
14 def __init__(self, report_type: str) -> None:
15 super().__init__(report_type)
16
17 def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, dict[str, Any]]: # type:ignore[override]
18 kind = entity.get("kind") or ""
19 conf = entity
20 return kind, conf
21
22 def scan(
23 self,
24 scanned_file: str,
25 entity: dict[str, Any],
26 skipped_checks: list[_SkippedCheck],
27 runner_filter: RunnerFilter,
28 report_type: str | None = None,
29 ) -> dict[BaseCheck, _CheckResult]:
30 (entity_type, entity_configuration) = self.extract_entity_details(entity)
31 results = {}
32 checks = self.get_checks(entity_type)
33 for check in checks:
34 skip_info: "_SkippedCheck" = {}
35 if skipped_checks:
36 if check.id in [x['id'] for x in skipped_checks]:
37 skip_info = [x for x in skipped_checks if x['id'] == check.id][0]
38
39 if self._should_run_scan(check, entity_configuration, runner_filter, self.report_type):
40 self.logger.debug("Running check: {} on file {}".format(check.name, scanned_file))
41
42 result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,
43 entity_name=entity_type, entity_type=entity_type, skip_info=skip_info)
44 results[check] = result
45 return results
46
47 @staticmethod
48 def _should_run_scan(
49 check: BaseCheck, entity_configuration: dict[str, Any], runner_filter: RunnerFilter, report_type: str
50 ) -> bool:
51 check_id_allowlist = runner_filter.checks
52 check_id_denylist = runner_filter.skip_checks
53 if check_id_allowlist or runner_filter.check_threshold:
54 # Allow list provides namespace-only allows, check-only allows, or both
55 # If namespaces not specified, all namespaces are scanned
56 # If checks not specified, all checks are scanned
57 run_check = False
58 allowed_namespaces = [string for string in check_id_allowlist if ("CKV_" not in string and "BC_" not in string)]
59 if not any(("CKV_" in check or "BC_" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:
60 if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
61 if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
62 run_check = True
63 elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
64 if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
65 run_check = True
66 else:
67 if "default" in allowed_namespaces:
68 run_check = True
69 else:
70 if runner_filter.should_run_check(check=check, report_type=report_type):
71 if allowed_namespaces:
72 # Check if namespace in allowed namespaces
73 if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
74 if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
75 run_check = True
76 elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
77 if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
78 run_check = True
79 else:
80 if "default" in allowed_namespaces:
81 run_check = True
82 else:
83 # No namespaces to filter
84 run_check = True
85 if run_check:
86 return True
87 elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:
88 namespace_skip = False
89 if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
90 if entity_configuration["metadata"]["namespace"] in check_id_denylist:
91 namespace_skip = True
92 elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
93 if entity_configuration["parent_metadata"]["namespace"] in check_id_denylist:
94 namespace_skip = True
95 else:
96 if "default" in check_id_denylist:
97 namespace_skip = True
98 if runner_filter.should_run_check(check=check, report_type=report_type) and not namespace_skip:
99 return True
100 else:
101 return True
102 return False
103
[end of checkov/kubernetes/checks/resource/base_registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/kubernetes/checks/resource/base_registry.py b/checkov/kubernetes/checks/resource/base_registry.py
--- a/checkov/kubernetes/checks/resource/base_registry.py
+++ b/checkov/kubernetes/checks/resource/base_registry.py
@@ -54,35 +54,27 @@
# Allow list provides namespace-only allows, check-only allows, or both
# If namespaces not specified, all namespaces are scanned
# If checks not specified, all checks are scanned
- run_check = False
- allowed_namespaces = [string for string in check_id_allowlist if ("CKV_" not in string and "BC_" not in string)]
- if not any(("CKV_" in check or "BC_" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:
+
+ if any("_" in check_id for check_id in check_id_allowlist) or runner_filter.check_threshold:
+ # a Kubernetes namespace can't have an '_' in its name,
+ # therefore we assume it is a built-in or custom check
+ if not runner_filter.should_run_check(check=check, report_type=report_type):
+ return False
+
+ allowed_namespaces = [check_id for check_id in check_id_allowlist if "_" not in check_id]
+ if allowed_namespaces:
+ # Check if namespace in allowed namespaces
if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
- run_check = True
+ return True
elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
- run_check = True
+ return True
else:
if "default" in allowed_namespaces:
- run_check = True
+ return True
else:
- if runner_filter.should_run_check(check=check, report_type=report_type):
- if allowed_namespaces:
- # Check if namespace in allowed namespaces
- if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
- if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
- run_check = True
- elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
- if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
- run_check = True
- else:
- if "default" in allowed_namespaces:
- run_check = True
- else:
- # No namespaces to filter
- run_check = True
- if run_check:
+ # No namespaces to filter
return True
elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:
namespace_skip = False
|
{"golden_diff": "diff --git a/checkov/kubernetes/checks/resource/base_registry.py b/checkov/kubernetes/checks/resource/base_registry.py\n--- a/checkov/kubernetes/checks/resource/base_registry.py\n+++ b/checkov/kubernetes/checks/resource/base_registry.py\n@@ -54,35 +54,27 @@\n # Allow list provides namespace-only allows, check-only allows, or both\n # If namespaces not specified, all namespaces are scanned\n # If checks not specified, all checks are scanned\n- run_check = False\n- allowed_namespaces = [string for string in check_id_allowlist if (\"CKV_\" not in string and \"BC_\" not in string)]\n- if not any((\"CKV_\" in check or \"BC_\" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:\n+\n+ if any(\"_\" in check_id for check_id in check_id_allowlist) or runner_filter.check_threshold:\n+ # a Kubernetes namespace can't have an '_' in its name,\n+ # therefore we assume it is a built-in or custom check\n+ if not runner_filter.should_run_check(check=check, report_type=report_type):\n+ return False\n+\n+ allowed_namespaces = [check_id for check_id in check_id_allowlist if \"_\" not in check_id]\n+ if allowed_namespaces:\n+ # Check if namespace in allowed namespaces\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n+ return True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n+ return True\n else:\n if \"default\" in allowed_namespaces:\n- run_check = True\n+ return True\n else:\n- if runner_filter.should_run_check(check=check, report_type=report_type):\n- if allowed_namespaces:\n- # Check if namespace in allowed namespaces\n- if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n- if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n- elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n- if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n- else:\n- if \"default\" in allowed_namespaces:\n- run_check = True\n- else:\n- # No namespaces to filter\n- run_check = True\n- if run_check:\n+ # No namespaces to filter\n return True\n elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:\n namespace_skip = False\n", "issue": "checkov skips all K8S standard policies if one or more custom policy is specified in --checks\n**Description**\r\nUsing checkov to verify a kubernetes manifests (a single file with several objects: deployments, configmaps, etc) against a list of checks (so using the --check parameter), checkov verifies only the first check, and appears to skip all others checks in the provided list.\r\n\r\n**Examples**\r\nThe [manifests are available here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s-manifest-yaml)\r\nThe [parameters available in the log](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log-L33)\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.2.232\r\n\r\n**Additional context**\r\nThe [full log, LOG_DEVEL=DEBUG, is available 
here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log)\r\nThe custom policies yaml files are available [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_pvc_gov01-yaml) and [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_sts_gov01-yaml)\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, TYPE_CHECKING\n\nfrom checkov.common.checks.base_check_registry import BaseCheckRegistry\n\nif TYPE_CHECKING:\n from checkov.common.checks.base_check import BaseCheck\n from checkov.common.typing import _SkippedCheck, _CheckResult\n from checkov.runner_filter import RunnerFilter\n\n\nclass Registry(BaseCheckRegistry):\n def __init__(self, report_type: str) -> None:\n super().__init__(report_type)\n\n def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, dict[str, Any]]: # type:ignore[override]\n kind = entity.get(\"kind\") or \"\"\n conf = entity\n return kind, conf\n\n def scan(\n self,\n scanned_file: str,\n entity: dict[str, Any],\n skipped_checks: list[_SkippedCheck],\n runner_filter: RunnerFilter,\n report_type: str | None = None,\n ) -> dict[BaseCheck, _CheckResult]:\n (entity_type, entity_configuration) = self.extract_entity_details(entity)\n results = {}\n checks = self.get_checks(entity_type)\n for check in checks:\n skip_info: \"_SkippedCheck\" = {}\n if skipped_checks:\n if check.id in [x['id'] for x in skipped_checks]:\n skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n\n if self._should_run_scan(check, entity_configuration, runner_filter, self.report_type):\n self.logger.debug(\"Running check: {} on file {}\".format(check.name, scanned_file))\n\n result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,\n entity_name=entity_type, entity_type=entity_type, skip_info=skip_info)\n results[check] = result\n return results\n\n @staticmethod\n def _should_run_scan(\n check: BaseCheck, entity_configuration: dict[str, Any], runner_filter: RunnerFilter, report_type: str\n ) -> bool:\n check_id_allowlist = runner_filter.checks\n check_id_denylist = runner_filter.skip_checks\n if check_id_allowlist or runner_filter.check_threshold:\n # Allow list provides namespace-only allows, check-only allows, or both\n # If namespaces not specified, all namespaces are scanned\n # If checks not specified, all checks are scanned\n run_check = False\n allowed_namespaces = [string for string in check_id_allowlist if (\"CKV_\" not in string and \"BC_\" not in string)]\n if not any((\"CKV_\" in check or \"BC_\" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n else:\n if \"default\" in allowed_namespaces:\n run_check = True\n else:\n if runner_filter.should_run_check(check=check, report_type=report_type):\n if allowed_namespaces:\n # Check if namespace in allowed namespaces\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n elif 
\"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n else:\n if \"default\" in allowed_namespaces:\n run_check = True\n else:\n # No namespaces to filter\n run_check = True\n if run_check:\n return True\n elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:\n namespace_skip = False\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in check_id_denylist:\n namespace_skip = True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in check_id_denylist:\n namespace_skip = True\n else:\n if \"default\" in check_id_denylist:\n namespace_skip = True\n if runner_filter.should_run_check(check=check, report_type=report_type) and not namespace_skip:\n return True\n else:\n return True\n return False\n", "path": "checkov/kubernetes/checks/resource/base_registry.py"}]}
| 2,147 | 619 |
gh_patches_debug_7481
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-3268
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'SystemSettings' object has no attribute 'enable_password_policy' during `bench restore`
Hello,
I ran `bench update`, then tried to restore a backup, and this error started popping up.
It seems it might have come in from 7ccbbce5720bf16d5d3cc94c627e22ef0541e53b
</issue>
<code>
[start of frappe/utils/scheduler.py]
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3 """
4 Events:
5 always
6 daily
7 monthly
8 weekly
9 """
10
11 from __future__ import unicode_literals, print_function
12
13 import frappe
14 import json
15 import schedule
16 import time
17 import MySQLdb
18 import frappe.utils
19 from frappe.utils import get_sites
20 from datetime import datetime
21 from background_jobs import enqueue, get_jobs, queue_timeout
22 from frappe.limits import has_expired
23 from frappe.utils.data import get_datetime, now_datetime
24 from frappe.core.doctype.user.user import STANDARD_USERS
25 from frappe.installer import update_site_config
26
27 DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
28
29 def start_scheduler():
30 '''Run enqueue_events_for_all_sites every 2 minutes (default).
31 Specify scheduler_interval in seconds in common_site_config.json'''
32
33 interval = frappe.get_conf().scheduler_interval or 240
34 schedule.every(interval).seconds.do(enqueue_events_for_all_sites)
35
36 while True:
37 schedule.run_pending()
38 time.sleep(1)
39
40 def enqueue_events_for_all_sites():
41 '''Loop through sites and enqueue events that are not already queued'''
42 with frappe.init_site():
43 jobs_per_site = get_jobs()
44 sites = get_sites()
45
46 for site in sites:
47 try:
48 enqueue_events_for_site(site=site, queued_jobs=jobs_per_site[site])
49 except:
50 # it should try to enqueue other sites
51 print(frappe.get_traceback())
52
53 def enqueue_events_for_site(site, queued_jobs):
54 try:
55 frappe.init(site=site)
56 if frappe.local.conf.maintenance_mode:
57 return
58
59 if frappe.local.conf.pause_scheduler:
60 return
61
62 frappe.connect()
63 if is_scheduler_disabled():
64 return
65
66 enqueue_events(site=site, queued_jobs=queued_jobs)
67
68 frappe.logger(__name__).debug('Queued events for site {0}'.format(site))
69
70 except:
71 frappe.logger(__name__).error('Exception in Enqueue Events for Site {0}'.format(site) +
72 '\n' + frappe.get_traceback())
73 raise
74
75 finally:
76 frappe.destroy()
77
78 def enqueue_events(site, queued_jobs):
79 nowtime = frappe.utils.now_datetime()
80 last = frappe.db.get_value('System Settings', 'System Settings', 'scheduler_last_event')
81
82 # set scheduler last event
83 frappe.db.set_value('System Settings', 'System Settings',
84 'scheduler_last_event', nowtime.strftime(DATETIME_FORMAT),
85 update_modified=False)
86 frappe.db.commit()
87
88 out = []
89 if last:
90 last = datetime.strptime(last, DATETIME_FORMAT)
91 out = enqueue_applicable_events(site, nowtime, last, queued_jobs)
92
93 return '\n'.join(out)
94
95 def enqueue_applicable_events(site, nowtime, last, queued_jobs=()):
96 nowtime_str = nowtime.strftime(DATETIME_FORMAT)
97 out = []
98
99 enabled_events = get_enabled_scheduler_events()
100
101 def trigger_if_enabled(site, event):
102 if event in enabled_events:
103 trigger(site, event, queued_jobs)
104 _log(event)
105
106 def _log(event):
107 out.append("{time} - {event} - queued".format(time=nowtime_str, event=event))
108
109 if nowtime.day != last.day:
110 # if first task of the day execute daily tasks
111 trigger_if_enabled(site, "daily")
112 trigger_if_enabled(site, "daily_long")
113
114 if nowtime.month != last.month:
115 trigger_if_enabled(site, "monthly")
116 trigger_if_enabled(site, "monthly_long")
117
118 if nowtime.weekday()==0:
119 trigger_if_enabled(site, "weekly")
120 trigger_if_enabled(site, "weekly_long")
121
122 if "all" not in enabled_events:
123 trigger(site, "all", queued_jobs)
124
125 if "hourly" not in enabled_events:
126 trigger(site, "hourly", queued_jobs)
127
128 if nowtime.hour != last.hour:
129 trigger_if_enabled(site, "hourly")
130 trigger_if_enabled(site, "hourly_long")
131
132 if "all" not in enabled_events:
133 trigger(site, "all", queued_jobs)
134
135 trigger_if_enabled(site, "all")
136
137 return out
138
139 def trigger(site, event, queued_jobs=(), now=False):
140 """trigger method in hooks.scheduler_events"""
141 queue = 'long' if event.endswith('_long') else 'short'
142 timeout = queue_timeout[queue]
143 if not queued_jobs and not now:
144 queued_jobs = get_jobs(site=site, queue=queue)
145
146 if frappe.flags.in_test:
147 frappe.flags.ran_schedulers.append(event)
148
149 events = get_scheduler_events(event)
150 if not events:
151 return
152
153 for handler in events:
154 if not now:
155 if handler not in queued_jobs:
156 enqueue(handler, queue, timeout, event)
157 else:
158 scheduler_task(site=site, event=event, handler=handler, now=True)
159
160 def get_scheduler_events(event):
161 '''Get scheduler events from hooks and integrations'''
162 scheduler_events = frappe.cache().get_value('scheduler_events')
163 if not scheduler_events:
164 scheduler_events = frappe.get_hooks("scheduler_events")
165 frappe.cache().set_value('scheduler_events', scheduler_events)
166
167 return scheduler_events.get(event) or []
168
169 def log(method, message=None):
170 """log error in patch_log"""
171 message = frappe.utils.cstr(message) + "\n" if message else ""
172 message += frappe.get_traceback()
173
174 if not (frappe.db and frappe.db._conn):
175 frappe.connect()
176
177 frappe.db.rollback()
178 frappe.db.begin()
179
180 d = frappe.new_doc("Error Log")
181 d.method = method
182 d.error = message
183 d.insert(ignore_permissions=True)
184
185 frappe.db.commit()
186
187 return message
188
189 def get_enabled_scheduler_events():
190 if 'enabled_events' in frappe.flags:
191 return frappe.flags.enabled_events
192
193 enabled_events = frappe.db.get_global("enabled_scheduler_events")
194 if enabled_events:
195 if isinstance(enabled_events, basestring):
196 enabled_events = json.loads(enabled_events)
197
198 return enabled_events
199
200 return ["all", "hourly", "hourly_long", "daily", "daily_long",
201 "weekly", "weekly_long", "monthly", "monthly_long"]
202
203 def is_scheduler_disabled():
204 if frappe.conf.disable_scheduler:
205 return True
206
207 return not frappe.utils.cint(frappe.db.get_single_value("System Settings", "enable_scheduler"))
208
209 def toggle_scheduler(enable):
210 ss = frappe.get_doc("System Settings")
211 ss.enable_scheduler = 1 if enable else 0
212 ss.flags.ignore_mandatory = True
213 ss.flags.ignore_permissions = True
214 ss.save()
215
216 def enable_scheduler():
217 toggle_scheduler(True)
218
219 def disable_scheduler():
220 toggle_scheduler(False)
221
222 def get_errors(from_date, to_date, limit):
223 errors = frappe.db.sql("""select modified, method, error from `tabError Log`
224 where date(modified) between %s and %s
225 and error not like '%%[Errno 110] Connection timed out%%'
226 order by modified limit %s""", (from_date, to_date, limit), as_dict=True)
227 return ["""<p>Time: {modified}</p><pre><code>Method: {method}\n{error}</code></pre>""".format(**e)
228 for e in errors]
229
230 def get_error_report(from_date=None, to_date=None, limit=10):
231 from frappe.utils import get_url, now_datetime, add_days
232
233 if not from_date:
234 from_date = add_days(now_datetime().date(), -1)
235 if not to_date:
236 to_date = add_days(now_datetime().date(), -1)
237
238 errors = get_errors(from_date, to_date, limit)
239
240 if errors:
241 return 1, """<h4>Error Logs (max {limit}):</h4>
242 <p>URL: <a href="{url}" target="_blank">{url}</a></p><hr>{errors}""".format(
243 limit=limit, url=get_url(), errors="<hr>".join(errors))
244 else:
245 return 0, "<p>No error logs</p>"
246
247 def scheduler_task(site, event, handler, now=False):
248 '''This is a wrapper function that runs a hooks.scheduler_events method'''
249 frappe.logger(__name__).info('running {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))
250 try:
251 if not now:
252 frappe.connect(site=site)
253
254 frappe.flags.in_scheduler = True
255 frappe.get_attr(handler)()
256
257 except Exception:
258 frappe.db.rollback()
259 traceback = log(handler, "Method: {event}, Handler: {handler}".format(event=event, handler=handler))
260 frappe.logger(__name__).error(traceback)
261 raise
262
263 else:
264 frappe.db.commit()
265
266 frappe.logger(__name__).info('ran {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))
267
268
269 def reset_enabled_scheduler_events(login_manager):
270 if login_manager.info.user_type == "System User":
271 try:
272 frappe.db.set_global('enabled_scheduler_events', None)
273 except MySQLdb.OperationalError as e:
274 if e.args[0]==1205:
275 frappe.log_error(frappe.get_traceback(), "Error in reset_enabled_scheduler_events")
276 else:
277 raise
278 else:
279 is_dormant = frappe.conf.get('dormant')
280 if is_dormant:
281 update_site_config('dormant', 'None')
282
283 def disable_scheduler_on_expiry():
284 if has_expired():
285 disable_scheduler()
286
287 def restrict_scheduler_events_if_dormant():
288 if is_dormant():
289 restrict_scheduler_events()
290 update_site_config('dormant', True)
291
292 def restrict_scheduler_events(*args, **kwargs):
293 val = json.dumps(["hourly", "hourly_long", "daily", "daily_long", "weekly", "weekly_long", "monthly", "monthly_long"])
294 frappe.db.set_global('enabled_scheduler_events', val)
295
296 def is_dormant(since = 345600):
297 last_active = get_datetime(get_last_active())
298 # Get now without tz info
299 now = now_datetime().replace(tzinfo=None)
300 time_since_last_active = now - last_active
301 if time_since_last_active.total_seconds() > since: # 4 days
302 return True
303 return False
304
305 def get_last_active():
306 return frappe.db.sql("""select max(ifnull(last_active, "2000-01-01 00:00:00")) from `tabUser`
307 where user_type = 'System User' and name not in ({standard_users})"""\
308 .format(standard_users=", ".join(["%s"]*len(STANDARD_USERS))),
309 STANDARD_USERS)[0][0]
310
[end of frappe/utils/scheduler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py
--- a/frappe/utils/scheduler.py
+++ b/frappe/utils/scheduler.py
@@ -207,11 +207,7 @@
return not frappe.utils.cint(frappe.db.get_single_value("System Settings", "enable_scheduler"))
def toggle_scheduler(enable):
- ss = frappe.get_doc("System Settings")
- ss.enable_scheduler = 1 if enable else 0
- ss.flags.ignore_mandatory = True
- ss.flags.ignore_permissions = True
- ss.save()
+ frappe.db.set_value("System Settings", None, "enable_scheduler", 1 if enable else 0)
def enable_scheduler():
toggle_scheduler(True)
|
{"golden_diff": "diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py\n--- a/frappe/utils/scheduler.py\n+++ b/frappe/utils/scheduler.py\n@@ -207,11 +207,7 @@\n \treturn not frappe.utils.cint(frappe.db.get_single_value(\"System Settings\", \"enable_scheduler\"))\n \n def toggle_scheduler(enable):\n-\tss = frappe.get_doc(\"System Settings\")\n-\tss.enable_scheduler = 1 if enable else 0\n-\tss.flags.ignore_mandatory = True\n-\tss.flags.ignore_permissions = True\n-\tss.save()\n+\tfrappe.db.set_value(\"System Settings\", None, \"enable_scheduler\", 1 if enable else 0)\n \n def enable_scheduler():\n \ttoggle_scheduler(True)\n", "issue": "AttributeError: 'SystemSettings' object has no attribute 'enable_password_policy' during `bench restore`\nHello,\r\nI ran `bench update` then tried to restore a backup and this error starts popping up.\r\n\r\nIt seems it might have come in from 7ccbbce5720bf16d5d3cc94c627e22ef0541e53b\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\nfrom __future__ import unicode_literals, print_function\n\nimport frappe\nimport json\nimport schedule\nimport time\nimport MySQLdb\nimport frappe.utils\nfrom frappe.utils import get_sites\nfrom datetime import datetime\nfrom background_jobs import enqueue, get_jobs, queue_timeout\nfrom frappe.limits import has_expired\nfrom frappe.utils.data import get_datetime, now_datetime\nfrom frappe.core.doctype.user.user import STANDARD_USERS\nfrom frappe.installer import update_site_config\n\nDATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'\n\ndef start_scheduler():\n\t'''Run enqueue_events_for_all_sites every 2 minutes (default).\n\tSpecify scheduler_interval in seconds in common_site_config.json'''\n\n\tinterval = frappe.get_conf().scheduler_interval or 240\n\tschedule.every(interval).seconds.do(enqueue_events_for_all_sites)\n\n\twhile True:\n\t\tschedule.run_pending()\n\t\ttime.sleep(1)\n\ndef enqueue_events_for_all_sites():\n\t'''Loop through sites and enqueue events that are not already queued'''\n\twith frappe.init_site():\n\t\tjobs_per_site = get_jobs()\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site, queued_jobs=jobs_per_site[site])\n\t\texcept:\n\t\t\t# it should try to enqueue other sites\n\t\t\tprint(frappe.get_traceback())\n\ndef enqueue_events_for_site(site, queued_jobs):\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tif frappe.local.conf.maintenance_mode:\n\t\t\treturn\n\n\t\tif frappe.local.conf.pause_scheduler:\n\t\t\treturn\n\n\t\tfrappe.connect()\n\t\tif is_scheduler_disabled():\n\t\t\treturn\n\n\t\tenqueue_events(site=site, queued_jobs=queued_jobs)\n\n\t\tfrappe.logger(__name__).debug('Queued events for site {0}'.format(site))\n\n\texcept:\n\t\tfrappe.logger(__name__).error('Exception in Enqueue Events for Site {0}'.format(site) +\n\t\t\t'\\n' + frappe.get_traceback())\n\t\traise\n\n\tfinally:\n\t\tfrappe.destroy()\n\ndef enqueue_events(site, queued_jobs):\n\tnowtime = frappe.utils.now_datetime()\n\tlast = frappe.db.get_value('System Settings', 'System Settings', 'scheduler_last_event')\n\n\t# set scheduler last event\n\tfrappe.db.set_value('System Settings', 'System Settings',\n\t\t'scheduler_last_event', nowtime.strftime(DATETIME_FORMAT),\n\t\tupdate_modified=False)\n\tfrappe.db.commit()\n\n\tout = []\n\tif last:\n\t\tlast = datetime.strptime(last, DATETIME_FORMAT)\n\t\tout = enqueue_applicable_events(site, nowtime, last, 
queued_jobs)\n\n\treturn '\\n'.join(out)\n\ndef enqueue_applicable_events(site, nowtime, last, queued_jobs=()):\n\tnowtime_str = nowtime.strftime(DATETIME_FORMAT)\n\tout = []\n\n\tenabled_events = get_enabled_scheduler_events()\n\n\tdef trigger_if_enabled(site, event):\n\t\tif event in enabled_events:\n\t\t\ttrigger(site, event, queued_jobs)\n\t\t\t_log(event)\n\n\tdef _log(event):\n\t\tout.append(\"{time} - {event} - queued\".format(time=nowtime_str, event=event))\n\n\tif nowtime.day != last.day:\n\t\t# if first task of the day execute daily tasks\n\t\ttrigger_if_enabled(site, \"daily\")\n\t\ttrigger_if_enabled(site, \"daily_long\")\n\n\t\tif nowtime.month != last.month:\n\t\t\ttrigger_if_enabled(site, \"monthly\")\n\t\t\ttrigger_if_enabled(site, \"monthly_long\")\n\n\t\tif nowtime.weekday()==0:\n\t\t\ttrigger_if_enabled(site, \"weekly\")\n\t\t\ttrigger_if_enabled(site, \"weekly_long\")\n\n\t\tif \"all\" not in enabled_events:\n\t\t\ttrigger(site, \"all\", queued_jobs)\n\n\t\tif \"hourly\" not in enabled_events:\n\t\t\ttrigger(site, \"hourly\", queued_jobs)\n\n\tif nowtime.hour != last.hour:\n\t\ttrigger_if_enabled(site, \"hourly\")\n\t\ttrigger_if_enabled(site, \"hourly_long\")\n\n\t\tif \"all\" not in enabled_events:\n\t\t\ttrigger(site, \"all\", queued_jobs)\n\n\ttrigger_if_enabled(site, \"all\")\n\n\treturn out\n\ndef trigger(site, event, queued_jobs=(), now=False):\n\t\"\"\"trigger method in hooks.scheduler_events\"\"\"\n\tqueue = 'long' if event.endswith('_long') else 'short'\n\ttimeout = queue_timeout[queue]\n\tif not queued_jobs and not now:\n\t\tqueued_jobs = get_jobs(site=site, queue=queue)\n\n\tif frappe.flags.in_test:\n\t\tfrappe.flags.ran_schedulers.append(event)\n\n\tevents = get_scheduler_events(event)\n\tif not events:\n\t\treturn\n\n\tfor handler in events:\n\t\tif not now:\n\t\t\tif handler not in queued_jobs:\n\t\t\t\tenqueue(handler, queue, timeout, event)\n\t\telse:\n\t\t\tscheduler_task(site=site, event=event, handler=handler, now=True)\n\ndef get_scheduler_events(event):\n\t'''Get scheduler events from hooks and integrations'''\n\tscheduler_events = frappe.cache().get_value('scheduler_events')\n\tif not scheduler_events:\n\t\tscheduler_events = frappe.get_hooks(\"scheduler_events\")\n\t\tfrappe.cache().set_value('scheduler_events', scheduler_events)\n\n\treturn scheduler_events.get(event) or []\n\ndef log(method, message=None):\n\t\"\"\"log error in patch_log\"\"\"\n\tmessage = frappe.utils.cstr(message) + \"\\n\" if message else \"\"\n\tmessage += frappe.get_traceback()\n\n\tif not (frappe.db and frappe.db._conn):\n\t\tfrappe.connect()\n\n\tfrappe.db.rollback()\n\tfrappe.db.begin()\n\n\td = frappe.new_doc(\"Error Log\")\n\td.method = method\n\td.error = message\n\td.insert(ignore_permissions=True)\n\n\tfrappe.db.commit()\n\n\treturn message\n\ndef get_enabled_scheduler_events():\n\tif 'enabled_events' in frappe.flags:\n\t\treturn frappe.flags.enabled_events\n\n\tenabled_events = frappe.db.get_global(\"enabled_scheduler_events\")\n\tif enabled_events:\n\t\tif isinstance(enabled_events, basestring):\n\t\t\tenabled_events = json.loads(enabled_events)\n\n\t\treturn enabled_events\n\n\treturn [\"all\", \"hourly\", \"hourly_long\", \"daily\", \"daily_long\",\n\t\t\"weekly\", \"weekly_long\", \"monthly\", \"monthly_long\"]\n\ndef is_scheduler_disabled():\n\tif frappe.conf.disable_scheduler:\n\t\treturn True\n\n\treturn not frappe.utils.cint(frappe.db.get_single_value(\"System Settings\", \"enable_scheduler\"))\n\ndef toggle_scheduler(enable):\n\tss = 
frappe.get_doc(\"System Settings\")\n\tss.enable_scheduler = 1 if enable else 0\n\tss.flags.ignore_mandatory = True\n\tss.flags.ignore_permissions = True\n\tss.save()\n\ndef enable_scheduler():\n\ttoggle_scheduler(True)\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\ndef get_errors(from_date, to_date, limit):\n\terrors = frappe.db.sql(\"\"\"select modified, method, error from `tabError Log`\n\t\twhere date(modified) between %s and %s\n\t\tand error not like '%%[Errno 110] Connection timed out%%'\n\t\torder by modified limit %s\"\"\", (from_date, to_date, limit), as_dict=True)\n\treturn [\"\"\"<p>Time: {modified}</p><pre><code>Method: {method}\\n{error}</code></pre>\"\"\".format(**e)\n\t\tfor e in errors]\n\ndef get_error_report(from_date=None, to_date=None, limit=10):\n\tfrom frappe.utils import get_url, now_datetime, add_days\n\n\tif not from_date:\n\t\tfrom_date = add_days(now_datetime().date(), -1)\n\tif not to_date:\n\t\tto_date = add_days(now_datetime().date(), -1)\n\n\terrors = get_errors(from_date, to_date, limit)\n\n\tif errors:\n\t\treturn 1, \"\"\"<h4>Error Logs (max {limit}):</h4>\n\t\t\t<p>URL: <a href=\"{url}\" target=\"_blank\">{url}</a></p><hr>{errors}\"\"\".format(\n\t\t\tlimit=limit, url=get_url(), errors=\"<hr>\".join(errors))\n\telse:\n\t\treturn 0, \"<p>No error logs</p>\"\n\ndef scheduler_task(site, event, handler, now=False):\n\t'''This is a wrapper function that runs a hooks.scheduler_events method'''\n\tfrappe.logger(__name__).info('running {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))\n\ttry:\n\t\tif not now:\n\t\t\tfrappe.connect(site=site)\n\n\t\tfrappe.flags.in_scheduler = True\n\t\tfrappe.get_attr(handler)()\n\n\texcept Exception:\n\t\tfrappe.db.rollback()\n\t\ttraceback = log(handler, \"Method: {event}, Handler: {handler}\".format(event=event, handler=handler))\n\t\tfrappe.logger(__name__).error(traceback)\n\t\traise\n\n\telse:\n\t\tfrappe.db.commit()\n\n\tfrappe.logger(__name__).info('ran {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))\n\n\ndef reset_enabled_scheduler_events(login_manager):\n\tif login_manager.info.user_type == \"System User\":\n\t\ttry:\n\t\t\tfrappe.db.set_global('enabled_scheduler_events', None)\n\t\texcept MySQLdb.OperationalError as e:\n\t\t\tif e.args[0]==1205:\n\t\t\t\tfrappe.log_error(frappe.get_traceback(), \"Error in reset_enabled_scheduler_events\")\n\t\t\telse:\n\t\t\t\traise\n\t\telse:\n\t\t\tis_dormant = frappe.conf.get('dormant')\n\t\t\tif is_dormant:\n\t\t\t\tupdate_site_config('dormant', 'None')\n\ndef disable_scheduler_on_expiry():\n\tif has_expired():\n\t\tdisable_scheduler()\n\ndef restrict_scheduler_events_if_dormant():\n\tif is_dormant():\n\t\trestrict_scheduler_events()\n\t\tupdate_site_config('dormant', True)\n\ndef restrict_scheduler_events(*args, **kwargs):\n\tval = json.dumps([\"hourly\", \"hourly_long\", \"daily\", \"daily_long\", \"weekly\", \"weekly_long\", \"monthly\", \"monthly_long\"])\n\tfrappe.db.set_global('enabled_scheduler_events', val)\n\ndef is_dormant(since = 345600):\n\tlast_active = get_datetime(get_last_active())\n\t# Get now without tz info\n\tnow = now_datetime().replace(tzinfo=None)\n\ttime_since_last_active = now - last_active\n\tif time_since_last_active.total_seconds() > since: # 4 days\n\t\treturn True\n\treturn False\n\ndef get_last_active():\n\treturn frappe.db.sql(\"\"\"select max(ifnull(last_active, \"2000-01-01 00:00:00\")) from `tabUser`\n\t\twhere user_type = 'System User' and name not in 
({standard_users})\"\"\"\\\n\t\t.format(standard_users=\", \".join([\"%s\"]*len(STANDARD_USERS))),\n\t\tSTANDARD_USERS)[0][0]\n", "path": "frappe/utils/scheduler.py"}]}
| 3,918 | 161 |
gh_patches_debug_17733
|
rasdani/github-patches
|
git_diff
|
holoviz__panel-5243
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unify JupyterLite Install Instructions
### Description
Instructions on installing Panel in a (Pyodide-based) JupyterLite environment are presently both outdated and broken.
There are two different outdated scripts (for Panel versions <1.0.0) at:
- [ ] [Setting up JupyterLite](https://panel.holoviz.org/how_to/wasm/jupyterlite.html#optimized-wheels-optional)
- [ ] [Installing Panel in the browser](https://panel.holoviz.org/how_to/wasm/standalone.html#pyodide)
If I try to install those into a JupyterLite Pyodide environment, I get:
```
await micropip.install("https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl", keep_going=True)
```
```
ValueError: Can't fetch wheel from 'https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl'.
One common reason for this is when the server blocks Cross-Origin Resource Sharing (CORS).
Check if the server is sending the correct 'Access-Control-Allow-Origin' header.
```
On the other hand, if I try to install the Bokeh and Panel `py3-none-any` wheels directly from PyPI, I get an error related to Python packages that have not yet been compiled for WASM:
```
micropip.install("https://files.pythonhosted.org/packages/56/98/da78cec88a7c47b761c9b3a18677b5508ef17417184396b3d1361fc811f1/bokeh-3.2.0-py3-none-any.whl", keep_going=True)
```
```
File /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)
578 if transaction.failed:
579 failed_requirements = ", ".join([f"'{req}'" for req in transaction.failed])
--> 580 raise ValueError(
581 f"Can't find a pure Python 3 wheel for: {failed_requirements}\n"
582 f"See: {FAQ_URLS['cant_find_wheel']}\n"
583 )
585 wheel_promises = []
586 # Install built-in packages
ValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'
See: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel
```
```
micropip.install("https://files.pythonhosted.org/packages/90/a3/cc9cfdf1b18e5456a0ebd9370baa0a5d58501b4904fa3b3d1ecccbdbd1a2/panel-1.1.1-py2.py3-none-any.whl", keep_going=True)
```
```
File /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)
578 if transaction.failed:
579 failed_requirements = ", ".join([f"'{req}'" for req in transaction.failed])
--> 580 raise ValueError(
581 f"Can't find a pure Python 3 wheel for: {failed_requirements}\n"
582 f"See: {FAQ_URLS['cant_find_wheel']}\n"
583 )
585 wheel_promises = []
586 # Install built-in packages
ValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'
See: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel
```
#### Describe the solution you'd like
A working, unified script pointing to the latest versions of Panel and Bokeh.
#### Describe alternatives you've considered
N/A
#### Additional context
Try the installation for yourself at https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html
</issue>
<code>
[start of doc/conf.py]
1 import json
2 import os
3 import pathlib
4
5 import param
6
7 param.parameterized.docstring_signature = False
8 param.parameterized.docstring_describe_params = False
9
10 from nbsite.shared_conf import *
11
12 project = 'Panel'
13 authors = 'Panel contributors'
14 copyright_years['start_year'] = '2019'
15 copyright = copyright_fmt.format(**copyright_years)
16 description = 'High-level dashboarding for python visualization libraries'
17
18 import panel
19
20 from panel.io.convert import BOKEH_VERSION, MINIMUM_VERSIONS, PY_VERSION
21 from panel.io.resources import CDN_DIST
22
23 PANEL_ROOT = pathlib.Path(panel.__file__).parent
24
25 version = release = base_version(panel.__version__)
26 js_version = json.loads((PANEL_ROOT / 'package.json').read_text())['version']
27
28 is_dev = any(ext in version for ext in ('a', 'b', 'rc'))
29
30 # For the interactivity warning box created by nbsite to point to the right
31 # git tag instead of the default i.e. main.
32 os.environ['BRANCH'] = f"v{release}"
33
34 html_static_path += ['_static']
35
36 html_css_files += [
37 'css/custom.css',
38 ]
39
40 html_theme = "pydata_sphinx_theme"
41 html_favicon = "_static/icons/favicon.ico"
42
43 html_theme_options = {
44 "logo": {
45 "image_light": "_static/logo_horizontal_light_theme.png",
46 "image_dark": "_static/logo_horizontal_dark_theme.png",
47 },
48 "github_url": "https://github.com/holoviz/panel",
49 "icon_links": [
50 {
51 "name": "Twitter",
52 "url": "https://twitter.com/Panel_Org",
53 "icon": "fa-brands fa-twitter-square",
54 },
55 {
56 "name": "Discourse",
57 "url": "https://discourse.holoviz.org/c/panel/5",
58 "icon": "fa-brands fa-discourse",
59 },
60 {
61 "name": "Discord",
62 "url": "https://discord.gg/UXdtYyGVQX",
63 "icon": "fa-brands fa-discord",
64 },
65 ],
66 "analytics": {"google_analytics_id": "G-L0C8PGT2LM"},
67 "pygment_light_style": "material",
68 "pygment_dark_style": "material",
69 "header_links_before_dropdown": 5,
70 'secondary_sidebar_items': [
71 "github-stars-button",
72 "panelitelink",
73 "page-toc",
74 ],
75 }
76
77 extensions += [
78 'sphinx.ext.napoleon',
79 'nbsite.gallery',
80 'sphinx_copybutton',
81 'nbsite.pyodide'
82 ]
83 napoleon_numpy_docstring = True
84
85 myst_enable_extensions = ["colon_fence", "deflist"]
86
87 gallery_endpoint = 'panel-gallery-dev' if is_dev else 'panel-gallery'
88 gallery_url = f'https://{gallery_endpoint}.pyviz.demo.anaconda.com'
89 jlite_url = 'https://pyviz-dev.github.io/panelite-dev' if is_dev else 'https://panelite.holoviz.org'
90 pyodide_url = 'https://pyviz-dev.github.io/panel/pyodide' if is_dev else 'https://panel.holoviz.org/pyodide'
91
92 nbsite_gallery_conf = {
93 'github_org': 'holoviz',
94 'github_project': 'panel',
95 'galleries': {
96 'reference': {
97 'title': 'Component Gallery',
98 'sections': [
99 'panes',
100 'layouts',
101 'templates',
102 'global',
103 'indicators',
104 'widgets',
105 ],
106 'titles': {
107 'Vega': 'Altair & Vega',
108 'DeckGL': 'PyDeck & Deck.gl',
109 'ECharts': 'PyEcharts & ECharts',
110 'IPyWidget': 'ipywidgets'
111 },
112 'as_pyodide': True,
113 'normalize_titles': False
114 }
115 },
116 'thumbnail_url': 'https://assets.holoviz.org/panel/thumbnails',
117 'deployment_url': gallery_url,
118 'jupyterlite_url': jlite_url,
119 }
120
121 if panel.__version__ != version and (PANEL_ROOT / 'dist' / 'wheels').is_dir():
122 py_version = panel.__version__.replace("-dirty", "")
123 panel_req = f'./wheels/panel-{py_version}-py3-none-any.whl'
124 bokeh_req = f'./wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'
125 else:
126 panel_req = f'{CDN_DIST}wheels/panel-{PY_VERSION}-py3-none-any.whl'
127 bokeh_req = f'{CDN_DIST}wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'
128
129 def get_requirements():
130 with open('pyodide_dependencies.json') as deps:
131 dependencies = json.load(deps)
132 requirements = {}
133 for src, deps in dependencies.items():
134 if deps is None:
135 continue
136 src = src.replace('.ipynb', '').replace('.md', '')
137 for name, min_version in MINIMUM_VERSIONS.items():
138 if any(name in req for req in deps):
139 deps = [f'{name}>={min_version}' if name in req else req for req in deps]
140 requirements[src] = deps
141 return requirements
142
143 nbsite_pyodide_conf = {
144 'PYODIDE_URL': 'https://cdn.jsdelivr.net/pyodide/v0.23.1/full/pyodide.js',
145 'requirements': [bokeh_req, panel_req, 'pyodide-http'],
146 'requires': get_requirements()
147 }
148
149 templates_path += [
150 '_templates'
151 ]
152
153 html_context.update({
154 "last_release": f"v{release}",
155 "github_user": "holoviz",
156 "github_repo": "panel",
157 "default_mode": "light",
158 "panelite_endpoint": jlite_url,
159 "gallery_url": gallery_url,
160 "pyodide_url": pyodide_url
161 })
162
163 nbbuild_patterns_to_take_along = ["simple.html", "*.json", "json_*"]
164
165 # Override the Sphinx default title that appends `documentation`
166 html_title = f'{project} v{version}'
167
168
169 # Patching GridItemCardDirective to be able to substitute the domain name
170 # in the link option.
171 from sphinx_design.cards import CardDirective
172 from sphinx_design.grids import GridItemCardDirective
173
174 orig_grid_run = GridItemCardDirective.run
175
176 def patched_grid_run(self):
177 app = self.state.document.settings.env.app
178 existing_link = self.options.get('link')
179 domain = getattr(app.config, 'grid_item_link_domain', None)
180 if self.has_content:
181 self.content.replace('|gallery-endpoint|', domain)
182 if existing_link and domain:
183 new_link = existing_link.replace('|gallery-endpoint|', domain)
184 self.options['link'] = new_link
185 return list(orig_grid_run(self))
186
187 GridItemCardDirective.run = patched_grid_run
188
189 orig_card_run = CardDirective.run
190
191 def patched_card_run(self):
192 app = self.state.document.settings.env.app
193 existing_link = self.options.get('link')
194 domain = getattr(app.config, 'grid_item_link_domain', None)
195 if existing_link and domain:
196 new_link = existing_link.replace('|gallery-endpoint|', domain)
197 self.options['link'] = new_link
198 return orig_card_run(self)
199
200 CardDirective.run = patched_card_run
201
202 def setup(app) -> None:
203 try:
204 from nbsite.paramdoc import param_formatter, param_skip
205 app.connect('autodoc-process-docstring', param_formatter)
206 app.connect('autodoc-skip-member', param_skip)
207 except ImportError:
208 print('no param_formatter (no param?)')
209
210 nbbuild.setup(app)
211 app.add_config_value('grid_item_link_domain', '', 'html')
212
213 grid_item_link_domain = gallery_endpoint
214
[end of doc/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -199,6 +199,19 @@
CardDirective.run = patched_card_run
+def update_versions(app, docname, source):
+ # Inspired by: https://stackoverflow.com/questions/8821511
+ version_replace = {
+ "{{PANEL_VERSION}}" : PY_VERSION,
+ "{{BOKEH_VERSION}}" : BOKEH_VERSION,
+ "{{PYSCRIPT_VERSION}}" : "2022.12.1",
+ "{{PYODIDE_VERSION}}" : "0.23.4",
+ }
+
+ for old, new in version_replace.items():
+ source[0] = source[0].replace(old, new)
+
+
def setup(app) -> None:
try:
from nbsite.paramdoc import param_formatter, param_skip
@@ -207,6 +220,7 @@
except ImportError:
print('no param_formatter (no param?)')
+ app.connect('source-read', update_versions)
nbbuild.setup(app)
app.add_config_value('grid_item_link_domain', '', 'html')
|
{"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -199,6 +199,19 @@\n \n CardDirective.run = patched_card_run\n \n+def update_versions(app, docname, source):\n+ # Inspired by: https://stackoverflow.com/questions/8821511\n+ version_replace = {\n+ \"{{PANEL_VERSION}}\" : PY_VERSION,\n+ \"{{BOKEH_VERSION}}\" : BOKEH_VERSION,\n+ \"{{PYSCRIPT_VERSION}}\" : \"2022.12.1\",\n+ \"{{PYODIDE_VERSION}}\" : \"0.23.4\",\n+ }\n+\n+ for old, new in version_replace.items():\n+ source[0] = source[0].replace(old, new)\n+\n+\n def setup(app) -> None:\n try:\n from nbsite.paramdoc import param_formatter, param_skip\n@@ -207,6 +220,7 @@\n except ImportError:\n print('no param_formatter (no param?)')\n \n+ app.connect('source-read', update_versions)\n nbbuild.setup(app)\n app.add_config_value('grid_item_link_domain', '', 'html')\n", "issue": "Unify JupyterLite Install Instructions\n### Description\r\n\r\nInstructions on installing Panel in a (Pyodide-based) JupyterLite environment are presently both outdated and broken.\r\n\r\nThere are two different outdated scripts (for Panel versions <1.0.0) at:\r\n\r\n- [ ] [Setting up JupyterLite](https://panel.holoviz.org/how_to/wasm/jupyterlite.html#optimized-wheels-optional)\r\n- [ ] [Installing Panel in the browser](https://panel.holoviz.org/how_to/wasm/standalone.html#pyodide)\r\n\r\nIf I try to install those into a JupyterLite Pyodide environment, I get:\r\n\r\n```\r\nawait micropip.install(\"https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl\", keep_going=True)\r\n```\r\n\r\n```\r\nValueError: Can't fetch wheel from 'https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl'.\r\nOne common reason for this is when the server blocks Cross-Origin Resource Sharing (CORS).\r\nCheck if the server is sending the correct 'Access-Control-Allow-Origin' header.\r\n```\r\n\r\nOn the other hand, if I try to install the Bokeh and Panel `py3-none-any` wheels directly from pip, I get an error related to python packages that have not yet been compiled for WASM:\r\n\r\n```\r\nmicropip.install(\"https://files.pythonhosted.org/packages/56/98/da78cec88a7c47b761c9b3a18677b5508ef17417184396b3d1361fc811f1/bokeh-3.2.0-py3-none-any.whl\", keep_going=True)\r\n```\r\n\r\n```\r\nFile /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)\r\n 578 if transaction.failed:\r\n 579 failed_requirements = \", \".join([f\"'{req}'\" for req in transaction.failed])\r\n--> 580 raise ValueError(\r\n 581 f\"Can't find a pure Python 3 wheel for: {failed_requirements}\\n\"\r\n 582 f\"See: {FAQ_URLS['cant_find_wheel']}\\n\"\r\n 583 )\r\n 585 wheel_promises = []\r\n 586 # Install built-in packages\r\n\r\nValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'\r\nSee: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel\r\n```\r\n\r\n```\r\nmicropip.install(\"https://files.pythonhosted.org/packages/90/a3/cc9cfdf1b18e5456a0ebd9370baa0a5d58501b4904fa3b3d1ecccbdbd1a2/panel-1.1.1-py2.py3-none-any.whl\", keep_going=True)\r\n```\r\n\r\n```\r\nFile /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)\r\n 578 if transaction.failed:\r\n 579 failed_requirements = \", \".join([f\"'{req}'\" for req in transaction.failed])\r\n--> 580 raise ValueError(\r\n 581 f\"Can't find a pure Python 3 wheel for: {failed_requirements}\\n\"\r\n 582 f\"See: 
{FAQ_URLS['cant_find_wheel']}\\n\"\r\n 583 )\r\n 585 wheel_promises = []\r\n 586 # Install built-in packages\r\n\r\nValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'\r\nSee: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel\r\n```\r\n\r\n\r\n#### Describe the solution you'd like\r\n\r\nA working, unified script pointing to the latest version of Panel and Bokeh.\r\n\r\n#### Describe alternatives you've considered\r\n\r\nN/A\r\n\r\n#### Additional context\r\n\r\nTry the installation for yourself at https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html\r\n\n", "before_files": [{"content": "import json\nimport os\nimport pathlib\n\nimport param\n\nparam.parameterized.docstring_signature = False\nparam.parameterized.docstring_describe_params = False\n\nfrom nbsite.shared_conf import *\n\nproject = 'Panel'\nauthors = 'Panel contributors'\ncopyright_years['start_year'] = '2019'\ncopyright = copyright_fmt.format(**copyright_years)\ndescription = 'High-level dashboarding for python visualization libraries'\n\nimport panel\n\nfrom panel.io.convert import BOKEH_VERSION, MINIMUM_VERSIONS, PY_VERSION\nfrom panel.io.resources import CDN_DIST\n\nPANEL_ROOT = pathlib.Path(panel.__file__).parent\n\nversion = release = base_version(panel.__version__)\njs_version = json.loads((PANEL_ROOT / 'package.json').read_text())['version']\n\nis_dev = any(ext in version for ext in ('a', 'b', 'rc'))\n\n# For the interactivity warning box created by nbsite to point to the right\n# git tag instead of the default i.e. main.\nos.environ['BRANCH'] = f\"v{release}\"\n\nhtml_static_path += ['_static']\n\nhtml_css_files += [\n 'css/custom.css',\n]\n\nhtml_theme = \"pydata_sphinx_theme\"\nhtml_favicon = \"_static/icons/favicon.ico\"\n\nhtml_theme_options = {\n \"logo\": {\n \"image_light\": \"_static/logo_horizontal_light_theme.png\",\n \"image_dark\": \"_static/logo_horizontal_dark_theme.png\",\n },\n \"github_url\": \"https://github.com/holoviz/panel\",\n \"icon_links\": [\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/Panel_Org\",\n \"icon\": \"fa-brands fa-twitter-square\",\n },\n {\n \"name\": \"Discourse\",\n \"url\": \"https://discourse.holoviz.org/c/panel/5\",\n \"icon\": \"fa-brands fa-discourse\",\n },\n {\n \"name\": \"Discord\",\n \"url\": \"https://discord.gg/UXdtYyGVQX\",\n \"icon\": \"fa-brands fa-discord\",\n },\n ],\n \"analytics\": {\"google_analytics_id\": \"G-L0C8PGT2LM\"},\n \"pygment_light_style\": \"material\",\n \"pygment_dark_style\": \"material\",\n \"header_links_before_dropdown\": 5,\n 'secondary_sidebar_items': [\n \"github-stars-button\",\n \"panelitelink\",\n \"page-toc\",\n ],\n}\n\nextensions += [\n 'sphinx.ext.napoleon',\n 'nbsite.gallery',\n 'sphinx_copybutton',\n 'nbsite.pyodide'\n]\nnapoleon_numpy_docstring = True\n\nmyst_enable_extensions = [\"colon_fence\", \"deflist\"]\n\ngallery_endpoint = 'panel-gallery-dev' if is_dev else 'panel-gallery'\ngallery_url = f'https://{gallery_endpoint}.pyviz.demo.anaconda.com'\njlite_url = 'https://pyviz-dev.github.io/panelite-dev' if is_dev else 'https://panelite.holoviz.org'\npyodide_url = 'https://pyviz-dev.github.io/panel/pyodide' if is_dev else 'https://panel.holoviz.org/pyodide'\n\nnbsite_gallery_conf = {\n 'github_org': 'holoviz',\n 'github_project': 'panel',\n 'galleries': {\n 'reference': {\n 'title': 'Component Gallery',\n 'sections': [\n 'panes',\n 'layouts',\n 'templates',\n 'global',\n 'indicators',\n 'widgets',\n ],\n 'titles': {\n 'Vega': 'Altair & 
Vega',\n 'DeckGL': 'PyDeck & Deck.gl',\n 'ECharts': 'PyEcharts & ECharts',\n 'IPyWidget': 'ipywidgets'\n },\n 'as_pyodide': True,\n 'normalize_titles': False\n }\n },\n 'thumbnail_url': 'https://assets.holoviz.org/panel/thumbnails',\n 'deployment_url': gallery_url,\n 'jupyterlite_url': jlite_url,\n}\n\nif panel.__version__ != version and (PANEL_ROOT / 'dist' / 'wheels').is_dir():\n py_version = panel.__version__.replace(\"-dirty\", \"\")\n panel_req = f'./wheels/panel-{py_version}-py3-none-any.whl'\n bokeh_req = f'./wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'\nelse:\n panel_req = f'{CDN_DIST}wheels/panel-{PY_VERSION}-py3-none-any.whl'\n bokeh_req = f'{CDN_DIST}wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'\n\ndef get_requirements():\n with open('pyodide_dependencies.json') as deps:\n dependencies = json.load(deps)\n requirements = {}\n for src, deps in dependencies.items():\n if deps is None:\n continue\n src = src.replace('.ipynb', '').replace('.md', '')\n for name, min_version in MINIMUM_VERSIONS.items():\n if any(name in req for req in deps):\n deps = [f'{name}>={min_version}' if name in req else req for req in deps]\n requirements[src] = deps\n return requirements\n\nnbsite_pyodide_conf = {\n 'PYODIDE_URL': 'https://cdn.jsdelivr.net/pyodide/v0.23.1/full/pyodide.js',\n 'requirements': [bokeh_req, panel_req, 'pyodide-http'],\n 'requires': get_requirements()\n}\n\ntemplates_path += [\n '_templates'\n]\n\nhtml_context.update({\n \"last_release\": f\"v{release}\",\n \"github_user\": \"holoviz\",\n \"github_repo\": \"panel\",\n \"default_mode\": \"light\",\n \"panelite_endpoint\": jlite_url,\n \"gallery_url\": gallery_url,\n \"pyodide_url\": pyodide_url\n})\n\nnbbuild_patterns_to_take_along = [\"simple.html\", \"*.json\", \"json_*\"]\n\n# Override the Sphinx default title that appends `documentation`\nhtml_title = f'{project} v{version}'\n\n\n# Patching GridItemCardDirective to be able to substitute the domain name\n# in the link option.\nfrom sphinx_design.cards import CardDirective\nfrom sphinx_design.grids import GridItemCardDirective\n\norig_grid_run = GridItemCardDirective.run\n\ndef patched_grid_run(self):\n app = self.state.document.settings.env.app\n existing_link = self.options.get('link')\n domain = getattr(app.config, 'grid_item_link_domain', None)\n if self.has_content:\n self.content.replace('|gallery-endpoint|', domain)\n if existing_link and domain:\n new_link = existing_link.replace('|gallery-endpoint|', domain)\n self.options['link'] = new_link\n return list(orig_grid_run(self))\n\nGridItemCardDirective.run = patched_grid_run\n\norig_card_run = CardDirective.run\n\ndef patched_card_run(self):\n app = self.state.document.settings.env.app\n existing_link = self.options.get('link')\n domain = getattr(app.config, 'grid_item_link_domain', None)\n if existing_link and domain:\n new_link = existing_link.replace('|gallery-endpoint|', domain)\n self.options['link'] = new_link\n return orig_card_run(self)\n\nCardDirective.run = patched_card_run\n\ndef setup(app) -> None:\n try:\n from nbsite.paramdoc import param_formatter, param_skip\n app.connect('autodoc-process-docstring', param_formatter)\n app.connect('autodoc-skip-member', param_skip)\n except ImportError:\n print('no param_formatter (no param?)')\n\n nbbuild.setup(app)\n app.add_config_value('grid_item_link_domain', '', 'html')\n\ngrid_item_link_domain = gallery_endpoint\n", "path": "doc/conf.py"}]}
| 3,776 | 264 |
gh_patches_debug_38033
|
rasdani/github-patches
|
git_diff
|
google__clusterfuzz-1524
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support authentication with Cloud IAP
</issue>
<code>
[start of src/appengine/libs/auth.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Authentication helpers."""
15
16 import collections
17
18 from firebase_admin import auth
19 from google.cloud import ndb
20 import webapp2
21
22 from base import utils
23 from config import local_config
24 from datastore import data_types
25 from metrics import logs
26 from system import environment
27
28 User = collections.namedtuple('User', ['email'])
29
30
31 class AuthError(Exception):
32 """Auth error."""
33
34
35 def auth_domain():
36 """Get the auth domain."""
37 domain = local_config.ProjectConfig().get('firebase.auth_domain')
38 if domain:
39 return domain
40
41 return utils.get_application_id() + '.firebaseapp.com'
42
43
44 def is_current_user_admin():
45 """Returns whether or not the current logged in user is an admin."""
46 if environment.is_local_development():
47 return True
48
49 user = get_current_user()
50 if not user:
51 return False
52
53 key = ndb.Key(data_types.Admin, user.email)
54 return bool(key.get())
55
56
57 def get_current_user():
58 """Get the current logged in user, or None."""
59 if environment.is_local_development():
60 return User('user@localhost')
61
62 loas_user = environment.get_value('LOAS_PEER_USERNAME')
63 if loas_user:
64 return User(loas_user + '@google.com')
65
66 current_request = get_current_request()
67 oauth_email = getattr(current_request, '_oauth_email', None)
68 if oauth_email:
69 return User(oauth_email)
70
71 cached_email = getattr(current_request, '_cached_email', None)
72 if cached_email:
73 return User(cached_email)
74
75 session_cookie = get_session_cookie()
76 if not session_cookie:
77 return None
78
79 try:
80 decoded_claims = decode_claims(get_session_cookie())
81 except AuthError:
82 logs.log_warn('Invalid session cookie.')
83 return None
84
85 if not decoded_claims.get('email_verified'):
86 return None
87
88 email = decoded_claims.get('email')
89 if not email:
90 return None
91
92 # We cache the email for this request if we've validated the user to make
93 # subsequent get_current_user() calls fast.
94 setattr(current_request, '_cached_email', email)
95 return User(email)
96
97
98 def create_session_cookie(id_token, expires_in):
99 """Create a new session cookie."""
100 try:
101 return auth.create_session_cookie(id_token, expires_in=expires_in)
102 except auth.AuthError:
103 raise AuthError('Failed to create session cookie.')
104
105
106 def get_current_request():
107 """Get the current request."""
108 return webapp2.get_request()
109
110
111 def get_session_cookie():
112 """Get the current session cookie."""
113 return get_current_request().cookies.get('session')
114
115
116 def revoke_session_cookie(session_cookie):
117 """Revoke a session cookie."""
118 decoded_claims = decode_claims(session_cookie)
119 auth.revoke_refresh_tokens(decoded_claims['sub'])
120
121
122 def decode_claims(session_cookie):
123 """Decode the claims for the current session cookie."""
124 try:
125 return auth.verify_session_cookie(session_cookie, check_revoked=True)
126 except (ValueError, auth.AuthError):
127 raise AuthError('Invalid session cookie.')
128
[end of src/appengine/libs/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/appengine/libs/auth.py b/src/appengine/libs/auth.py
--- a/src/appengine/libs/auth.py
+++ b/src/appengine/libs/auth.py
@@ -13,12 +13,17 @@
# limitations under the License.
"""Authentication helpers."""
+from builtins import str
import collections
+import jwt
from firebase_admin import auth
from google.cloud import ndb
+from googleapiclient.discovery import build
+import requests
import webapp2
+from base import memoize
from base import utils
from config import local_config
from datastore import data_types
@@ -54,6 +59,68 @@
return bool(key.get())
[email protected](memoize.FifoInMemory(1))
+def _project_number_from_id(project_id):
+ """Get the project number from project ID."""
+ resource_manager = build('cloudresourcemanager', 'v1')
+ result = resource_manager.projects().get(projectId=project_id).execute()
+ if 'projectNumber' not in result:
+ raise AuthError('Failed to get project number.')
+
+ return result['projectNumber']
+
+
[email protected](memoize.FifoInMemory(1))
+def _get_iap_key(key_id):
+ """Retrieves a public key from the list published by Identity-Aware Proxy,
+ re-fetching the key file if necessary.
+ """
+ resp = requests.get('https://www.gstatic.com/iap/verify/public_key')
+ if resp.status_code != 200:
+ raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(
+ resp.status_code, resp.headers, resp.text))
+
+ result = resp.json()
+ key = result.get(key_id)
+ if not key:
+ raise AuthError('Key {!r} not found'.format(key_id))
+
+ return key
+
+
+def _validate_iap_jwt(iap_jwt):
+ """Validate JWT assertion."""
+ project_id = utils.get_application_id()
+ expected_audience = '/projects/{}/apps/{}'.format(
+ _project_number_from_id(project_id), project_id)
+
+ try:
+ key_id = jwt.get_unverified_header(iap_jwt).get('kid')
+ if not key_id:
+ raise AuthError('No key ID.')
+
+ key = _get_iap_key(key_id)
+ decoded_jwt = jwt.decode(
+ iap_jwt,
+ key,
+ algorithms=['ES256'],
+ issuer='https://cloud.google.com/iap',
+ audience=expected_audience)
+ return decoded_jwt['email']
+ except (jwt.exceptions.InvalidTokenError,
+ requests.exceptions.RequestException) as e:
+ raise AuthError('JWT assertion decode error: ' + str(e))
+
+
+def get_iap_email(current_request):
+ """Get Cloud IAP email."""
+ jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')
+ if not jwt_assertion:
+ return None
+
+ return _validate_iap_jwt(jwt_assertion)
+
+
def get_current_user():
"""Get the current logged in user, or None."""
if environment.is_local_development():
@@ -64,6 +131,10 @@
return User(loas_user + '@google.com')
current_request = get_current_request()
+ iap_email = get_iap_email(current_request)
+ if iap_email:
+ return User(iap_email)
+
oauth_email = getattr(current_request, '_oauth_email', None)
if oauth_email:
return User(oauth_email)
|
{"golden_diff": "diff --git a/src/appengine/libs/auth.py b/src/appengine/libs/auth.py\n--- a/src/appengine/libs/auth.py\n+++ b/src/appengine/libs/auth.py\n@@ -13,12 +13,17 @@\n # limitations under the License.\n \"\"\"Authentication helpers.\"\"\"\n \n+from builtins import str\n import collections\n+import jwt\n \n from firebase_admin import auth\n from google.cloud import ndb\n+from googleapiclient.discovery import build\n+import requests\n import webapp2\n \n+from base import memoize\n from base import utils\n from config import local_config\n from datastore import data_types\n@@ -54,6 +59,68 @@\n return bool(key.get())\n \n \[email protected](memoize.FifoInMemory(1))\n+def _project_number_from_id(project_id):\n+ \"\"\"Get the project number from project ID.\"\"\"\n+ resource_manager = build('cloudresourcemanager', 'v1')\n+ result = resource_manager.projects().get(projectId=project_id).execute()\n+ if 'projectNumber' not in result:\n+ raise AuthError('Failed to get project number.')\n+\n+ return result['projectNumber']\n+\n+\[email protected](memoize.FifoInMemory(1))\n+def _get_iap_key(key_id):\n+ \"\"\"Retrieves a public key from the list published by Identity-Aware Proxy,\n+ re-fetching the key file if necessary.\n+ \"\"\"\n+ resp = requests.get('https://www.gstatic.com/iap/verify/public_key')\n+ if resp.status_code != 200:\n+ raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(\n+ resp.status_code, resp.headers, resp.text))\n+\n+ result = resp.json()\n+ key = result.get(key_id)\n+ if not key:\n+ raise AuthError('Key {!r} not found'.format(key_id))\n+\n+ return key\n+\n+\n+def _validate_iap_jwt(iap_jwt):\n+ \"\"\"Validate JWT assertion.\"\"\"\n+ project_id = utils.get_application_id()\n+ expected_audience = '/projects/{}/apps/{}'.format(\n+ _project_number_from_id(project_id), project_id)\n+\n+ try:\n+ key_id = jwt.get_unverified_header(iap_jwt).get('kid')\n+ if not key_id:\n+ raise AuthError('No key ID.')\n+\n+ key = _get_iap_key(key_id)\n+ decoded_jwt = jwt.decode(\n+ iap_jwt,\n+ key,\n+ algorithms=['ES256'],\n+ issuer='https://cloud.google.com/iap',\n+ audience=expected_audience)\n+ return decoded_jwt['email']\n+ except (jwt.exceptions.InvalidTokenError,\n+ requests.exceptions.RequestException) as e:\n+ raise AuthError('JWT assertion decode error: ' + str(e))\n+\n+\n+def get_iap_email(current_request):\n+ \"\"\"Get Cloud IAP email.\"\"\"\n+ jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')\n+ if not jwt_assertion:\n+ return None\n+\n+ return _validate_iap_jwt(jwt_assertion)\n+\n+\n def get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n@@ -64,6 +131,10 @@\n return User(loas_user + '@google.com')\n \n current_request = get_current_request()\n+ iap_email = get_iap_email(current_request)\n+ if iap_email:\n+ return User(iap_email)\n+\n oauth_email = getattr(current_request, '_oauth_email', None)\n if oauth_email:\n return User(oauth_email)\n", "issue": "Support authentication with Cloud IAP\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Authentication helpers.\"\"\"\n\nimport collections\n\nfrom firebase_admin import auth\nfrom google.cloud import ndb\nimport webapp2\n\nfrom base import utils\nfrom config import local_config\nfrom datastore import data_types\nfrom metrics import logs\nfrom system import environment\n\nUser = collections.namedtuple('User', ['email'])\n\n\nclass AuthError(Exception):\n \"\"\"Auth error.\"\"\"\n\n\ndef auth_domain():\n \"\"\"Get the auth domain.\"\"\"\n domain = local_config.ProjectConfig().get('firebase.auth_domain')\n if domain:\n return domain\n\n return utils.get_application_id() + '.firebaseapp.com'\n\n\ndef is_current_user_admin():\n \"\"\"Returns whether or not the current logged in user is an admin.\"\"\"\n if environment.is_local_development():\n return True\n\n user = get_current_user()\n if not user:\n return False\n\n key = ndb.Key(data_types.Admin, user.email)\n return bool(key.get())\n\n\ndef get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n return User('user@localhost')\n\n loas_user = environment.get_value('LOAS_PEER_USERNAME')\n if loas_user:\n return User(loas_user + '@google.com')\n\n current_request = get_current_request()\n oauth_email = getattr(current_request, '_oauth_email', None)\n if oauth_email:\n return User(oauth_email)\n\n cached_email = getattr(current_request, '_cached_email', None)\n if cached_email:\n return User(cached_email)\n\n session_cookie = get_session_cookie()\n if not session_cookie:\n return None\n\n try:\n decoded_claims = decode_claims(get_session_cookie())\n except AuthError:\n logs.log_warn('Invalid session cookie.')\n return None\n\n if not decoded_claims.get('email_verified'):\n return None\n\n email = decoded_claims.get('email')\n if not email:\n return None\n\n # We cache the email for this request if we've validated the user to make\n # subsequent get_current_user() calls fast.\n setattr(current_request, '_cached_email', email)\n return User(email)\n\n\ndef create_session_cookie(id_token, expires_in):\n \"\"\"Create a new session cookie.\"\"\"\n try:\n return auth.create_session_cookie(id_token, expires_in=expires_in)\n except auth.AuthError:\n raise AuthError('Failed to create session cookie.')\n\n\ndef get_current_request():\n \"\"\"Get the current request.\"\"\"\n return webapp2.get_request()\n\n\ndef get_session_cookie():\n \"\"\"Get the current session cookie.\"\"\"\n return get_current_request().cookies.get('session')\n\n\ndef revoke_session_cookie(session_cookie):\n \"\"\"Revoke a session cookie.\"\"\"\n decoded_claims = decode_claims(session_cookie)\n auth.revoke_refresh_tokens(decoded_claims['sub'])\n\n\ndef decode_claims(session_cookie):\n \"\"\"Decode the claims for the current session cookie.\"\"\"\n try:\n return auth.verify_session_cookie(session_cookie, check_revoked=True)\n except (ValueError, auth.AuthError):\n raise AuthError('Invalid session cookie.')\n", "path": "src/appengine/libs/auth.py"}]}
| 1,612 | 810 |
gh_patches_debug_15672
|
rasdani/github-patches
|
git_diff
|
spack__spack-20794
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to concretize with Clingo when libyogrt is part of dependency tree
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
Any of the above result in the same error:
```console
$ spack spec -I libyogrt
$ spack spec -I scr # SCR depends on libyogrt
$ spack spec -I axom # axom depends on SCR
$ spack spec -I macsio # macsio depends on SCR
...
```
### Error Message
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
Concretized
--------------------------------
==> Error: invalid values for variant "scheduler" in package "libyogrt": ['lsf']
```
I imagine this is because https://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L39
references an invalid value of the `scheduler` variant:
https://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L36
Adding `lsf` to the possible values for `scheduler` fixes the issue, but I am not sure that this fix is correct.
### Information on your system
* **Spack:** 0.16.0
* **Python:** 3.7.2
* **Platform:** linux-rhel7-power9le
* **Concretizer:** clingo
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have searched the issues of this repo and believe this is not a duplicate
- [x] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->
</issue>
<code>
[start of var/spack/repos/builtin/packages/libyogrt/package.py]
1 # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Libyogrt(AutotoolsPackage):
10 """Your One Get Remaining Time Library."""
11
12 homepage = "https://github.com/LLNL/libyogrt"
13 url = "https://github.com/LLNL/libyogrt/releases/download/1.21/libyogrt-1.21.tar.gz"
14
15 version('1.24', sha256='36695030e72b24b1f22bfcfe42bfd1d3c87f9c0eea5e94ce0120782581ea522f')
16 version('1.23', sha256='c95e7a6be29c0d1ac1b673b0ba1d4e5781981722f93d0da99ae62ff3b5f35b5f')
17 version('1.22', sha256='38e7d1ea3fa030f0169197aa96cde9f01caa595a590764ef1cb2ae07379cb711')
18 version('1.21', sha256='5f8f0942d35ee4e418273e478e632210b3fa648dcb6a2e6a92c6ba4213cdc362')
19 version('1.20-7', sha256='735e9d6fa572e239ccc73e11c84b4583338b24df0fa91c48e8bc038d882003f7')
20 version('1.20-6', sha256='ba5a2e202f995cf7ae3bf87b451943733e760ede02ca172f712cbf2eea693222')
21 version('1.20-5', sha256='1e41bc656abffb121145264bc898421c3f355d3be35f1711b7b5e3ffe7effdd9')
22 version('1.20-4', sha256='0858a729068b272d4047d79f6a5187cdbd427bdfec64db4e143524b4789a06c5')
23 version('1.20-3', sha256='61a8f28f452aef0e09d700dbaaffd91ae3855f7ac221c7ebe478a028df635e31')
24 version('1.20-2', sha256='bf22a82ab3bfede780be3fb6c132cc354234f8d57d3cccd58fe594f074ed7f95')
25
26 # libyogrt supports the following schedulers:
27 # lcrm, lsf, moab, slurm, AIX+slurm
28
29 # however, only slurm exists in spack
30 # libyogrt's build system is smart enough to detect the system scheduler
31 # the slurm option here connects to a spack-installed slurm
32 # if/when other schedulers have spack packages, they can be added
33
34 variant('scheduler', default='system',
35 description="Select scheduler integration",
36 values=['system', 'slurm'], multi=False)
37 depends_on('slurm', when='scheduler=slurm')
38
39 conflicts('scheduler=lsf', when='@:1.22')
40
41 variant('static', default='False',
42 description="build static library")
43
44 def url_for_version(self, version):
45 if version < Version(1.21):
46 return "https://github.com/LLNL/libyogrt/archive/%s.tar.gz" % version
47 else:
48 return "https://github.com/LLNL/libyogrt/releases/download/{0}/libyogrt-{0}.tar.gz".format(version)
49
50 def configure_args(self):
51 args = []
52
53 sched = self.spec.variants['scheduler'].value
54 if sched != "system":
55 args.append('--with-%s=%s' % (sched, self.spec[sched].prefix))
56
57 if '+static' in self.spec:
58 args.append('--enable-static=yes')
59
60 return args
61
[end of var/spack/repos/builtin/packages/libyogrt/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/libyogrt/package.py b/var/spack/repos/builtin/packages/libyogrt/package.py
--- a/var/spack/repos/builtin/packages/libyogrt/package.py
+++ b/var/spack/repos/builtin/packages/libyogrt/package.py
@@ -34,13 +34,11 @@
variant('scheduler', default='system',
description="Select scheduler integration",
values=['system', 'slurm'], multi=False)
- depends_on('slurm', when='scheduler=slurm')
-
- conflicts('scheduler=lsf', when='@:1.22')
-
variant('static', default='False',
description="build static library")
+ depends_on('slurm', when='scheduler=slurm')
+
def url_for_version(self, version):
if version < Version(1.21):
return "https://github.com/LLNL/libyogrt/archive/%s.tar.gz" % version
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/libyogrt/package.py b/var/spack/repos/builtin/packages/libyogrt/package.py\n--- a/var/spack/repos/builtin/packages/libyogrt/package.py\n+++ b/var/spack/repos/builtin/packages/libyogrt/package.py\n@@ -34,13 +34,11 @@\n variant('scheduler', default='system',\n description=\"Select scheduler integration\",\n values=['system', 'slurm'], multi=False)\n- depends_on('slurm', when='scheduler=slurm')\n-\n- conflicts('scheduler=lsf', when='@:1.22')\n-\n variant('static', default='False',\n description=\"build static library\")\n \n+ depends_on('slurm', when='scheduler=slurm')\n+\n def url_for_version(self, version):\n if version < Version(1.21):\n return \"https://github.com/LLNL/libyogrt/archive/%s.tar.gz\" % version\n", "issue": "Unable to concretize with Clingo when libyogrt is part of dependency tree\n<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.\r\nExample: \"I ran `spack find` to list all the installed packages and ...\" -->\r\n\r\n### Steps to reproduce the issue\r\nAny of the above result in the same error:\r\n```console\r\n$ spack spec -I libyogrt\r\n$ spack spec -I scr # SCR depends on libyogrt\r\n$ spack spec -I axom # axom depends on SCR\r\n$ spack spec -I macsio # macsio depends on SCR\r\n...\r\n```\r\n\r\n### Error Message\r\n\r\n<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->\r\n```console\r\nConcretized\r\n--------------------------------\r\n==> Error: invalid values for variant \"scheduler\" in package \"libyogrt\": ['lsf']\r\n```\r\n\r\nI imagine this is because https://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L39\r\n\r\nreferences an invalid value of the 'scheduler` variant:\r\nhttps://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L36\r\n\r\nAdding `lsf` to the possible values for `scheduler` fixes the issue, but I am not sure that this fix is correct.\r\n\r\n### Information on your system\r\n\r\n* **Spack:** 0.16.0\r\n* **Python:** 3.7.2\r\n* **Platform:** linux-rhel7-power9le\r\n* **Concretizer:** clingo\r\n\r\n\r\n### Additional information\r\n\r\n<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->\r\n- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform\r\n- [x] I have searched the issues of this repo and believe this is not a duplicate\r\n- [x] I have run the failing commands in debug mode and reported the output\r\n\r\n<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!\r\n\r\nIf you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.\r\n\r\nOther than that, thanks for taking the time to contribute to Spack! -->\r\n\n", "before_files": [{"content": "# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Libyogrt(AutotoolsPackage):\n \"\"\"Your One Get Remaining Time Library.\"\"\"\n\n homepage = \"https://github.com/LLNL/libyogrt\"\n url = \"https://github.com/LLNL/libyogrt/releases/download/1.21/libyogrt-1.21.tar.gz\"\n\n version('1.24', sha256='36695030e72b24b1f22bfcfe42bfd1d3c87f9c0eea5e94ce0120782581ea522f')\n version('1.23', sha256='c95e7a6be29c0d1ac1b673b0ba1d4e5781981722f93d0da99ae62ff3b5f35b5f')\n version('1.22', sha256='38e7d1ea3fa030f0169197aa96cde9f01caa595a590764ef1cb2ae07379cb711')\n version('1.21', sha256='5f8f0942d35ee4e418273e478e632210b3fa648dcb6a2e6a92c6ba4213cdc362')\n version('1.20-7', sha256='735e9d6fa572e239ccc73e11c84b4583338b24df0fa91c48e8bc038d882003f7')\n version('1.20-6', sha256='ba5a2e202f995cf7ae3bf87b451943733e760ede02ca172f712cbf2eea693222')\n version('1.20-5', sha256='1e41bc656abffb121145264bc898421c3f355d3be35f1711b7b5e3ffe7effdd9')\n version('1.20-4', sha256='0858a729068b272d4047d79f6a5187cdbd427bdfec64db4e143524b4789a06c5')\n version('1.20-3', sha256='61a8f28f452aef0e09d700dbaaffd91ae3855f7ac221c7ebe478a028df635e31')\n version('1.20-2', sha256='bf22a82ab3bfede780be3fb6c132cc354234f8d57d3cccd58fe594f074ed7f95')\n\n # libyogrt supports the following schedulers:\n # lcrm, lsf, moab, slurm, AIX+slurm\n\n # however, only slurm exists in spack\n # libyogrt's build system is smart enough to detect the system scheduler\n # the slurm option here connects to a spack-installed slurm\n # if/when other schedulers have spack packages, they can be added\n\n variant('scheduler', default='system',\n description=\"Select scheduler integration\",\n values=['system', 'slurm'], multi=False)\n depends_on('slurm', when='scheduler=slurm')\n\n conflicts('scheduler=lsf', when='@:1.22')\n\n variant('static', default='False',\n description=\"build static library\")\n\n def url_for_version(self, version):\n if version < Version(1.21):\n return \"https://github.com/LLNL/libyogrt/archive/%s.tar.gz\" % version\n else:\n return \"https://github.com/LLNL/libyogrt/releases/download/{0}/libyogrt-{0}.tar.gz\".format(version)\n\n def configure_args(self):\n args = []\n\n sched = self.spec.variants['scheduler'].value\n if sched != \"system\":\n args.append('--with-%s=%s' % (sched, self.spec[sched].prefix))\n\n if '+static' in self.spec:\n args.append('--enable-static=yes')\n\n return args\n", "path": "var/spack/repos/builtin/packages/libyogrt/package.py"}]}
| 2,506 | 217 |
gh_patches_debug_31328
|
rasdani/github-patches
|
git_diff
|
ResonantGeoData__ResonantGeoData-466
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
STAC serializer output Band info is incorrect
Figure out where this is coming from:
```
'assets': {
'image-15030': {
'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',
'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',
'eo:bands': [{'name': 'B1'}],
'roles': ['data'],
},
'image-15041': {
'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',
'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',
'eo:bands': [{'name': 'B1'}],
'roles': ['data'],
},
```
Note that both have `[{'name': 'B1'}]` which is incorrect.
First we need to make sure the `BandMeta` fields are correct, then see where this breaks in the serializer.
</issue>
<code>
[start of django-rgd-imagery/rgd_imagery/serializers/stac.py]
1 import json
2
3 import dateutil.parser
4 from django.contrib.gis.geos import Polygon
5 from django.db import transaction
6 from pyproj import CRS
7 import pystac
8 from rest_framework import serializers
9 from rgd.models import ChecksumFile, FileSourceType
10 from rgd.utility import get_or_create_no_commit
11
12 from .. import models
13
14
15 class STACRasterSerializer(serializers.BaseSerializer):
16 def to_internal_value(self, data):
17 # item = pystac.Item.from_dict(data)
18 # errors = item.validate()
19 # if errors:
20 # raise serializers.ValidationError(errors)
21 return data
22
23 def to_representation(self, instance: models.RasterMeta) -> dict:
24 item = pystac.Item(
25 id=instance.pk,
26 geometry=json.loads(instance.footprint.json),
27 bbox=instance.extent,
28 datetime=(instance.acquisition_date or instance.modified or instance.created),
29 properties=dict(
30 datetime=str(instance.acquisition_date),
31 platform=instance.instrumentation,
32 ),
33 )
34 # 'proj' extension
35 item.ext.enable('projection')
36 item.ext.projection.apply(
37 epsg=CRS.from_proj4(instance.crs).to_epsg(),
38 transform=instance.transform,
39 )
40 # 'eo' extension
41 item.ext.enable('eo')
42 item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])
43 # Add assets
44 for image in instance.parent_raster.image_set.images.all():
45 if image.file.type != FileSourceType.URL:
46 # TODO: we need fix this
47 raise ValueError('Files must point to valid URL resources, not internal storage.')
48 asset = pystac.Asset(
49 href=image.file.get_url(),
50 title=image.file.name,
51 roles=[
52 'data',
53 ],
54 )
55 item.ext.eo.set_bands(
56 bands=[
57 pystac.extensions.eo.Band.create(
58 name=f'B{bandmeta.band_number}',
59 description=bandmeta.description,
60 )
61 for bandmeta in image.bandmeta_set.all()
62 ],
63 asset=asset,
64 )
65 item.add_asset(f'image-{image.pk}', asset)
66
67 for ancillary_file in instance.parent_raster.ancillary_files.all():
68 asset = pystac.Asset(
69 href=ancillary_file.get_url(),
70 title=ancillary_file.name,
71 roles=[
72 'metadata',
73 ],
74 )
75 item.add_asset(f'ancillary-{ancillary_file.pk}', asset)
76
77 return item.to_dict()
78
79 @transaction.atomic
80 def create(self, data):
81 item = pystac.Item.from_dict(data)
82 image_ids, ancillary = [], []
83 single_asset = False
84 if len(item.assets) == 1:
85 single_asset = True
86 for name in item.assets:
87 asset = item.assets[name]
88 checksum_file, _ = ChecksumFile.objects.get_or_create(
89 type=FileSourceType.URL,
90 url=asset.href,
91 )
92 if single_asset or (asset.roles and 'data' in asset.roles):
93 image, _ = models.Image.objects.get_or_create(file=checksum_file)
94 image_ids.append(image.pk)
95 else:
96 ancillary.append(checksum_file)
97
98 image_set, image_set_created = models.get_or_create_image_set(
99 image_ids, defaults=dict(name=item.id)
100 )
101
102 raster, raster_created = get_or_create_no_commit(
103 models.Raster, image_set=image_set, defaults=dict(name=item.id)
104 )
105 raster.skip_signal = True
106 raster.save()
107 [raster.ancillary_files.add(af) for af in ancillary]
108 raster.save()
109
110 outline = Polygon(
111 (
112 [item.bbox[0], item.bbox[1]],
113 [item.bbox[0], item.bbox[3]],
114 [item.bbox[2], item.bbox[3]],
115 [item.bbox[2], item.bbox[1]],
116 [item.bbox[0], item.bbox[1]],
117 )
118 )
119
120 raster_meta = dict(
121 footprint=json.dumps(item.geometry),
122 crs=f'+init=epsg:{item.ext.projection.epsg}',
123 cloud_cover=item.ext.eo.cloud_cover,
124 transform=item.ext.projection.transform,
125 extent=item.bbox,
126 origin=(item.bbox[0], item.bbox[1]),
127 resolution=(0, 0), # TODO: fix
128 outline=outline,
129 acquisition_date=dateutil.parser.isoparser().isoparse(item.properties['datetime']),
130 instrumentation=item.properties['platform'],
131 )
132
133 if raster_created:
134 instance = models.RasterMeta(**raster_meta)
135 instance.parent_raster = raster
136 else:
137 models.RasterMeta.objects.filter(parent_raster=raster).update(**raster_meta)
138 instance = models.RasterMeta.objects.get(parent_raster=raster)
139 instance.save()
140
141 return instance
142
[end of django-rgd-imagery/rgd_imagery/serializers/stac.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/django-rgd-imagery/rgd_imagery/serializers/stac.py b/django-rgd-imagery/rgd_imagery/serializers/stac.py
--- a/django-rgd-imagery/rgd_imagery/serializers/stac.py
+++ b/django-rgd-imagery/rgd_imagery/serializers/stac.py
@@ -41,6 +41,7 @@
item.ext.enable('eo')
item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])
# Add assets
+ band_num = 0
for image in instance.parent_raster.image_set.images.all():
if image.file.type != FileSourceType.URL:
# TODO: we need fix this
@@ -52,17 +53,27 @@
'data',
],
)
- item.ext.eo.set_bands(
- bands=[
+ if image.imagemeta.number_of_bands == 1:
+ bands = [
+ pystac.extensions.eo.Band.create(
+ name=image.file.name,
+ description=image.bandmeta_set.first().description,
+ )
+ ]
+ else:
+ bands = [
pystac.extensions.eo.Band.create(
- name=f'B{bandmeta.band_number}',
+ name=f'B{bandmeta.band_number + band_num}',
description=bandmeta.description,
)
for bandmeta in image.bandmeta_set.all()
- ],
+ ]
+ item.ext.eo.set_bands(
+ bands=bands,
asset=asset,
)
item.add_asset(f'image-{image.pk}', asset)
+ band_num += image.imagemeta.number_of_bands
for ancillary_file in instance.parent_raster.ancillary_files.all():
asset = pystac.Asset(
|
{"golden_diff": "diff --git a/django-rgd-imagery/rgd_imagery/serializers/stac.py b/django-rgd-imagery/rgd_imagery/serializers/stac.py\n--- a/django-rgd-imagery/rgd_imagery/serializers/stac.py\n+++ b/django-rgd-imagery/rgd_imagery/serializers/stac.py\n@@ -41,6 +41,7 @@\n item.ext.enable('eo')\n item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])\n # Add assets\n+ band_num = 0\n for image in instance.parent_raster.image_set.images.all():\n if image.file.type != FileSourceType.URL:\n # TODO: we need fix this\n@@ -52,17 +53,27 @@\n 'data',\n ],\n )\n- item.ext.eo.set_bands(\n- bands=[\n+ if image.imagemeta.number_of_bands == 1:\n+ bands = [\n+ pystac.extensions.eo.Band.create(\n+ name=image.file.name,\n+ description=image.bandmeta_set.first().description,\n+ )\n+ ]\n+ else:\n+ bands = [\n pystac.extensions.eo.Band.create(\n- name=f'B{bandmeta.band_number}',\n+ name=f'B{bandmeta.band_number + band_num}',\n description=bandmeta.description,\n )\n for bandmeta in image.bandmeta_set.all()\n- ],\n+ ]\n+ item.ext.eo.set_bands(\n+ bands=bands,\n asset=asset,\n )\n item.add_asset(f'image-{image.pk}', asset)\n+ band_num += image.imagemeta.number_of_bands\n \n for ancillary_file in instance.parent_raster.ancillary_files.all():\n asset = pystac.Asset(\n", "issue": "STAC serializer output Band info is incorrect\nFigure out where this is coming from:\r\n\r\n```\r\n'assets': {\r\n 'image-15030': {\r\n 'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',\r\n 'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',\r\n 'eo:bands': [{'name': 'B1'}],\r\n 'roles': ['data'],\r\n },\r\n 'image-15041': {\r\n 'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',\r\n 'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',\r\n 'eo:bands': [{'name': 'B1'}],\r\n 'roles': ['data'],\r\n },\r\n```\r\n\r\nNote that both have `[{'name': 'B1'}]` which is incorrect.\r\n\r\nFirst we need to make sure the `BandMeta` fields are correct then see where this breaks in the serializer\n", "before_files": [{"content": "import json\n\nimport dateutil.parser\nfrom django.contrib.gis.geos import Polygon\nfrom django.db import transaction\nfrom pyproj import CRS\nimport pystac\nfrom rest_framework import serializers\nfrom rgd.models import ChecksumFile, FileSourceType\nfrom rgd.utility import get_or_create_no_commit\n\nfrom .. 
import models\n\n\nclass STACRasterSerializer(serializers.BaseSerializer):\n def to_internal_value(self, data):\n # item = pystac.Item.from_dict(data)\n # errors = item.validate()\n # if errors:\n # raise serializers.ValidationError(errors)\n return data\n\n def to_representation(self, instance: models.RasterMeta) -> dict:\n item = pystac.Item(\n id=instance.pk,\n geometry=json.loads(instance.footprint.json),\n bbox=instance.extent,\n datetime=(instance.acquisition_date or instance.modified or instance.created),\n properties=dict(\n datetime=str(instance.acquisition_date),\n platform=instance.instrumentation,\n ),\n )\n # 'proj' extension\n item.ext.enable('projection')\n item.ext.projection.apply(\n epsg=CRS.from_proj4(instance.crs).to_epsg(),\n transform=instance.transform,\n )\n # 'eo' extension\n item.ext.enable('eo')\n item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])\n # Add assets\n for image in instance.parent_raster.image_set.images.all():\n if image.file.type != FileSourceType.URL:\n # TODO: we need fix this\n raise ValueError('Files must point to valid URL resources, not internal storage.')\n asset = pystac.Asset(\n href=image.file.get_url(),\n title=image.file.name,\n roles=[\n 'data',\n ],\n )\n item.ext.eo.set_bands(\n bands=[\n pystac.extensions.eo.Band.create(\n name=f'B{bandmeta.band_number}',\n description=bandmeta.description,\n )\n for bandmeta in image.bandmeta_set.all()\n ],\n asset=asset,\n )\n item.add_asset(f'image-{image.pk}', asset)\n\n for ancillary_file in instance.parent_raster.ancillary_files.all():\n asset = pystac.Asset(\n href=ancillary_file.get_url(),\n title=ancillary_file.name,\n roles=[\n 'metadata',\n ],\n )\n item.add_asset(f'ancillary-{ancillary_file.pk}', asset)\n\n return item.to_dict()\n\n @transaction.atomic\n def create(self, data):\n item = pystac.Item.from_dict(data)\n image_ids, ancillary = [], []\n single_asset = False\n if len(item.assets) == 1:\n single_asset = True\n for name in item.assets:\n asset = item.assets[name]\n checksum_file, _ = ChecksumFile.objects.get_or_create(\n type=FileSourceType.URL,\n url=asset.href,\n )\n if single_asset or (asset.roles and 'data' in asset.roles):\n image, _ = models.Image.objects.get_or_create(file=checksum_file)\n image_ids.append(image.pk)\n else:\n ancillary.append(checksum_file)\n\n image_set, image_set_created = models.get_or_create_image_set(\n image_ids, defaults=dict(name=item.id)\n )\n\n raster, raster_created = get_or_create_no_commit(\n models.Raster, image_set=image_set, defaults=dict(name=item.id)\n )\n raster.skip_signal = True\n raster.save()\n [raster.ancillary_files.add(af) for af in ancillary]\n raster.save()\n\n outline = Polygon(\n (\n [item.bbox[0], item.bbox[1]],\n [item.bbox[0], item.bbox[3]],\n [item.bbox[2], item.bbox[3]],\n [item.bbox[2], item.bbox[1]],\n [item.bbox[0], item.bbox[1]],\n )\n )\n\n raster_meta = dict(\n footprint=json.dumps(item.geometry),\n crs=f'+init=epsg:{item.ext.projection.epsg}',\n cloud_cover=item.ext.eo.cloud_cover,\n transform=item.ext.projection.transform,\n extent=item.bbox,\n origin=(item.bbox[0], item.bbox[1]),\n resolution=(0, 0), # TODO: fix\n outline=outline,\n acquisition_date=dateutil.parser.isoparser().isoparse(item.properties['datetime']),\n instrumentation=item.properties['platform'],\n )\n\n if raster_created:\n instance = models.RasterMeta(**raster_meta)\n instance.parent_raster = raster\n else:\n models.RasterMeta.objects.filter(parent_raster=raster).update(**raster_meta)\n instance = 
models.RasterMeta.objects.get(parent_raster=raster)\n instance.save()\n\n return instance\n", "path": "django-rgd-imagery/rgd_imagery/serializers/stac.py"}]}
| 2,468 | 405 |
gh_patches_debug_25186
|
rasdani/github-patches
|
git_diff
|
deis__deis-347
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Vagrant provider repeatedly errors on formation if node dir is deleted
Needs to be more robust in some error cases such as this one:
1) Provision a controller but somehow forget to add _deis-controller_ to the admins group, despite all documentation and fuchsia-colored warnings at the command-line
2) Create a formation and scale it upward, e.g. `deis nodes:scale form1 runtime=2`
3) Try to scale down the formation, get an appropriate error about "couldn't remove chef node"
4) All subsequent formation commands--including destroy!--will fail when trying to access the local vagrant node dir, which apparently was removed in step 3).
This shouldn't happen often, but it can, and I think ignoring this error, at least in the case of `deis formations:destroy`, would provide a way out of this dead end.
</issue>
<code>
[start of provider/vagrant.py]
1 """
2 Deis cloud provider implementation for local vagrant setups.
3 """
4
5 from __future__ import unicode_literals
6
7 from api.ssh import exec_ssh, connect_ssh
8
9 import json
10 import logging
11 import string
12 import subprocess
13 import uuid
14
15 from api.models import Layer
16 from api.models import Node
17
18 logger = logging.getLogger(__name__)
19
20 # Collect details for connecting to the host machine
21 try:
22 HOST_NODES_DIR = open('/home/vagrant/.host_nodes_dir').read().strip()
23 PKEY = open('/home/vagrant/.ssh/id_rsa').read()
24 except IOError as err:
25 logger.warn(err)
26
27
28 def seed_flavors():
29 """Seed the database with default flavors for vagrant.
30
31 :rtype: list of dicts containing flavor data
32 """
33 flavors = []
34 for m in ['512', '1024', '2048']:
35 flavors.append({
36 'id': "vagrant-{}".format(m),
37 'provider': 'vagrant',
38 'params': json.dumps({
39 'memory': m
40 })
41 })
42 return flavors
43
44
45 def build_layer(layer):
46 """
47 Build a layer.
48
49 :param layer: a dict containing formation, id, params, and creds info
50 """
51
52 # This can also be done with `deis layers:update` now.
53 layer_ = Layer.objects.get(id=layer['id'], formation__id=layer['formation'])
54 layer_.ssh_username = 'vagrant'
55 layer_.save()
56
57
58 def destroy_layer(layer):
59 """
60 Destroy a layer.
61
62 :param layer: a dict containing formation, id, params, and creds info
63 """
64 pass
65
66
67 def build_node(node):
68 """
69 Build a node.
70
71 :param node: a dict containing formation, layer, params, and creds info.
72 :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)
73 """
74
75 # Can't use the vagrant UUID because it's not booted yet
76 uid = str(uuid.uuid1())
77
78 # Create a new Vagrantfile from a template
79 node['params'].setdefault('memory', '512')
80 template = open('/opt/deis/controller/contrib/vagrant/nodes_vagrantfile_template.rb')
81 raw = string.Template(template.read())
82 result = raw.substitute({
83 'id': uid,
84 'ipaddress': '192.168.61.' + str(Node.objects.all().count() + 100),
85 'memory': node['params']['memory']
86 })
87
88 # Make a folder for the VM with its own Vagrantfile. Vagrant will then create a .vagrant folder
89 # there too when it first gets booted.
90 node_dir = HOST_NODES_DIR + '/' + uid
91 mkdir = 'mkdir -p ' + node_dir
92 cp_tpl = 'echo "' + result.replace('"', '\\"') + '" > ' + node_dir + '/Vagrantfile'
93 _host_ssh(commands=[mkdir, cp_tpl], creds=node['creds'])
94
95 # Boot the VM
96 _run_vagrant_command(uid, args=['up'], creds=node['creds'])
97
98 # Copy the layer's public SSH key to the VM so that the Controller can access it.
99 _run_vagrant_command(
100 uid,
101 args=[
102 'ssh',
103 '-c',
104 '"echo \\"' + node['ssh_public_key'] + '\\" >> /home/vagrant/.ssh/authorized_keys"'
105 ],
106 creds=node['creds'],
107 )
108
109 provider_id = uid
110 fqdn = provider_id
111 if not fqdn.endswith('.local'):
112 fqdn += '.local' # hostname is broadcast via avahi-daemon
113 metadata = {
114 'id': uid,
115 'fqdn': fqdn,
116 'flavor': node['params']['memory']
117 }
118 return provider_id, fqdn, metadata
119
120
121 def destroy_node(node):
122 """
123 Destroy a node.
124
125 :param node: a dict containing a node's provider_id, params, and creds
126 """
127
128 # This is useful if node creation failed. So that there's a record in the DB, but it has no
129 # ID associated with it.
130 if node['provider_id'] is None:
131 return
132
133 # Shut the VM down and destroy it
134 _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])
135 node_dir = HOST_NODES_DIR + '/' + node['provider_id']
136
137 # Sanity check before `rm -rf`
138 if 'contrib/vagrant' not in node_dir:
139 raise RuntimeError("Aborted node destruction: attempting to 'rm -rf' unexpected directory")
140
141 # Completely remove the folder that contained the VM
142 rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'
143 rm_node_dir = 'rm -rf ' + node_dir
144 _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])
145
146
147 def _run_vagrant_command(node_id, args=[], creds={}):
148 """
149 args: A tuple of arguments to a vagrant command line.
150 e.g. ['up', 'my_vm_name', '--no-provision']
151 """
152
153 cd = 'cd ' + HOST_NODES_DIR + '/' + node_id
154 command = ['vagrant'] + [arg for arg in args if arg is not None]
155 return _host_ssh(commands=[cd, ' '.join(command)], creds=creds)
156
157
158 def _host_ssh(creds={}, commands=[]):
159 """
160 Connect to the host machine. Namely the user's local machine.
161 """
162 if creds == {}:
163 raise RuntimeError("No credentials provided to _host_ssh()")
164 command = ' && '.join(commands)
165
166 # First check if we can access the host machine. It's likely that their
167 # IP address changes every time they request a DHCP lease.
168 # TODO: Find a way of passing this error onto the CLI client.
169 try:
170 subprocess.check_call([
171 'nc', '-z', '-w2', creds['host'], '22'
172 ], stderr=subprocess.PIPE)
173 except subprocess.CalledProcessError:
174 raise RuntimeError("Couldn't ping port 22 at host with IP " + creds['host'])
175
176 ssh = connect_ssh(creds['user'], creds['host'], 22, PKEY, timeout=120)
177 result, status = exec_ssh(ssh, command)
178 if status > 0:
179 raise RuntimeError(
180 'SSH to Vagrant host error: ' + result.decode('utf-8') +
181 'Command: ' + command.decode('utf-8'))
182 return result
183
[end of provider/vagrant.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/provider/vagrant.py b/provider/vagrant.py
--- a/provider/vagrant.py
+++ b/provider/vagrant.py
@@ -131,17 +131,25 @@
return
# Shut the VM down and destroy it
- _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])
- node_dir = HOST_NODES_DIR + '/' + node['provider_id']
-
- # Sanity check before `rm -rf`
- if 'contrib/vagrant' not in node_dir:
- raise RuntimeError("Aborted node destruction: attempting to 'rm -rf' unexpected directory")
-
- # Completely remove the folder that contained the VM
- rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'
- rm_node_dir = 'rm -rf ' + node_dir
- _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])
+ try:
+ _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])
+ node_dir = HOST_NODES_DIR + '/' + node['provider_id']
+
+ # Sanity check before `rm -rf`
+ if 'contrib/vagrant' not in node_dir:
+ raise RuntimeError(
+ "Aborted node destruction: attempting to 'rm -rf' unexpected directory")
+
+ # Completely remove the folder that contained the VM
+ rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'
+ rm_node_dir = 'rm -rf ' + node_dir
+ _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])
+ except RuntimeError as err:
+ # If we couldn't cd to the node dir, just log that as a warning
+ if 'No such file or directory' in str(err):
+ logger.warn(err)
+ else:
+ raise
def _run_vagrant_command(node_id, args=[], creds={}):
|
{"golden_diff": "diff --git a/provider/vagrant.py b/provider/vagrant.py\n--- a/provider/vagrant.py\n+++ b/provider/vagrant.py\n@@ -131,17 +131,25 @@\n return\n \n # Shut the VM down and destroy it\n- _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n- node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n-\n- # Sanity check before `rm -rf`\n- if 'contrib/vagrant' not in node_dir:\n- raise RuntimeError(\"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n-\n- # Completely remove the folder that contained the VM\n- rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n- rm_node_dir = 'rm -rf ' + node_dir\n- _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n+ try:\n+ _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n+ node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n+\n+ # Sanity check before `rm -rf`\n+ if 'contrib/vagrant' not in node_dir:\n+ raise RuntimeError(\n+ \"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n+\n+ # Completely remove the folder that contained the VM\n+ rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n+ rm_node_dir = 'rm -rf ' + node_dir\n+ _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n+ except RuntimeError as err:\n+ # If we couldn't cd to the node dir, just log that as a warning\n+ if 'No such file or directory' in str(err):\n+ logger.warn(err)\n+ else:\n+ raise\n \n \n def _run_vagrant_command(node_id, args=[], creds={}):\n", "issue": "Vagrant provider repeatedly errors on formation if node dir is deleted\nNeeds to be more robust in some error cases such as this one:\n1) Provision a controller but somehow forget to add _deis-controler_ to the admins group, despite all documentation and fuschia-colored warnings at the command-line\n2) Create a formation and scale it upward, e.g. 
`deis nodes:scale form1 runtime=2`\n3) Try to scale down the formation, get an appropriate error about \"couldn't remove chef node\"\n4) All subsequent formation commands--including destroy!--will fail when trying to access the local vagrant node dir, which apparently was removed in step 3).\n\nThis shouldn't happen often, but it can and I think ignoring this error at least in the case of `deis formations:destroy` would provide a way out of this dead end.\n\n", "before_files": [{"content": "\"\"\"\nDeis cloud provider implementation for local vagrant setups.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nfrom api.ssh import exec_ssh, connect_ssh\n\nimport json\nimport logging\nimport string\nimport subprocess\nimport uuid\n\nfrom api.models import Layer\nfrom api.models import Node\n\nlogger = logging.getLogger(__name__)\n\n# Collect details for connecting to the host machine\ntry:\n HOST_NODES_DIR = open('/home/vagrant/.host_nodes_dir').read().strip()\n PKEY = open('/home/vagrant/.ssh/id_rsa').read()\nexcept IOError as err:\n logger.warn(err)\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for vagrant.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for m in ['512', '1024', '2048']:\n flavors.append({\n 'id': \"vagrant-{}\".format(m),\n 'provider': 'vagrant',\n 'params': json.dumps({\n 'memory': m\n })\n })\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n\n # This can also be done with `deis layers:update` now.\n layer_ = Layer.objects.get(id=layer['id'], formation__id=layer['formation'])\n layer_.ssh_username = 'vagrant'\n layer_.save()\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n pass\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n\n # Can't use the vagrant UUID because it's not booted yet\n uid = str(uuid.uuid1())\n\n # Create a new Vagrantfile from a template\n node['params'].setdefault('memory', '512')\n template = open('/opt/deis/controller/contrib/vagrant/nodes_vagrantfile_template.rb')\n raw = string.Template(template.read())\n result = raw.substitute({\n 'id': uid,\n 'ipaddress': '192.168.61.' + str(Node.objects.all().count() + 100),\n 'memory': node['params']['memory']\n })\n\n # Make a folder for the VM with its own Vagrantfile. 
Vagrant will then create a .vagrant folder\n # there too when it first gets booted.\n node_dir = HOST_NODES_DIR + '/' + uid\n mkdir = 'mkdir -p ' + node_dir\n cp_tpl = 'echo \"' + result.replace('\"', '\\\\\"') + '\" > ' + node_dir + '/Vagrantfile'\n _host_ssh(commands=[mkdir, cp_tpl], creds=node['creds'])\n\n # Boot the VM\n _run_vagrant_command(uid, args=['up'], creds=node['creds'])\n\n # Copy the layer's public SSH key to the VM so that the Controller can access it.\n _run_vagrant_command(\n uid,\n args=[\n 'ssh',\n '-c',\n '\"echo \\\\\"' + node['ssh_public_key'] + '\\\\\" >> /home/vagrant/.ssh/authorized_keys\"'\n ],\n creds=node['creds'],\n )\n\n provider_id = uid\n fqdn = provider_id\n if not fqdn.endswith('.local'):\n fqdn += '.local' # hostname is broadcast via avahi-daemon\n metadata = {\n 'id': uid,\n 'fqdn': fqdn,\n 'flavor': node['params']['memory']\n }\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n \"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n\n # This is useful if node creation failed. So that there's a record in the DB, but it has no\n # ID associated with it.\n if node['provider_id'] is None:\n return\n\n # Shut the VM down and destroy it\n _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n\n # Sanity check before `rm -rf`\n if 'contrib/vagrant' not in node_dir:\n raise RuntimeError(\"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n\n # Completely remove the folder that contained the VM\n rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n rm_node_dir = 'rm -rf ' + node_dir\n _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n\n\ndef _run_vagrant_command(node_id, args=[], creds={}):\n \"\"\"\n args: A tuple of arguments to a vagrant command line.\n e.g. ['up', 'my_vm_name', '--no-provision']\n \"\"\"\n\n cd = 'cd ' + HOST_NODES_DIR + '/' + node_id\n command = ['vagrant'] + [arg for arg in args if arg is not None]\n return _host_ssh(commands=[cd, ' '.join(command)], creds=creds)\n\n\ndef _host_ssh(creds={}, commands=[]):\n \"\"\"\n Connect to the host machine. Namely the user's local machine.\n \"\"\"\n if creds == {}:\n raise RuntimeError(\"No credentials provided to _host_ssh()\")\n command = ' && '.join(commands)\n\n # First check if we can access the host machine. It's likely that their\n # IP address changes every time they request a DHCP lease.\n # TODO: Find a way of passing this error onto the CLI client.\n try:\n subprocess.check_call([\n 'nc', '-z', '-w2', creds['host'], '22'\n ], stderr=subprocess.PIPE)\n except subprocess.CalledProcessError:\n raise RuntimeError(\"Couldn't ping port 22 at host with IP \" + creds['host'])\n\n ssh = connect_ssh(creds['user'], creds['host'], 22, PKEY, timeout=120)\n result, status = exec_ssh(ssh, command)\n if status > 0:\n raise RuntimeError(\n 'SSH to Vagrant host error: ' + result.decode('utf-8') +\n 'Command: ' + command.decode('utf-8'))\n return result\n", "path": "provider/vagrant.py"}]}
| 2,592 | 440 |
gh_patches_debug_41065 | rasdani/github-patches | git_diff | sktime__sktime-1665 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` failing
Update: the failure has been silenced in the tests to enable refactor work on CI/CD, but the bug is still there.
To reproduce, the test should be run manually, or with the silencing disabled (see `tests._config.EXCLUDED_TESTS`).
---
**Describe the bug**
The test `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` from `tests/test_all_estimators.py` fails in the refactored CI pipeline based on GitHub Actions (#1620) and is blocking that PR. It fails on Linux and macOS with Python 3.6-3.9, with the error below.
Curiously, the test passes in the CI pipelines currently running on the `main` branch.
```
____________ test_fit_does_not_overwrite_hyper_params[FeatureUnion] ____________
[gw0] darwin -- Python 3.7.12 /Users/runner/hostedtoolcache/Python/3.7.12/x64/bin/python
estimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,
transformer_list=[('transformer1',
... with_std=True)))],
transformer_weights=None)
def test_fit_does_not_overwrite_hyper_params(estimator_instance):
"""Check that we do not overwrite hyper-parameters in fit."""
estimator = estimator_instance
set_random_state(estimator)
# Make a physical copy of the original estimator parameters before fitting.
params = estimator.get_params()
original_params = deepcopy(params)
# Fit the model
fit_args = _make_args(estimator, "fit")
estimator.fit(*fit_args)
# Compare the state of the model parameters with the original parameters
new_params = estimator.get_params()
for param_name, original_value in original_params.items():
new_value = new_params[param_name]
# We should never change or mutate the internal state of input
# parameters by default. To check this we use the joblib.hash function
# that introspects recursively any subobjects to compute a checksum.
# The only exception to this rule of immutable constructor parameters
# is possible RandomState instance but in this check we explicitly
# fixed the random_state params recursively to be integer seeds.
> assert joblib.hash(new_value) == joblib.hash(original_value), (
"Estimator %s should not change or mutate "
" the parameter %s from %s to %s during fit."
% (estimator.__class__.__name__, param_name, original_value, new_value)
)
E AssertionError: Estimator FeatureUnion should not change or mutate the parameter transformer_list from [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True)))] to [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True)))] during fit.
E assert '7f94d1fc7e1f...888be251ce7b2' == 'b03f493febd2...c60681b4af6e4'
E - b03f493febd2f1d6da1c60681b4af6e4
E + 7f94d1fc7e1f285e1e5888be251ce7b2
estimator = FeatureUnion(n_jobs=None, preserve_dataframe=True,
transformer_list=[('transformer1',
... with_std=True)))],
transformer_weights=None)
estimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,
transformer_list=[('transformer1',
... with_std=True)))],
transformer_weights=None)
fit_args = ( var_0
0 0 -0.116020
1 0.343339
2 -0.464066
3...
1 0 ...0
7 1
8 0
9 0
10 0
11 0
12 1
13 1
14 1
15 1
16 0
17 0
18 1
19 1
dtype: int64)
new_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,
... with_std=True)), 'transformer1__check_transformer': False, ...}
new_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
transformer=Stand... with_mean=True,
with_std=True)))]
original_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,
... with_std=True)), 'transformer1__check_transformer': False, ...}
original_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
transformer=Stand... with_mean=True,
with_std=True)))]
param_name = 'transformer_list'
params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,
... with_std=True)), 'transformer1__check_transformer': False, ...}
```
**To Reproduce**
Run the test with:
```bash
pytest sktime/tests/test_all_estimators.py
```
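For a quicker check outside the full test suite, the same invariant can be probed directly on a `FeatureUnion` instance. The sketch below assumes the import paths and the nested-DataFrame panel format used by the sktime version in this report; the exact `SeriesToSeriesRowTransformer` constructor arguments may differ across releases, so treat it as illustrative rather than exact:

```python
# Illustrative sketch: compare the hash of transformer_list before and after
# fit_transform, mirroring what the failing estimator check asserts.
from copy import deepcopy

import joblib
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

from sktime.series_as_features.compose import FeatureUnion
from sktime.transformations.panel.compose import SeriesToSeriesRowTransformer

# Tiny nested-DataFrame panel: 4 instances, one column holding short series.
X = pd.DataFrame({"var_0": [pd.Series(np.random.randn(10)) for _ in range(4)]})
y = pd.Series([0, 1, 0, 1])

union = FeatureUnion([
    ("t1", SeriesToSeriesRowTransformer(transformer=StandardScaler(), check_transformer=False)),
    ("t2", SeriesToSeriesRowTransformer(transformer=StandardScaler(), check_transformer=False)),
])

before = joblib.hash(deepcopy(union.get_params()["transformer_list"]))
union.fit_transform(X, y)
after = joblib.hash(union.get_params()["transformer_list"])
assert before == after, "transformer_list was mutated during fit_transform"
```

On the affected code path the final assertion is expected to fail, which is consistent with `fit_transform` calling `self._update_transformer_list(transformers)` in the code listing below and swapping the fitted transformers back into `transformer_list`.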
**Expected behavior**
Test passes
**Additional context**
**Versions**
See the GitHub Actions runs under #1620.
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
</details>
<!-- Thanks for contributing! -->
</issue>
<code>
[start of sktime/series_as_features/compose/_pipeline.py]
1 # -*- coding: utf-8 -*-
2 import numpy as np
3 import pandas as pd
4 from joblib import Parallel, delayed
5 from scipy import sparse
6 from sklearn.pipeline import FeatureUnion as _FeatureUnion
7 from sklearn.pipeline import _fit_transform_one, _transform_one
8
9 from sktime.transformations.base import _PanelToPanelTransformer
10
11 __all__ = ["FeatureUnion"]
12 __author__ = ["Markus Löning"]
13
14
15 class FeatureUnion(_FeatureUnion, _PanelToPanelTransformer):
16 """Concatenates results of multiple transformer objects.
17
18 This estimator applies a list of transformer objects in parallel to the
19 input data, then concatenates the results. This is useful to combine
20 several feature extraction mechanisms into a single transformer.
21 Parameters of the transformations may be set using its name and the
22 parameter name separated by a '__'. A transformer may be replaced entirely by
23 setting the parameter with its name to another transformer,
24 or removed by setting to 'drop' or ``None``.
25
26 Parameters
27 ----------
28 transformer_list : list of (string, transformer) tuples
29 List of transformer objects to be applied to the data. The first
30 half of each tuple is the name of the transformer.
31 n_jobs : int or None, optional (default=None)
32 Number of jobs to run in parallel.
33 ``None`` means 1 unless in a :obj:`joblib.parallel_backend`
34 context.
35 ``-1`` means using all processors.
36 transformer_weights : dict, optional
37 Multiplicative weights for features per transformer.
38 Keys are transformer names, values the weights.
39 preserve_dataframe : bool
40 Save constructed dataframe.
41 """
42
43 _required_parameters = ["transformer_list"]
44
45 def __init__(
46 self,
47 transformer_list,
48 n_jobs=None,
49 transformer_weights=None,
50 preserve_dataframe=True,
51 ):
52 self.preserve_dataframe = preserve_dataframe
53 super(FeatureUnion, self).__init__(
54 transformer_list, n_jobs=n_jobs, transformer_weights=transformer_weights
55 )
56
57 # We need to add is-fitted state when inheriting from scikit-learn
58 self._is_fitted = False
59
60 def fit_transform(self, X, y=None, **fit_params):
61 """Fit all transformations, transform the data and concatenate results.
62
63 Parameters
64 ----------
65 X : pandas DataFrame
66 Input data to be transformed.
67 y : pandas Series, shape (n_samples, ...), optional
68 Targets for supervised learning.
69
70 Returns
71 -------
72 Xt : pandas DataFrame
73 hstack of results of transformations. sum_n_components is the
74 sum of n_components (output dimension) over transformations.
75 """
76 self._validate_transformers()
77 result = Parallel(n_jobs=self.n_jobs)(
78 delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)
79 for name, trans, weight in self._iter()
80 )
81
82 if not result:
83 # All transformations are None
84 return np.zeros((X.shape[0], 0))
85
86 Xs, transformers = zip(*result)
87 self._update_transformer_list(transformers)
88
89 Xs = self._hstack(list(Xs))
90 self._is_fitted = True
91 return Xs
92
93 def fit(self, X, y=None, **fit_params):
94 """Fit parameters."""
95 super(FeatureUnion, self).fit(X, y, **fit_params)
96 self._is_fitted = True
97 return self
98
99 def transform(self, X):
100 """Transform X separately by each transformer, concatenate results.
101
102 Parameters
103 ----------
104 X : pandas DataFrame
105 Input data to be transformed.
106
107 Returns
108 -------
109 Xt : pandas DataFrame
110 hstack of results of transformations. sum_n_components is the
111 sum of n_components (output dimension) over transformations.
112 """
113 self.check_is_fitted()
114 Xs = Parallel(n_jobs=self.n_jobs)(
115 delayed(_transform_one)(trans, X, None, weight)
116 for name, trans, weight in self._iter()
117 )
118
119 if not Xs:
120 # All transformations are None
121 return np.zeros((X.shape[0], 0))
122
123 else:
124 return self._hstack(list(Xs))
125
126 def _hstack(self, Xs):
127 """
128 Stacks X horizontally.
129
130 Supports input types (X): list of
131 numpy arrays, sparse arrays and DataFrames.
132 """
133 if any(sparse.issparse(f) for f in Xs):
134 Xs = sparse.hstack(Xs).tocsr()
135
136 types = set(type(X) for X in Xs)
137 if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):
138 return pd.concat(Xs, axis=1)
139
140 else:
141 return np.hstack(Xs)
142
[end of sktime/series_as_features/compose/_pipeline.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sktime/series_as_features/compose/_pipeline.py b/sktime/series_as_features/compose/_pipeline.py
--- a/sktime/series_as_features/compose/_pipeline.py
+++ b/sktime/series_as_features/compose/_pipeline.py
@@ -1,10 +1,9 @@
# -*- coding: utf-8 -*-
+
import numpy as np
import pandas as pd
-from joblib import Parallel, delayed
from scipy import sparse
from sklearn.pipeline import FeatureUnion as _FeatureUnion
-from sklearn.pipeline import _fit_transform_one, _transform_one
from sktime.transformations.base import _PanelToPanelTransformer
@@ -57,71 +56,20 @@
# We need to add is-fitted state when inheriting from scikit-learn
self._is_fitted = False
- def fit_transform(self, X, y=None, **fit_params):
- """Fit all transformations, transform the data and concatenate results.
-
- Parameters
- ----------
- X : pandas DataFrame
- Input data to be transformed.
- y : pandas Series, shape (n_samples, ...), optional
- Targets for supervised learning.
-
- Returns
- -------
- Xt : pandas DataFrame
- hstack of results of transformations. sum_n_components is the
- sum of n_components (output dimension) over transformations.
- """
- self._validate_transformers()
- result = Parallel(n_jobs=self.n_jobs)(
- delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)
- for name, trans, weight in self._iter()
- )
-
- if not result:
- # All transformations are None
- return np.zeros((X.shape[0], 0))
-
- Xs, transformers = zip(*result)
- self._update_transformer_list(transformers)
-
- Xs = self._hstack(list(Xs))
- self._is_fitted = True
- return Xs
-
def fit(self, X, y=None, **fit_params):
"""Fit parameters."""
- super(FeatureUnion, self).fit(X, y, **fit_params)
+ super().fit(X, y, **fit_params)
self._is_fitted = True
return self
def transform(self, X):
- """Transform X separately by each transformer, concatenate results.
-
- Parameters
- ----------
- X : pandas DataFrame
- Input data to be transformed.
-
- Returns
- -------
- Xt : pandas DataFrame
- hstack of results of transformations. sum_n_components is the
- sum of n_components (output dimension) over transformations.
- """
+ """Transform X separately by each transformer, concatenate results."""
self.check_is_fitted()
- Xs = Parallel(n_jobs=self.n_jobs)(
- delayed(_transform_one)(trans, X, None, weight)
- for name, trans, weight in self._iter()
- )
-
- if not Xs:
- # All transformations are None
- return np.zeros((X.shape[0], 0))
+ return super().transform(X)
- else:
- return self._hstack(list(Xs))
+ def fit_transform(self, X, y, **fit_params):
+ """Transform X separately by each transformer, concatenate results."""
+ return self.fit(X, y, **fit_params).transform(X)
def _hstack(self, Xs):
"""
@@ -133,7 +81,7 @@
if any(sparse.issparse(f) for f in Xs):
Xs = sparse.hstack(Xs).tocsr()
- types = set(type(X) for X in Xs)
+ types = {type(X) for X in Xs}
if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):
return pd.concat(Xs, axis=1)
|
{"golden_diff": "diff --git a/sktime/series_as_features/compose/_pipeline.py b/sktime/series_as_features/compose/_pipeline.py\n--- a/sktime/series_as_features/compose/_pipeline.py\n+++ b/sktime/series_as_features/compose/_pipeline.py\n@@ -1,10 +1,9 @@\n # -*- coding: utf-8 -*-\n+\n import numpy as np\n import pandas as pd\n-from joblib import Parallel, delayed\n from scipy import sparse\n from sklearn.pipeline import FeatureUnion as _FeatureUnion\n-from sklearn.pipeline import _fit_transform_one, _transform_one\n \n from sktime.transformations.base import _PanelToPanelTransformer\n \n@@ -57,71 +56,20 @@\n # We need to add is-fitted state when inheriting from scikit-learn\n self._is_fitted = False\n \n- def fit_transform(self, X, y=None, **fit_params):\n- \"\"\"Fit all transformations, transform the data and concatenate results.\n-\n- Parameters\n- ----------\n- X : pandas DataFrame\n- Input data to be transformed.\n- y : pandas Series, shape (n_samples, ...), optional\n- Targets for supervised learning.\n-\n- Returns\n- -------\n- Xt : pandas DataFrame\n- hstack of results of transformations. sum_n_components is the\n- sum of n_components (output dimension) over transformations.\n- \"\"\"\n- self._validate_transformers()\n- result = Parallel(n_jobs=self.n_jobs)(\n- delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)\n- for name, trans, weight in self._iter()\n- )\n-\n- if not result:\n- # All transformations are None\n- return np.zeros((X.shape[0], 0))\n-\n- Xs, transformers = zip(*result)\n- self._update_transformer_list(transformers)\n-\n- Xs = self._hstack(list(Xs))\n- self._is_fitted = True\n- return Xs\n-\n def fit(self, X, y=None, **fit_params):\n \"\"\"Fit parameters.\"\"\"\n- super(FeatureUnion, self).fit(X, y, **fit_params)\n+ super().fit(X, y, **fit_params)\n self._is_fitted = True\n return self\n \n def transform(self, X):\n- \"\"\"Transform X separately by each transformer, concatenate results.\n-\n- Parameters\n- ----------\n- X : pandas DataFrame\n- Input data to be transformed.\n-\n- Returns\n- -------\n- Xt : pandas DataFrame\n- hstack of results of transformations. 
sum_n_components is the\n- sum of n_components (output dimension) over transformations.\n- \"\"\"\n+ \"\"\"Transform X separately by each transformer, concatenate results.\"\"\"\n self.check_is_fitted()\n- Xs = Parallel(n_jobs=self.n_jobs)(\n- delayed(_transform_one)(trans, X, None, weight)\n- for name, trans, weight in self._iter()\n- )\n-\n- if not Xs:\n- # All transformations are None\n- return np.zeros((X.shape[0], 0))\n+ return super().transform(X)\n \n- else:\n- return self._hstack(list(Xs))\n+ def fit_transform(self, X, y, **fit_params):\n+ \"\"\"Transform X separately by each transformer, concatenate results.\"\"\"\n+ return self.fit(X, y, **fit_params).transform(X)\n \n def _hstack(self, Xs):\n \"\"\"\n@@ -133,7 +81,7 @@\n if any(sparse.issparse(f) for f in Xs):\n Xs = sparse.hstack(Xs).tocsr()\n \n- types = set(type(X) for X in Xs)\n+ types = {type(X) for X in Xs}\n if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):\n return pd.concat(Xs, axis=1)\n", "issue": "[BUG] `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` failing\nUpdate: the failure has been silenced in the tests to enable refactor work on CI/CD, but the bug is still there.\r\nTo reproduce, the test should be run manually, or with the silencing disabled (`tests._config.EXCLUDED_TESTS`)\r\n\r\n---\r\n\r\n**Describe the bug**\r\n\r\nIn the refactored CI pipeline based on github actions #1620, and is blocking the PR.\r\n\r\nThe test `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` from `tests/test_all_estimators.py` fails on linux with python3.6-3.9 and macos 3.6-3.9 with the error below.\r\n\r\nCuriously the test are passing in CI pipelines currently on `main` branch.\r\n\r\n```\r\n____________ test_fit_does_not_overwrite_hyper_params[FeatureUnion] ____________\r\n[gw0] darwin -- Python 3.7.12 /Users/runner/hostedtoolcache/Python/3.7.12/x64/bin/python\r\n\r\nestimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,\r\n transformer_list=[('transformer1',\r\n ... with_std=True)))],\r\n transformer_weights=None)\r\n\r\n def test_fit_does_not_overwrite_hyper_params(estimator_instance):\r\n \"\"\"Check that we do not overwrite hyper-parameters in fit.\"\"\"\r\n estimator = estimator_instance\r\n set_random_state(estimator)\r\n \r\n # Make a physical copy of the original estimator parameters before fitting.\r\n params = estimator.get_params()\r\n original_params = deepcopy(params)\r\n \r\n # Fit the model\r\n fit_args = _make_args(estimator, \"fit\")\r\n estimator.fit(*fit_args)\r\n \r\n # Compare the state of the model parameters with the original parameters\r\n new_params = estimator.get_params()\r\n for param_name, original_value in original_params.items():\r\n new_value = new_params[param_name]\r\n \r\n # We should never change or mutate the internal state of input\r\n # parameters by default. 
To check this we use the joblib.hash function\r\n # that introspects recursively any subobjects to compute a checksum.\r\n # The only exception to this rule of immutable constructor parameters\r\n # is possible RandomState instance but in this check we explicitly\r\n # fixed the random_state params recursively to be integer seeds.\r\n> assert joblib.hash(new_value) == joblib.hash(original_value), (\r\n \"Estimator %s should not change or mutate \"\r\n \" the parameter %s from %s to %s during fit.\"\r\n % (estimator.__class__.__name__, param_name, original_value, new_value)\r\n )\r\nE AssertionError: Estimator FeatureUnion should not change or mutate the parameter transformer_list from [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True)))] to [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True)))] during fit.\r\nE assert '7f94d1fc7e1f...888be251ce7b2' == 'b03f493febd2...c60681b4af6e4'\r\nE - b03f493febd2f1d6da1c60681b4af6e4\r\nE + 7f94d1fc7e1f285e1e5888be251ce7b2\r\n\r\nestimator = FeatureUnion(n_jobs=None, preserve_dataframe=True,\r\n transformer_list=[('transformer1',\r\n ... with_std=True)))],\r\n transformer_weights=None)\r\nestimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,\r\n transformer_list=[('transformer1',\r\n ... with_std=True)))],\r\n transformer_weights=None)\r\nfit_args = ( var_0\r\n0 0 -0.116020\r\n1 0.343339\r\n2 -0.464066\r\n3...\r\n1 0 ...0\r\n7 1\r\n8 0\r\n9 0\r\n10 0\r\n11 0\r\n12 1\r\n13 1\r\n14 1\r\n15 1\r\n16 0\r\n17 0\r\n18 1\r\n19 1\r\ndtype: int64)\r\nnew_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,\r\n ... with_std=True)), 'transformer1__check_transformer': False, ...}\r\nnew_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\n transformer=Stand... with_mean=True,\r\n with_std=True)))]\r\noriginal_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,\r\n ... with_std=True)), 'transformer1__check_transformer': False, ...}\r\noriginal_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\n transformer=Stand... with_mean=True,\r\n with_std=True)))]\r\nparam_name = 'transformer_list'\r\nparams = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,\r\n ... with_std=True)), 'transformer1__check_transformer': False, ...}\r\n```\r\n\r\n**To Reproduce**\r\n\r\nRun the test with:\r\n\r\n```bash\r\npytest sktime/tests/test_all_estimators.py\r\n```\r\n\r\n**Expected behavior**\r\n\r\nTest passes\r\n\r\n**Additional context**\r\n\r\n**Versions**\r\n\r\nSee github actions under #1620 \r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n \r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n\r\n</details>\r\n\r\n<!-- Thanks for contributing! 
-->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport numpy as np\nimport pandas as pd\nfrom joblib import Parallel, delayed\nfrom scipy import sparse\nfrom sklearn.pipeline import FeatureUnion as _FeatureUnion\nfrom sklearn.pipeline import _fit_transform_one, _transform_one\n\nfrom sktime.transformations.base import _PanelToPanelTransformer\n\n__all__ = [\"FeatureUnion\"]\n__author__ = [\"Markus L\u00f6ning\"]\n\n\nclass FeatureUnion(_FeatureUnion, _PanelToPanelTransformer):\n \"\"\"Concatenates results of multiple transformer objects.\n\n This estimator applies a list of transformer objects in parallel to the\n input data, then concatenates the results. This is useful to combine\n several feature extraction mechanisms into a single transformer.\n Parameters of the transformations may be set using its name and the\n parameter name separated by a '__'. A transformer may be replaced entirely by\n setting the parameter with its name to another transformer,\n or removed by setting to 'drop' or ``None``.\n\n Parameters\n ----------\n transformer_list : list of (string, transformer) tuples\n List of transformer objects to be applied to the data. The first\n half of each tuple is the name of the transformer.\n n_jobs : int or None, optional (default=None)\n Number of jobs to run in parallel.\n ``None`` means 1 unless in a :obj:`joblib.parallel_backend`\n context.\n ``-1`` means using all processors.\n transformer_weights : dict, optional\n Multiplicative weights for features per transformer.\n Keys are transformer names, values the weights.\n preserve_dataframe : bool\n Save constructed dataframe.\n \"\"\"\n\n _required_parameters = [\"transformer_list\"]\n\n def __init__(\n self,\n transformer_list,\n n_jobs=None,\n transformer_weights=None,\n preserve_dataframe=True,\n ):\n self.preserve_dataframe = preserve_dataframe\n super(FeatureUnion, self).__init__(\n transformer_list, n_jobs=n_jobs, transformer_weights=transformer_weights\n )\n\n # We need to add is-fitted state when inheriting from scikit-learn\n self._is_fitted = False\n\n def fit_transform(self, X, y=None, **fit_params):\n \"\"\"Fit all transformations, transform the data and concatenate results.\n\n Parameters\n ----------\n X : pandas DataFrame\n Input data to be transformed.\n y : pandas Series, shape (n_samples, ...), optional\n Targets for supervised learning.\n\n Returns\n -------\n Xt : pandas DataFrame\n hstack of results of transformations. sum_n_components is the\n sum of n_components (output dimension) over transformations.\n \"\"\"\n self._validate_transformers()\n result = Parallel(n_jobs=self.n_jobs)(\n delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)\n for name, trans, weight in self._iter()\n )\n\n if not result:\n # All transformations are None\n return np.zeros((X.shape[0], 0))\n\n Xs, transformers = zip(*result)\n self._update_transformer_list(transformers)\n\n Xs = self._hstack(list(Xs))\n self._is_fitted = True\n return Xs\n\n def fit(self, X, y=None, **fit_params):\n \"\"\"Fit parameters.\"\"\"\n super(FeatureUnion, self).fit(X, y, **fit_params)\n self._is_fitted = True\n return self\n\n def transform(self, X):\n \"\"\"Transform X separately by each transformer, concatenate results.\n\n Parameters\n ----------\n X : pandas DataFrame\n Input data to be transformed.\n\n Returns\n -------\n Xt : pandas DataFrame\n hstack of results of transformations. 
sum_n_components is the\n sum of n_components (output dimension) over transformations.\n \"\"\"\n self.check_is_fitted()\n Xs = Parallel(n_jobs=self.n_jobs)(\n delayed(_transform_one)(trans, X, None, weight)\n for name, trans, weight in self._iter()\n )\n\n if not Xs:\n # All transformations are None\n return np.zeros((X.shape[0], 0))\n\n else:\n return self._hstack(list(Xs))\n\n def _hstack(self, Xs):\n \"\"\"\n Stacks X horizontally.\n\n Supports input types (X): list of\n numpy arrays, sparse arrays and DataFrames.\n \"\"\"\n if any(sparse.issparse(f) for f in Xs):\n Xs = sparse.hstack(Xs).tocsr()\n\n types = set(type(X) for X in Xs)\n if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):\n return pd.concat(Xs, axis=1)\n\n else:\n return np.hstack(Xs)\n", "path": "sktime/series_as_features/compose/_pipeline.py"}]}
| 3,270 | 868 |
gh_patches_debug_16788
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-5641
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect documentation for ImagePalette size parameter (and maybe not needed at all)
The documentation for the `ImagePalette` initializer in version 8.3.1 says the `palette` parameter must be "of length `size` times the number of colors in `mode`". Therefore, for an RGB image, I would expect `len(palette) == size * 3`. However, the code asserts that `len(palette) == size`, so I believe the code and documentation are inconsistent. (The same problem existed in 8.2.0 before some ImagePalette improvements were made, so this wasn't introduced with that change.)
Furthermore, it isn't clear to me that the `size` parameter is needed at all. It isn't stored on `self`, and the only place it's used in the initializer is to assert that its value is `0` or `len(palette)`, so it doesn't seem to provide any benefit. The only reason to keep it that I can think of is to maintain backwards compatibility with existing code that explicitly passes the parameter.
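For illustration, assuming Pillow 8.3.1 and the check described above, the documented usage fails while the size-equals-length usage passes:

```python
from PIL import ImagePalette

rgb_values = list(range(256)) * 3  # 768 ints = 256 RGB colors

try:
    # What the documentation implies: size is the number of colors (256 here).
    ImagePalette.ImagePalette("RGB", rgb_values, size=256)
except ValueError as exc:
    print(exc)  # "wrong palette size"

# What the code actually checks: size == len(palette).
ImagePalette.ImagePalette("RGB", rgb_values, size=768)
```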
</issue>
<code>
[start of src/PIL/ImagePalette.py]
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # image palette object
6 #
7 # History:
8 # 1996-03-11 fl Rewritten.
9 # 1997-01-03 fl Up and running.
10 # 1997-08-23 fl Added load hack
11 # 2001-04-16 fl Fixed randint shadow bug in random()
12 #
13 # Copyright (c) 1997-2001 by Secret Labs AB
14 # Copyright (c) 1996-1997 by Fredrik Lundh
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19 import array
20
21 from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile
22
23
24 class ImagePalette:
25 """
26 Color palette for palette mapped images
27
28 :param mode: The mode to use for the Palette. See:
29 :ref:`concept-modes`. Defaults to "RGB"
30 :param palette: An optional palette. If given, it must be a bytearray,
31 an array or a list of ints between 0-255. The list must be aligned
32 by channel (All R values must be contiguous in the list before G
33 and B values.) Defaults to 0 through 255 per channel.
34 :param size: An optional palette size. If given, an error is raised
35 if ``palette`` is not of equal length.
36 """
37
38 def __init__(self, mode="RGB", palette=None, size=0):
39 self.mode = mode
40 self.rawmode = None # if set, palette contains raw data
41 self.palette = palette or bytearray()
42 self.dirty = None
43 if size != 0 and size != len(self.palette):
44 raise ValueError("wrong palette size")
45
46 @property
47 def palette(self):
48 return self._palette
49
50 @palette.setter
51 def palette(self, palette):
52 self._palette = palette
53
54 mode_len = len(self.mode)
55 self.colors = {}
56 for i in range(0, len(self.palette), mode_len):
57 color = tuple(self.palette[i : i + mode_len])
58 if color in self.colors:
59 continue
60 self.colors[color] = i // mode_len
61
62 def copy(self):
63 new = ImagePalette()
64
65 new.mode = self.mode
66 new.rawmode = self.rawmode
67 if self.palette is not None:
68 new.palette = self.palette[:]
69 new.dirty = self.dirty
70
71 return new
72
73 def getdata(self):
74 """
75 Get palette contents in format suitable for the low-level
76 ``im.putpalette`` primitive.
77
78 .. warning:: This method is experimental.
79 """
80 if self.rawmode:
81 return self.rawmode, self.palette
82 return self.mode, self.tobytes()
83
84 def tobytes(self):
85 """Convert palette to bytes.
86
87 .. warning:: This method is experimental.
88 """
89 if self.rawmode:
90 raise ValueError("palette contains raw palette data")
91 if isinstance(self.palette, bytes):
92 return self.palette
93 arr = array.array("B", self.palette)
94 return arr.tobytes()
95
96 # Declare tostring as an alias for tobytes
97 tostring = tobytes
98
99 def getcolor(self, color, image=None):
100 """Given an rgb tuple, allocate palette entry.
101
102 .. warning:: This method is experimental.
103 """
104 if self.rawmode:
105 raise ValueError("palette contains raw palette data")
106 if isinstance(color, tuple):
107 if self.mode == "RGB":
108 if len(color) == 4 and color[3] == 255:
109 color = color[:3]
110 elif self.mode == "RGBA":
111 if len(color) == 3:
112 color += (255,)
113 try:
114 return self.colors[color]
115 except KeyError as e:
116 # allocate new color slot
117 if not isinstance(self.palette, bytearray):
118 self._palette = bytearray(self.palette)
119 index = len(self.palette) // 3
120 special_colors = ()
121 if image:
122 special_colors = (
123 image.info.get("background"),
124 image.info.get("transparency"),
125 )
126 while index in special_colors:
127 index += 1
128 if index >= 256:
129 if image:
130 # Search for an unused index
131 for i, count in reversed(list(enumerate(image.histogram()))):
132 if count == 0 and i not in special_colors:
133 index = i
134 break
135 if index >= 256:
136 raise ValueError("cannot allocate more than 256 colors") from e
137 self.colors[color] = index
138 if index * 3 < len(self.palette):
139 self._palette = (
140 self.palette[: index * 3]
141 + bytes(color)
142 + self.palette[index * 3 + 3 :]
143 )
144 else:
145 self._palette += bytes(color)
146 self.dirty = 1
147 return index
148 else:
149 raise ValueError(f"unknown color specifier: {repr(color)}")
150
151 def save(self, fp):
152 """Save palette to text file.
153
154 .. warning:: This method is experimental.
155 """
156 if self.rawmode:
157 raise ValueError("palette contains raw palette data")
158 if isinstance(fp, str):
159 fp = open(fp, "w")
160 fp.write("# Palette\n")
161 fp.write(f"# Mode: {self.mode}\n")
162 for i in range(256):
163 fp.write(f"{i}")
164 for j in range(i * len(self.mode), (i + 1) * len(self.mode)):
165 try:
166 fp.write(f" {self.palette[j]}")
167 except IndexError:
168 fp.write(" 0")
169 fp.write("\n")
170 fp.close()
171
172
173 # --------------------------------------------------------------------
174 # Internal
175
176
177 def raw(rawmode, data):
178 palette = ImagePalette()
179 palette.rawmode = rawmode
180 palette.palette = data
181 palette.dirty = 1
182 return palette
183
184
185 # --------------------------------------------------------------------
186 # Factories
187
188
189 def make_linear_lut(black, white):
190 lut = []
191 if black == 0:
192 for i in range(256):
193 lut.append(white * i // 255)
194 else:
195 raise NotImplementedError # FIXME
196 return lut
197
198
199 def make_gamma_lut(exp):
200 lut = []
201 for i in range(256):
202 lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5))
203 return lut
204
205
206 def negative(mode="RGB"):
207 palette = list(range(256))
208 palette.reverse()
209 return ImagePalette(mode, palette * len(mode))
210
211
212 def random(mode="RGB"):
213 from random import randint
214
215 palette = []
216 for i in range(256 * len(mode)):
217 palette.append(randint(0, 255))
218 return ImagePalette(mode, palette)
219
220
221 def sepia(white="#fff0c0"):
222 r, g, b = ImageColor.getrgb(white)
223 r = make_linear_lut(0, r)
224 g = make_linear_lut(0, g)
225 b = make_linear_lut(0, b)
226 return ImagePalette("RGB", r + g + b)
227
228
229 def wedge(mode="RGB"):
230 return ImagePalette(mode, list(range(256)) * len(mode))
231
232
233 def load(filename):
234
235 # FIXME: supports GIMP gradients only
236
237 with open(filename, "rb") as fp:
238
239 for paletteHandler in [
240 GimpPaletteFile.GimpPaletteFile,
241 GimpGradientFile.GimpGradientFile,
242 PaletteFile.PaletteFile,
243 ]:
244 try:
245 fp.seek(0)
246 lut = paletteHandler(fp).getpalette()
247 if lut:
248 break
249 except (SyntaxError, ValueError):
250 # import traceback
251 # traceback.print_exc()
252 pass
253 else:
254 raise OSError("cannot load palette")
255
256 return lut # data, rawmode
257
[end of src/PIL/ImagePalette.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/PIL/ImagePalette.py b/src/PIL/ImagePalette.py
--- a/src/PIL/ImagePalette.py
+++ b/src/PIL/ImagePalette.py
@@ -17,6 +17,7 @@
#
import array
+import warnings
from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile
@@ -40,8 +41,14 @@
self.rawmode = None # if set, palette contains raw data
self.palette = palette or bytearray()
self.dirty = None
- if size != 0 and size != len(self.palette):
- raise ValueError("wrong palette size")
+ if size != 0:
+ warnings.warn(
+ "The size parameter is deprecated and will be removed in Pillow 10 "
+ "(2023-01-02).",
+ DeprecationWarning,
+ )
+ if size != len(self.palette):
+ raise ValueError("wrong palette size")
@property
def palette(self):
|
{"golden_diff": "diff --git a/src/PIL/ImagePalette.py b/src/PIL/ImagePalette.py\n--- a/src/PIL/ImagePalette.py\n+++ b/src/PIL/ImagePalette.py\n@@ -17,6 +17,7 @@\n #\n \n import array\n+import warnings\n \n from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile\n \n@@ -40,8 +41,14 @@\n self.rawmode = None # if set, palette contains raw data\n self.palette = palette or bytearray()\n self.dirty = None\n- if size != 0 and size != len(self.palette):\n- raise ValueError(\"wrong palette size\")\n+ if size != 0:\n+ warnings.warn(\n+ \"The size parameter is deprecated and will be removed in Pillow 10 \"\n+ \"(2023-01-02).\",\n+ DeprecationWarning,\n+ )\n+ if size != len(self.palette):\n+ raise ValueError(\"wrong palette size\")\n \n @property\n def palette(self):\n", "issue": "Incorrect documentation for ImagePalette size parameter (and maybe not needed at all)\nThe documentation for the `ImagePalette` initializer in version 8.3.1 says the `palette` parameter must be \"of length `size` times the number of colors in `mode`\". Therefore, for an RGB image, I would expect `len(palette) == size * 3`. However, the code asserts that `len(palette) == size`, so I believe the code and documentation are inconsistent. (The same problem existed in 8.2.0 before some ImagePalette improvements were made, so this wasn't introduced with that change.)\r\n\r\nFurthermore, it isn't clear to me that the `size` parameter is needed at all. It isn't stored on `self`, and the only place it's used in the initializer is to assert that its value is `0` or `len(palette)`, so it doesn't seem to provide any benefit. The only reason to keep it that I can think of is to maintain backwards compatibility with existing code that explicitly passes the parameter.\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# image palette object\n#\n# History:\n# 1996-03-11 fl Rewritten.\n# 1997-01-03 fl Up and running.\n# 1997-08-23 fl Added load hack\n# 2001-04-16 fl Fixed randint shadow bug in random()\n#\n# Copyright (c) 1997-2001 by Secret Labs AB\n# Copyright (c) 1996-1997 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport array\n\nfrom . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile\n\n\nclass ImagePalette:\n \"\"\"\n Color palette for palette mapped images\n\n :param mode: The mode to use for the Palette. See:\n :ref:`concept-modes`. Defaults to \"RGB\"\n :param palette: An optional palette. If given, it must be a bytearray,\n an array or a list of ints between 0-255. The list must be aligned\n by channel (All R values must be contiguous in the list before G\n and B values.) Defaults to 0 through 255 per channel.\n :param size: An optional palette size. 
If given, an error is raised\n if ``palette`` is not of equal length.\n \"\"\"\n\n def __init__(self, mode=\"RGB\", palette=None, size=0):\n self.mode = mode\n self.rawmode = None # if set, palette contains raw data\n self.palette = palette or bytearray()\n self.dirty = None\n if size != 0 and size != len(self.palette):\n raise ValueError(\"wrong palette size\")\n\n @property\n def palette(self):\n return self._palette\n\n @palette.setter\n def palette(self, palette):\n self._palette = palette\n\n mode_len = len(self.mode)\n self.colors = {}\n for i in range(0, len(self.palette), mode_len):\n color = tuple(self.palette[i : i + mode_len])\n if color in self.colors:\n continue\n self.colors[color] = i // mode_len\n\n def copy(self):\n new = ImagePalette()\n\n new.mode = self.mode\n new.rawmode = self.rawmode\n if self.palette is not None:\n new.palette = self.palette[:]\n new.dirty = self.dirty\n\n return new\n\n def getdata(self):\n \"\"\"\n Get palette contents in format suitable for the low-level\n ``im.putpalette`` primitive.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n return self.rawmode, self.palette\n return self.mode, self.tobytes()\n\n def tobytes(self):\n \"\"\"Convert palette to bytes.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(self.palette, bytes):\n return self.palette\n arr = array.array(\"B\", self.palette)\n return arr.tobytes()\n\n # Declare tostring as an alias for tobytes\n tostring = tobytes\n\n def getcolor(self, color, image=None):\n \"\"\"Given an rgb tuple, allocate palette entry.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(color, tuple):\n if self.mode == \"RGB\":\n if len(color) == 4 and color[3] == 255:\n color = color[:3]\n elif self.mode == \"RGBA\":\n if len(color) == 3:\n color += (255,)\n try:\n return self.colors[color]\n except KeyError as e:\n # allocate new color slot\n if not isinstance(self.palette, bytearray):\n self._palette = bytearray(self.palette)\n index = len(self.palette) // 3\n special_colors = ()\n if image:\n special_colors = (\n image.info.get(\"background\"),\n image.info.get(\"transparency\"),\n )\n while index in special_colors:\n index += 1\n if index >= 256:\n if image:\n # Search for an unused index\n for i, count in reversed(list(enumerate(image.histogram()))):\n if count == 0 and i not in special_colors:\n index = i\n break\n if index >= 256:\n raise ValueError(\"cannot allocate more than 256 colors\") from e\n self.colors[color] = index\n if index * 3 < len(self.palette):\n self._palette = (\n self.palette[: index * 3]\n + bytes(color)\n + self.palette[index * 3 + 3 :]\n )\n else:\n self._palette += bytes(color)\n self.dirty = 1\n return index\n else:\n raise ValueError(f\"unknown color specifier: {repr(color)}\")\n\n def save(self, fp):\n \"\"\"Save palette to text file.\n\n .. 
warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(fp, str):\n fp = open(fp, \"w\")\n fp.write(\"# Palette\\n\")\n fp.write(f\"# Mode: {self.mode}\\n\")\n for i in range(256):\n fp.write(f\"{i}\")\n for j in range(i * len(self.mode), (i + 1) * len(self.mode)):\n try:\n fp.write(f\" {self.palette[j]}\")\n except IndexError:\n fp.write(\" 0\")\n fp.write(\"\\n\")\n fp.close()\n\n\n# --------------------------------------------------------------------\n# Internal\n\n\ndef raw(rawmode, data):\n palette = ImagePalette()\n palette.rawmode = rawmode\n palette.palette = data\n palette.dirty = 1\n return palette\n\n\n# --------------------------------------------------------------------\n# Factories\n\n\ndef make_linear_lut(black, white):\n lut = []\n if black == 0:\n for i in range(256):\n lut.append(white * i // 255)\n else:\n raise NotImplementedError # FIXME\n return lut\n\n\ndef make_gamma_lut(exp):\n lut = []\n for i in range(256):\n lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5))\n return lut\n\n\ndef negative(mode=\"RGB\"):\n palette = list(range(256))\n palette.reverse()\n return ImagePalette(mode, palette * len(mode))\n\n\ndef random(mode=\"RGB\"):\n from random import randint\n\n palette = []\n for i in range(256 * len(mode)):\n palette.append(randint(0, 255))\n return ImagePalette(mode, palette)\n\n\ndef sepia(white=\"#fff0c0\"):\n r, g, b = ImageColor.getrgb(white)\n r = make_linear_lut(0, r)\n g = make_linear_lut(0, g)\n b = make_linear_lut(0, b)\n return ImagePalette(\"RGB\", r + g + b)\n\n\ndef wedge(mode=\"RGB\"):\n return ImagePalette(mode, list(range(256)) * len(mode))\n\n\ndef load(filename):\n\n # FIXME: supports GIMP gradients only\n\n with open(filename, \"rb\") as fp:\n\n for paletteHandler in [\n GimpPaletteFile.GimpPaletteFile,\n GimpGradientFile.GimpGradientFile,\n PaletteFile.PaletteFile,\n ]:\n try:\n fp.seek(0)\n lut = paletteHandler(fp).getpalette()\n if lut:\n break\n except (SyntaxError, ValueError):\n # import traceback\n # traceback.print_exc()\n pass\n else:\n raise OSError(\"cannot load palette\")\n\n return lut # data, rawmode\n", "path": "src/PIL/ImagePalette.py"}]}
| 3,169 | 226 |
gh_patches_debug_14454 | rasdani/github-patches | git_diff | microsoft__onnxscript-1472 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Optimizer fails on shape inference error over native_batch_norm
The optimizer fails for the attach model (so dort fails as well). It was obtained with the latest onnx, onnxscript and torch nightly.
[dump3bug.zip](https://github.com/microsoft/onnxscript/files/15106272/dump3bug.zip)
To replicate:
```python
import onnx
from onnxscript import optimizer
onx = onnx.load(model)
optimized = optimizer.optimize(onx)
```
It is coming from the following graph module.
```
graph():
%primals_7 : [num_users=1] = placeholder[target=primals_7]
%primals_1 : [num_users=1] = placeholder[target=primals_1]
%primals_2 : [num_users=1] = placeholder[target=primals_2]
%primals_3 : [num_users=1] = placeholder[target=primals_3]
%primals_4 : [num_users=1] = placeholder[target=primals_4]
%primals_5 : [num_users=1] = placeholder[target=primals_5]
%add : [num_users=2] = call_function[target=torch.ops.aten.add.Tensor](args = (%primals_7, %primals_1), kwargs = {})
%_native_batch_norm_legit_no_training : [num_users=1] = call_function[target=torch.ops.aten._native_batch_norm_legit_no_training.default](args = (%add, %primals_2, %primals_3, %primals_4, %primals_5, 0.1, 1e-05), kwargs = {})
%getitem : [num_users=1] = call_function[target=operator.getitem](args = (%_native_batch_norm_legit_no_training, 0), kwargs = {})
return (add, getitem)
```
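A module of roughly the following shape would produce such a graph when exported in eval mode; this is a hypothetical reconstruction for context, not the reporter's actual model:

```python
import torch

class AddThenBatchNorm(torch.nn.Module):
    """Hypothetical module: an element-wise add feeding BatchNorm in eval mode,
    returning both the add result and the normalized output (as in the graph above)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.bias = torch.nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.bn = torch.nn.BatchNorm2d(channels)

    def forward(self, x):
        added = x + self.bias
        return added, self.bn(added)

model = AddThenBatchNorm().eval()  # eval() lowers to _native_batch_norm_legit_no_training
```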
Error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "check_model.py", line 43, in <module>
optimized = optimizer.optimize(onx)
File "onnxscript/onnxscript/optimizer/__init__.py", line 61, in optimize
model = onnx.shape_inference.infer_shapes(
File "onnx/onnx/shape_inference.py", line 46, in infer_shapes
inferred_model_str = C.infer_shapes(
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_inference_onnx, node name: _aten_native_batch_norm_inference_onnx_2): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 0: (2) vs (0)
```
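The traceback points at the optimizer's strict-mode shape-inference call. As a sketch of a workaround (the file name below is a hypothetical stand-in for the model in the attached zip), the same inference is expected to pass with strict mode turned off:

```python
import onnx
import onnx.shape_inference

model = onnx.load("dump3bug.onnx")  # hypothetical name for the attached model
inferred = onnx.shape_inference.infer_shapes(
    model, check_type=True, strict_mode=False, data_prop=True
)
```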
</issue>
<code>
[start of onnxscript/optimizer/__init__.py]
1 import logging
2 from typing import Any
3
4 import onnx
5 import onnx.shape_inference
6
7 from onnxscript import rewriter
8 from onnxscript.optimizer.constant_folding import fold_constants
9 from onnxscript.optimizer.copy_propagation import (
10 do_copy_propagation,
11 do_sequence_simplification,
12 )
13 from onnxscript.optimizer.remove_unused import remove_unused_nodes
14 from onnxscript.optimizer.remove_unused_function import remove_unused_functions
15 from onnxscript.optimizer.simple_function_folding import (
16 inline_functions_with_unused_outputs,
17 inline_simple_functions,
18 )
19 from onnxscript.rewriter import (
20 broadcast_to_matmul,
21 cast_constant_of_shape,
22 gemm_to_matmul_add,
23 no_op,
24 )
25
26 logger = logging.getLogger(__name__)
27
28
29 def optimize(
30 model: onnx.ModelProto,
31 num_iterations: int = 2,
32 *,
33 onnx_shape_inference: bool = True,
34 stop_if_no_change: bool = True,
35 external_data_folder: str = "",
36 **kwargs: Any,
37 ) -> onnx.ModelProto:
38 """Optimize the model. Perform optimizations and clean-ups such as constant folding, dead code elimination, etc.
39
40 Args:
41 model (onnx.ModelProto): The model to optimize.
42 num_iterations (int, optional): Number of iterations to perform.
43 onnx_shape_inference (bool, optional): Whether to perform onnx shape inference on the model.
44 Set this to False to turn off onnx shape inference, and rely on model carried shapes and types.
45 This is useful for models produced by PyTorch 2.2+ dynamo onnx exporter, where the model carries
46 the symbolic shapes recorded from dynamo tracing.
47 stop_if_no_change (bool, optional): Whether to stop if no change is detected.
48 external_data_folder (str, optional): The folder to store external data.
49 **kwargs: Additional keyword arguments. For BC purposes.
50 """
51 if kwargs.pop("function_aware_folding", None) is not None:
52 logger.warning(
53 "'function_aware_folding' is deprecated. 'optimize' now supports both fully inlined models and models with functions. "
54 "To achieve the same behavior as 'function_aware_folding=True' before, set 'onnx_shape_inference=False'. "
55 "This would turn off incremental onnx shape inference and rely on model carried shapes and types. "
56 "See 'onnx_shape_inference' for more details."
57 )
58 for _ in range(num_iterations):
59 if onnx_shape_inference:
60 if model.ByteSize() < 1024 * 1024 * 1024 * 2:
61 model = onnx.shape_inference.infer_shapes(
62 model, check_type=True, strict_mode=True, data_prop=True
63 )
64 else:
65 logger.warning(
66 "The model size is too large for full model shape inference. "
67 "Skipping this step."
68 )
69
70 inline_simple_functions(model)
71 modified = fold_constants(
72 model, external_data_folder, onnx_shape_inference=onnx_shape_inference
73 )
74
75 remove_unused_nodes(model)
76 inline_simple_functions(model)
77 remove_unused_functions(model)
78 inline_functions_with_unused_outputs(model)
79 # NOTE: This is general rewrite rules
80 model = rewriter.rewrite(
81 model,
82 pattern_rewrite_rules=[
83 *no_op.rules.rules, # TODO: merge this rule into constant folding?
84 *broadcast_to_matmul.rules.rules,
85 gemm_to_matmul_add.rule,
86 *cast_constant_of_shape.rules.rules,
87 ],
88 )
89 if stop_if_no_change and not modified:
90 logger.debug("Stopping after %d iterations.", _)
91 break
92
93 for node in model.graph.node:
94 logger.debug("Node %s::%s name %s.", node.domain, node.op_type, node.name)
95
96 for function in model.functions:
97 for node in function.node:
98 logger.debug(
99 "Function %s::%s node %s::%s name %s.",
100 function.domain,
101 function.name,
102 node.domain,
103 node.op_type,
104 node.name,
105 )
106
107 # do_sequence_simplification(model)
108 return model
109
110
111 __all__ = [
112 "fold_constants",
113 "remove_unused_nodes",
114 "optimize",
115 "do_copy_propagation",
116 "do_sequence_simplification",
117 ]
118
[end of onnxscript/optimizer/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/onnxscript/optimizer/__init__.py b/onnxscript/optimizer/__init__.py
--- a/onnxscript/optimizer/__init__.py
+++ b/onnxscript/optimizer/__init__.py
@@ -58,8 +58,12 @@
for _ in range(num_iterations):
if onnx_shape_inference:
if model.ByteSize() < 1024 * 1024 * 1024 * 2:
+ # NOTE: strict mode is disabled because it crashes on the models
+ # that have different shapes inferred from the model carried shapes.
+ # The case can be found in:
+ # https://github.com/microsoft/onnxscript/issues/1443
model = onnx.shape_inference.infer_shapes(
- model, check_type=True, strict_mode=True, data_prop=True
+ model, check_type=True, strict_mode=False, data_prop=True
)
else:
logger.warning(
|
{"golden_diff": "diff --git a/onnxscript/optimizer/__init__.py b/onnxscript/optimizer/__init__.py\n--- a/onnxscript/optimizer/__init__.py\n+++ b/onnxscript/optimizer/__init__.py\n@@ -58,8 +58,12 @@\n for _ in range(num_iterations):\n if onnx_shape_inference:\n if model.ByteSize() < 1024 * 1024 * 1024 * 2:\n+ # NOTE: strict mode is disabled because it crashes on the models\n+ # that have different shapes inferred from the model carried shapes.\n+ # The case can be found in:\n+ # https://github.com/microsoft/onnxscript/issues/1443\n model = onnx.shape_inference.infer_shapes(\n- model, check_type=True, strict_mode=True, data_prop=True\n+ model, check_type=True, strict_mode=False, data_prop=True\n )\n else:\n logger.warning(\n", "issue": "Optimizer fails on shape inference error over native_batch_norm\nThe optimizer fails for the attach model (so dort fails as well). It was obtained with the latest onnx, onnxscript and torch nightly.\r\n\r\n[dump3bug.zip](https://github.com/microsoft/onnxscript/files/15106272/dump3bug.zip)\r\n\r\nTo replicate:\r\n\r\n```python\r\nimport onnx\r\nfrom onnxscript import optimizer\r\nonx = onnx.load(model)\r\noptimized = optimizer.optimize(onx)\r\n```\r\n\r\nIt is coming from the following graph module.\r\n\r\n```\r\ngraph():\r\n %primals_7 : [num_users=1] = placeholder[target=primals_7]\r\n %primals_1 : [num_users=1] = placeholder[target=primals_1]\r\n %primals_2 : [num_users=1] = placeholder[target=primals_2]\r\n %primals_3 : [num_users=1] = placeholder[target=primals_3]\r\n %primals_4 : [num_users=1] = placeholder[target=primals_4]\r\n %primals_5 : [num_users=1] = placeholder[target=primals_5]\r\n %add : [num_users=2] = call_function[target=torch.ops.aten.add.Tensor](args = (%primals_7, %primals_1), kwargs = {})\r\n %_native_batch_norm_legit_no_training : [num_users=1] = call_function[target=torch.ops.aten._native_batch_norm_legit_no_training.default](args = (%add, %primals_2, %primals_3, %primals_4, %primals_5, 0.1, 1e-05), kwargs = {})\r\n %getitem : [num_users=1] = call_function[target=operator.getitem](args = (%_native_batch_norm_legit_no_training, 0), kwargs = {})\r\n return (add, getitem)\r\n```\r\n\r\nError:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"check_model.py\", line 43, in <module>\r\n optimized = optimizer.optimize(onx)\r\n File \"onnxscript/onnxscript/optimizer/__init__.py\", line 61, in optimize\r\n model = onnx.shape_inference.infer_shapes(\r\n File \"onnx/onnx/shape_inference.py\", line 46, in infer_shapes\r\n inferred_model_str = C.infer_shapes(\r\nonnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_inference_onnx, node name: _aten_native_batch_norm_inference_onnx_2): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 0: (2) vs (0)\r\n```\n", "before_files": [{"content": "import logging\nfrom typing import Any\n\nimport onnx\nimport onnx.shape_inference\n\nfrom onnxscript import rewriter\nfrom onnxscript.optimizer.constant_folding import fold_constants\nfrom onnxscript.optimizer.copy_propagation import (\n do_copy_propagation,\n do_sequence_simplification,\n)\nfrom onnxscript.optimizer.remove_unused import remove_unused_nodes\nfrom onnxscript.optimizer.remove_unused_function import remove_unused_functions\nfrom 
onnxscript.optimizer.simple_function_folding import (\n inline_functions_with_unused_outputs,\n inline_simple_functions,\n)\nfrom onnxscript.rewriter import (\n broadcast_to_matmul,\n cast_constant_of_shape,\n gemm_to_matmul_add,\n no_op,\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef optimize(\n model: onnx.ModelProto,\n num_iterations: int = 2,\n *,\n onnx_shape_inference: bool = True,\n stop_if_no_change: bool = True,\n external_data_folder: str = \"\",\n **kwargs: Any,\n) -> onnx.ModelProto:\n \"\"\"Optimize the model. Perform optimizations and clean-ups such as constant folding, dead code elimination, etc.\n\n Args:\n model (onnx.ModelProto): The model to optimize.\n num_iterations (int, optional): Number of iterations to perform.\n onnx_shape_inference (bool, optional): Whether to perform onnx shape inference on the model.\n Set this to False to turn off onnx shape inference, and rely on model carried shapes and types.\n This is useful for models produced by PyTorch 2.2+ dynamo onnx exporter, where the model carries\n the symbolic shapes recorded from dynamo tracing.\n stop_if_no_change (bool, optional): Whether to stop if no change is detected.\n external_data_folder (str, optional): The folder to store external data.\n **kwargs: Additional keyword arguments. For BC purposes.\n \"\"\"\n if kwargs.pop(\"function_aware_folding\", None) is not None:\n logger.warning(\n \"'function_aware_folding' is deprecated. 'optimize' now supports both fully inlined models and models with functions. \"\n \"To achieve the same behavior as 'function_aware_folding=True' before, set 'onnx_shape_inference=False'. \"\n \"This would turn off incremental onnx shape inference and rely on model carried shapes and types. \"\n \"See 'onnx_shape_inference' for more details.\"\n )\n for _ in range(num_iterations):\n if onnx_shape_inference:\n if model.ByteSize() < 1024 * 1024 * 1024 * 2:\n model = onnx.shape_inference.infer_shapes(\n model, check_type=True, strict_mode=True, data_prop=True\n )\n else:\n logger.warning(\n \"The model size is too large for full model shape inference. \"\n \"Skipping this step.\"\n )\n\n inline_simple_functions(model)\n modified = fold_constants(\n model, external_data_folder, onnx_shape_inference=onnx_shape_inference\n )\n\n remove_unused_nodes(model)\n inline_simple_functions(model)\n remove_unused_functions(model)\n inline_functions_with_unused_outputs(model)\n # NOTE: This is general rewrite rules\n model = rewriter.rewrite(\n model,\n pattern_rewrite_rules=[\n *no_op.rules.rules, # TODO: merge this rule into constant folding?\n *broadcast_to_matmul.rules.rules,\n gemm_to_matmul_add.rule,\n *cast_constant_of_shape.rules.rules,\n ],\n )\n if stop_if_no_change and not modified:\n logger.debug(\"Stopping after %d iterations.\", _)\n break\n\n for node in model.graph.node:\n logger.debug(\"Node %s::%s name %s.\", node.domain, node.op_type, node.name)\n\n for function in model.functions:\n for node in function.node:\n logger.debug(\n \"Function %s::%s node %s::%s name %s.\",\n function.domain,\n function.name,\n node.domain,\n node.op_type,\n node.name,\n )\n\n # do_sequence_simplification(model)\n return model\n\n\n__all__ = [\n \"fold_constants\",\n \"remove_unused_nodes\",\n \"optimize\",\n \"do_copy_propagation\",\n \"do_sequence_simplification\",\n]\n", "path": "onnxscript/optimizer/__init__.py"}]}
| 2,389 | 217 |
gh_patches_debug_25819
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-4312
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bigo] Extractor returning invalid parameters
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Description
As of about 3 weeks ago, I now receive the following error on all live streams: `Bigo says: paramters invalid (code 1)`
### Verbose log
```shell
$ yt-dlp -vU -g https://www.bigo.tv/841947363
[debug] Command-line config: ['-vU', '-g', 'https://www.bigo.tv/841947363']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.06.22.1 [a86e01e]
[debug] Python version 3.10.4 (CPython 64bit) - macOS-12.4-arm64-arm-64bit
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 5.0.1 (setts), ffprobe 5.0.1
[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.06.22.1, Current version: 2022.06.22.1
yt-dlp is up to date (2022.06.22.1)
[debug] [Bigo] Extracting URL: https://www.bigo.tv/841947363
[Bigo] 841947363: Downloading JSON metadata
ERROR: [Bigo] 841947363: Bigo says: paramters invalid (code 1)
File "/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 647, in extract
ie_result = self._real_extract(url)
File "/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/bigo.py", line 37, in _real_extract
raise ExtractorError(
```
</issue>
<code>
[start of yt_dlp/extractor/bigo.py]
1 from .common import InfoExtractor
2 from ..utils import ExtractorError, urlencode_postdata
3
4
5 class BigoIE(InfoExtractor):
6 _VALID_URL = r'https?://(?:www\.)?bigo\.tv/(?:[a-z]{2,}/)?(?P<id>[^/]+)'
7
8 _TESTS = [{
9 'url': 'https://www.bigo.tv/ja/221338632',
10 'info_dict': {
11 'id': '6576287577575737440',
12 'title': '土よ〜💁♂️ 休憩室/REST room',
13 'thumbnail': r're:https?://.+',
14 'uploader': '✨Shin💫',
15 'uploader_id': '221338632',
16 'is_live': True,
17 },
18 'skip': 'livestream',
19 }, {
20 'url': 'https://www.bigo.tv/th/Tarlerm1304',
21 'only_matching': True,
22 }, {
23 'url': 'https://bigo.tv/115976881',
24 'only_matching': True,
25 }]
26
27 def _real_extract(self, url):
28 user_id = self._match_id(url)
29
30 info_raw = self._download_json(
31 'https://bigo.tv/studio/getInternalStudioInfo',
32 user_id, data=urlencode_postdata({'siteId': user_id}))
33
34 if not isinstance(info_raw, dict):
35 raise ExtractorError('Received invalid JSON data')
36 if info_raw.get('code'):
37 raise ExtractorError(
38 'Bigo says: %s (code %s)' % (info_raw.get('msg'), info_raw.get('code')), expected=True)
39 info = info_raw.get('data') or {}
40
41 if not info.get('alive'):
42 raise ExtractorError('This user is offline.', expected=True)
43
44 return {
45 'id': info.get('roomId') or user_id,
46 'title': info.get('roomTopic') or info.get('nick_name') or user_id,
47 'formats': [{
48 'url': info.get('hls_src'),
49 'ext': 'mp4',
50 'protocol': 'm3u8',
51 }],
52 'thumbnail': info.get('snapshot'),
53 'uploader': info.get('nick_name'),
54 'uploader_id': user_id,
55 'is_live': True,
56 }
57
[end of yt_dlp/extractor/bigo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt_dlp/extractor/bigo.py b/yt_dlp/extractor/bigo.py
--- a/yt_dlp/extractor/bigo.py
+++ b/yt_dlp/extractor/bigo.py
@@ -28,7 +28,7 @@
user_id = self._match_id(url)
info_raw = self._download_json(
- 'https://bigo.tv/studio/getInternalStudioInfo',
+ 'https://ta.bigo.tv/official_website/studio/getInternalStudioInfo',
user_id, data=urlencode_postdata({'siteId': user_id}))
if not isinstance(info_raw, dict):
@@ -41,14 +41,14 @@
if not info.get('alive'):
raise ExtractorError('This user is offline.', expected=True)
+ formats, subs = self._extract_m3u8_formats_and_subtitles(
+ info.get('hls_src'), user_id, 'mp4', 'm3u8')
+
return {
'id': info.get('roomId') or user_id,
'title': info.get('roomTopic') or info.get('nick_name') or user_id,
- 'formats': [{
- 'url': info.get('hls_src'),
- 'ext': 'mp4',
- 'protocol': 'm3u8',
- }],
+ 'formats': formats,
+ 'subtitles': subs,
'thumbnail': info.get('snapshot'),
'uploader': info.get('nick_name'),
'uploader_id': user_id,
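
The patch above points the metadata request at a different host and builds the formats through the HLS helper. A quick manual probe of the relocated endpoint could look like the sketch below; this is illustrative only and not part of the patch, and only the `code`/`msg` fields (which the extractor already reads) are assumed to be present in the live response.

```python
# Illustrative probe of the relocated metadata endpoint (not part of the patch).
import requests

resp = requests.post(
    "https://ta.bigo.tv/official_website/studio/getInternalStudioInfo",
    data={"siteId": "841947363"},  # the user id from the report above
    timeout=10,
)
info = resp.json()
print(info.get("code"), info.get("msg"))  # a non-zero code is what surfaces as "Bigo says: ..."
```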
|
{"golden_diff": "diff --git a/yt_dlp/extractor/bigo.py b/yt_dlp/extractor/bigo.py\n--- a/yt_dlp/extractor/bigo.py\n+++ b/yt_dlp/extractor/bigo.py\n@@ -28,7 +28,7 @@\n user_id = self._match_id(url)\n \n info_raw = self._download_json(\n- 'https://bigo.tv/studio/getInternalStudioInfo',\n+ 'https://ta.bigo.tv/official_website/studio/getInternalStudioInfo',\n user_id, data=urlencode_postdata({'siteId': user_id}))\n \n if not isinstance(info_raw, dict):\n@@ -41,14 +41,14 @@\n if not info.get('alive'):\n raise ExtractorError('This user is offline.', expected=True)\n \n+ formats, subs = self._extract_m3u8_formats_and_subtitles(\n+ info.get('hls_src'), user_id, 'mp4', 'm3u8')\n+\n return {\n 'id': info.get('roomId') or user_id,\n 'title': info.get('roomTopic') or info.get('nick_name') or user_id,\n- 'formats': [{\n- 'url': info.get('hls_src'),\n- 'ext': 'mp4',\n- 'protocol': 'm3u8',\n- }],\n+ 'formats': formats,\n+ 'subtitles': subs,\n 'thumbnail': info.get('snapshot'),\n 'uploader': info.get('nick_name'),\n 'uploader_id': user_id,\n", "issue": "[bigo] Extractor returning invalid parameters\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Description\n\nAs of about 3 weeks ago, I now receive the following error on all live streams: `Bigo says: paramters invalid (code 1)`\n\n### Verbose log\n\n```shell\n$ yt-dlp -vU -g https://www.bigo.tv/841947363\r\n[debug] Command-line config: ['-vU', '-g', 'https://www.bigo.tv/841947363']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.06.22.1 [a86e01e]\r\n[debug] Python version 3.10.4 (CPython 64bit) - macOS-12.4-arm64-arm-64bit\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 5.0.1 (setts), ffprobe 5.0.1\r\n[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3\r\n[debug] Proxy map: {}\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.06.22.1, Current version: 2022.06.22.1\r\nyt-dlp is up to date (2022.06.22.1)\r\n[debug] [Bigo] Extracting URL: https://www.bigo.tv/841947363\r\n[Bigo] 841947363: Downloading JSON metadata\r\nERROR: [Bigo] 841947363: Bigo says: paramters invalid (code 1)\r\n File \"/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 647, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/bigo.py\", line 37, in _real_extract\r\n raise ExtractorError(\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import ExtractorError, urlencode_postdata\n\n\nclass BigoIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bigo\\.tv/(?:[a-z]{2,}/)?(?P<id>[^/]+)'\n\n _TESTS = [{\n 'url': 'https://www.bigo.tv/ja/221338632',\n 'info_dict': {\n 'id': '6576287577575737440',\n 'title': '\u571f\u3088\u301c\ud83d\udc81\u200d\u2642\ufe0f \u4f11\u61a9\u5ba4/REST room',\n 'thumbnail': r're:https?://.+',\n 'uploader': '\u2728Shin\ud83d\udcab',\n 'uploader_id': '221338632',\n 'is_live': True,\n },\n 'skip': 'livestream',\n }, {\n 'url': 'https://www.bigo.tv/th/Tarlerm1304',\n 'only_matching': True,\n }, {\n 'url': 'https://bigo.tv/115976881',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n user_id = self._match_id(url)\n\n info_raw = self._download_json(\n 'https://bigo.tv/studio/getInternalStudioInfo',\n user_id, data=urlencode_postdata({'siteId': user_id}))\n\n if not isinstance(info_raw, dict):\n raise ExtractorError('Received invalid JSON data')\n if info_raw.get('code'):\n raise ExtractorError(\n 'Bigo says: %s (code %s)' % (info_raw.get('msg'), info_raw.get('code')), expected=True)\n info = info_raw.get('data') or {}\n\n if not info.get('alive'):\n raise ExtractorError('This user is offline.', expected=True)\n\n return {\n 'id': info.get('roomId') or user_id,\n 'title': info.get('roomTopic') or info.get('nick_name') or user_id,\n 'formats': [{\n 'url': info.get('hls_src'),\n 'ext': 'mp4',\n 'protocol': 'm3u8',\n }],\n 'thumbnail': info.get('snapshot'),\n 'uploader': info.get('nick_name'),\n 
'uploader_id': user_id,\n 'is_live': True,\n }\n", "path": "yt_dlp/extractor/bigo.py"}]}
| 2,120 | 342 |
gh_patches_debug_28600
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-3822
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[beta][v20] Reading a notification does not invalidate the cache
Server: Beta
Version: v20-RC2/99bee1d
System: Mac OS X
Browser: 52.0.2743.116 (64-bit)
---
1. Generate a notification.
2. Read it from the site.
3. Retrieve the list of notifications through the API.
4. If the 15-minute timeout has not expired yet, the notification is still marked as unread in the API response.
</issue>
<code>
[start of zds/notification/api/views.py]
1 # coding: utf-8
2 from dry_rest_permissions.generics import DRYPermissions
3 from rest_framework import filters
4 from rest_framework.generics import ListAPIView
5 from rest_framework.permissions import IsAuthenticated
6 from rest_framework_extensions.cache.decorators import cache_response
7 from rest_framework_extensions.etag.decorators import etag
8 from rest_framework_extensions.key_constructor import bits
9 from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor
10
11 from zds.api.bits import DJRF3xPaginationKeyBit
12 from zds.notification.api.serializers import NotificationSerializer
13 from zds.notification.models import Notification
14
15
16 class PagingNotificationListKeyConstructor(DefaultKeyConstructor):
17 pagination = DJRF3xPaginationKeyBit()
18 search = bits.QueryParamsKeyBit(['search', 'ordering', 'type'])
19 list_sql_query = bits.ListSqlQueryKeyBit()
20 unique_view_id = bits.UniqueViewIdKeyBit()
21 user = bits.UserKeyBit()
22
23
24 class NotificationListAPI(ListAPIView):
25 """
26 List of notification.
27 """
28
29 filter_backends = (filters.SearchFilter, filters.OrderingFilter)
30 search_fields = ('title',)
31 ordering_fields = ('pubdate', 'title',)
32 list_key_func = PagingNotificationListKeyConstructor()
33 serializer_class = NotificationSerializer
34 permission_classes = (IsAuthenticated, DRYPermissions,)
35
36 @etag(list_key_func)
37 @cache_response(key_func=list_key_func)
38 def get(self, request, *args, **kwargs):
39 """
40 Lists all notifications of a user.
41 ---
42
43 parameters:
44 - name: Authorization
45 description: Bearer token to make an authenticated request.
46 required: true
47 paramType: header
48 - name: page
49 description: Restricts output to the given page number.
50 required: false
51 paramType: query
52 - name: page_size
53 description: Sets the number of notifications per page.
54 required: false
55 paramType: query
56 - name: search
57 description: Filters by title.
58 required: false
59 paramType: query
60 - name: ordering
61 description: Sorts the results. You can order by (-)pubdate or (-)title.
62 paramType: query
63 - name: type
64 description: Filters by notification type.
65 paramType: query
66 - name: subscription_type
67 description: Filters by subscription type.
68 paramType: query
69 - name: expand
70 description: Returns an object instead of an identifier representing the given field.
71 required: false
72 paramType: query
73 responseMessages:
74 - code: 401
75 message: Not Authenticated
76 - code: 404
77 message: Not Found
78 """
79 return self.list(request, *args, **kwargs)
80
81 def get_queryset(self):
82 queryset = Notification.objects.get_notifications_of(self.request.user)
83 subscription_type = self.request.query_params.get('subscription_type', None)
84 if subscription_type:
85 queryset = queryset.filter(subscription__content_type__model=subscription_type)
86 _type = self.request.query_params.get('type', None)
87 if _type:
88 queryset = queryset.filter(content_type__model=_type)
89 return queryset
90
[end of zds/notification/api/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zds/notification/api/views.py b/zds/notification/api/views.py
--- a/zds/notification/api/views.py
+++ b/zds/notification/api/views.py
@@ -1,4 +1,8 @@
# coding: utf-8
+import datetime
+from django.core.cache import cache
+from django.db.models.signals import post_delete
+from django.db.models.signals import post_save
from dry_rest_permissions.generics import DRYPermissions
from rest_framework import filters
from rest_framework.generics import ListAPIView
@@ -8,7 +12,7 @@
from rest_framework_extensions.key_constructor import bits
from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor
-from zds.api.bits import DJRF3xPaginationKeyBit
+from zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit
from zds.notification.api.serializers import NotificationSerializer
from zds.notification.models import Notification
@@ -19,6 +23,15 @@
list_sql_query = bits.ListSqlQueryKeyBit()
unique_view_id = bits.UniqueViewIdKeyBit()
user = bits.UserKeyBit()
+ updated_at = UpdatedAtKeyBit('api_updated_notification')
+
+
+def change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):
+ cache.set('api_updated_notification', datetime.datetime.utcnow())
+
+
+post_save.connect(receiver=change_api_notification_updated_at, sender=Notification)
+post_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)
class NotificationListAPI(ListAPIView):
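
The fix hinges on `UpdatedAtKeyBit` from `zds.api.bits`, which is not shown in this entry. A rough sketch of the usual drf-extensions pattern such a bit follows is given below; this is an assumption based on the drf-extensions documentation, not code taken from the repository.

```python
# Sketch of an UpdatedAtKeyBit, assuming the standard drf-extensions pattern.
import datetime

from django.core.cache import cache
from rest_framework_extensions.key_constructor.bits import KeyBitBase


class UpdatedAtKeyBit(KeyBitBase):
    """Makes the computed cache/ETag key change whenever the stored timestamp changes."""

    def __init__(self, cache_key, *args, **kwargs):
        self.cache_key = cache_key
        super().__init__(*args, **kwargs)

    def get_data(self, **kwargs):
        value = cache.get(self.cache_key)
        if value is None:
            value = datetime.datetime.utcnow()
            cache.set(self.cache_key, value)
        return str(value)
```

With the signal receivers in the patch bumping `api_updated_notification` on every save or delete, the ETag and cache key change as soon as a notification is marked as read, so the API no longer serves the stale cached list.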
|
{"golden_diff": "diff --git a/zds/notification/api/views.py b/zds/notification/api/views.py\n--- a/zds/notification/api/views.py\n+++ b/zds/notification/api/views.py\n@@ -1,4 +1,8 @@\n # coding: utf-8\n+import datetime\n+from django.core.cache import cache\n+from django.db.models.signals import post_delete\n+from django.db.models.signals import post_save\n from dry_rest_permissions.generics import DRYPermissions\n from rest_framework import filters\n from rest_framework.generics import ListAPIView\n@@ -8,7 +12,7 @@\n from rest_framework_extensions.key_constructor import bits\n from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor\n \n-from zds.api.bits import DJRF3xPaginationKeyBit\n+from zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit\n from zds.notification.api.serializers import NotificationSerializer\n from zds.notification.models import Notification\n \n@@ -19,6 +23,15 @@\n list_sql_query = bits.ListSqlQueryKeyBit()\n unique_view_id = bits.UniqueViewIdKeyBit()\n user = bits.UserKeyBit()\n+ updated_at = UpdatedAtKeyBit('api_updated_notification')\n+\n+\n+def change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):\n+ cache.set('api_updated_notification', datetime.datetime.utcnow())\n+\n+\n+post_save.connect(receiver=change_api_notification_updated_at, sender=Notification)\n+post_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)\n \n \n class NotificationListAPI(ListAPIView):\n", "issue": "[beta][v20] Lire une notification n'invalide pas le cache\nServeur : Beta\nVersion : v20-RC2/99bee1d\nSyst\u00e8me : Mac OS X\nNavigateur : 52.0.2743.116 (64-bit)\n\n---\n1. G\u00e9n\u00e9rez une notification.\n2. Lisez l\u00e0 depuis le site.\n3. R\u00e9cup\u00e9rez la liste des notifications par l'API.\n4. 
Si le timeout de 15 minutes n'est pas pass\u00e9 par l\u00e0, la notification est toujours marqu\u00e9e comme non lue dans la r\u00e9ponse de l'API.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom dry_rest_permissions.generics import DRYPermissions\nfrom rest_framework import filters\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework_extensions.cache.decorators import cache_response\nfrom rest_framework_extensions.etag.decorators import etag\nfrom rest_framework_extensions.key_constructor import bits\nfrom rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor\n\nfrom zds.api.bits import DJRF3xPaginationKeyBit\nfrom zds.notification.api.serializers import NotificationSerializer\nfrom zds.notification.models import Notification\n\n\nclass PagingNotificationListKeyConstructor(DefaultKeyConstructor):\n pagination = DJRF3xPaginationKeyBit()\n search = bits.QueryParamsKeyBit(['search', 'ordering', 'type'])\n list_sql_query = bits.ListSqlQueryKeyBit()\n unique_view_id = bits.UniqueViewIdKeyBit()\n user = bits.UserKeyBit()\n\n\nclass NotificationListAPI(ListAPIView):\n \"\"\"\n List of notification.\n \"\"\"\n\n filter_backends = (filters.SearchFilter, filters.OrderingFilter)\n search_fields = ('title',)\n ordering_fields = ('pubdate', 'title',)\n list_key_func = PagingNotificationListKeyConstructor()\n serializer_class = NotificationSerializer\n permission_classes = (IsAuthenticated, DRYPermissions,)\n\n @etag(list_key_func)\n @cache_response(key_func=list_key_func)\n def get(self, request, *args, **kwargs):\n \"\"\"\n Lists all notifications of a user.\n ---\n\n parameters:\n - name: Authorization\n description: Bearer token to make an authenticated request.\n required: true\n paramType: header\n - name: page\n description: Restricts output to the given page number.\n required: false\n paramType: query\n - name: page_size\n description: Sets the number of notifications per page.\n required: false\n paramType: query\n - name: search\n description: Filters by title.\n required: false\n paramType: query\n - name: ordering\n description: Sorts the results. You can order by (-)pubdate or (-)title.\n paramType: query\n - name: type\n description: Filters by notification type.\n paramType: query\n - name: subscription_type\n description: Filters by subscription type.\n paramType: query\n - name: expand\n description: Returns an object instead of an identifier representing the given field.\n required: false\n paramType: query\n responseMessages:\n - code: 401\n message: Not Authenticated\n - code: 404\n message: Not Found\n \"\"\"\n return self.list(request, *args, **kwargs)\n\n def get_queryset(self):\n queryset = Notification.objects.get_notifications_of(self.request.user)\n subscription_type = self.request.query_params.get('subscription_type', None)\n if subscription_type:\n queryset = queryset.filter(subscription__content_type__model=subscription_type)\n _type = self.request.query_params.get('type', None)\n if _type:\n queryset = queryset.filter(content_type__model=_type)\n return queryset\n", "path": "zds/notification/api/views.py"}]}
| 1,523 | 332 |
gh_patches_debug_35738
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-1682
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Increase streaming unit tests
reach parity with C# unit tests
</issue>
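
The classes to cover are the streaming request/response primitives listed below. A minimal sketch of the kind of test that could be added is shown here, using plain `unittest` for illustration; the repository's own test layout and conventions may differ.

```python
# Sketch of parity-style tests against the StreamingResponse shown below.
import unittest

from botframework.streaming.streaming_response import StreamingResponse


class TestStreamingResponse(unittest.TestCase):
    def test_add_stream_rejects_empty_content(self):
        response = StreamingResponse()
        with self.assertRaises(TypeError):
            response.add_stream(None)

    def test_set_body_appends_a_stream(self):
        response = StreamingResponse()
        response.set_body("hello")
        self.assertEqual(1, len(response.streams))


if __name__ == "__main__":
    unittest.main()
```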
<code>
[start of libraries/botframework-streaming/botframework/streaming/receive_request.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List
5
6 from botframework.streaming.payloads import ContentStream
7
8
9 class ReceiveRequest:
10 def __init__(
11 self, *, verb: str = None, path: str = None, streams: List[ContentStream]
12 ):
13 self.verb = verb
14 self.path = path
15 self.streams: List[ContentStream] = streams or []
16
17 async def read_body_as_str(self) -> str:
18 try:
19 content_stream = self.streams[0] if self.streams else None
20
21 if not content_stream:
22 # TODO: maybe raise an error
23 return ""
24
25 # TODO: encoding double check
26 stream = await content_stream.stream.read_until_end()
27 return bytes(stream).decode("utf-8-sig")
28 except Exception as error:
29 raise error
30
[end of libraries/botframework-streaming/botframework/streaming/receive_request.py]
[start of libraries/botframework-streaming/botframework/streaming/streaming_response.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import json
5 from uuid import UUID, uuid4
6 from typing import List, Union
7
8 from msrest.serialization import Model
9 from botframework.streaming.payloads import ResponseMessageStream
10 from botframework.streaming.payloads.models import Serializable
11
12
13 class StreamingResponse:
14 def __init__(
15 self, *, status_code: int = None, streams: List[ResponseMessageStream] = None
16 ):
17 self.status_code = status_code
18 self.streams = streams
19
20 def add_stream(self, content: object, identifier: UUID = None):
21 if not content:
22 raise TypeError("content can't be None")
23
24 if self.streams is None:
25 self.streams: List[ResponseMessageStream] = []
26
27 self.streams.append(
28 ResponseMessageStream(id=identifier or uuid4(), content=content)
29 )
30
31 def set_body(self, body: Union[str, Serializable, Model]):
32 # TODO: verify if msrest.serialization.Model is necessary
33 if not body:
34 return
35
36 if isinstance(body, Serializable):
37 body = body.to_json()
38 elif isinstance(body, Model):
39 body = json.dumps(body.as_dict())
40
41 self.add_stream(list(body.encode()))
42
43 @staticmethod
44 def create_response(status_code: int, body: object) -> "StreamingResponse":
45 response = StreamingResponse(status_code=status_code)
46
47 if body:
48 response.add_stream(body)
49
50 return response
51
[end of libraries/botframework-streaming/botframework/streaming/streaming_response.py]
[start of libraries/botframework-streaming/botframework/streaming/receive_response.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List, Union, Type
5
6 from msrest.serialization import Model
7 from botframework.streaming.payloads import ContentStream
8 from botframework.streaming.payloads.models import Serializable
9
10
11 class ReceiveResponse:
12 def __init__(self, status_code: int = None, streams: List[ContentStream] = None):
13 self.status_code = status_code
14 self.streams = streams
15
16 def read_body_as_json(
17 self, cls: Union[Type[Model], Type[Serializable]]
18 ) -> Union[Model, Serializable]:
19 try:
20 body_str = self.read_body_as_str()
21 body = None
22
23 if issubclass(cls, Serializable):
24 body = cls().from_json(body_str)
25 elif isinstance(cls, Model):
26 body = cls.deserialize(body_str)
27 return body
28 except Exception as error:
29 raise error
30
31 def read_body_as_str(self) -> str:
32 try:
33 content_stream = self.read_body()
34
35 if not content_stream:
36 return ""
37
38 # TODO: encoding double check
39 return content_stream.decode("utf8")
40 except Exception as error:
41 raise error
42
43 def read_body(self) -> bytes:
44 try:
45 content_stream = self.streams[0] if self.streams else None
46
47 if not content_stream:
48 return None
49
50 return bytes(content_stream.stream)
51 except Exception as error:
52 raise error
53
[end of libraries/botframework-streaming/botframework/streaming/receive_response.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libraries/botframework-streaming/botframework/streaming/receive_request.py b/libraries/botframework-streaming/botframework/streaming/receive_request.py
--- a/libraries/botframework-streaming/botframework/streaming/receive_request.py
+++ b/libraries/botframework-streaming/botframework/streaming/receive_request.py
@@ -8,7 +8,7 @@
class ReceiveRequest:
def __init__(
- self, *, verb: str = None, path: str = None, streams: List[ContentStream]
+ self, *, verb: str = None, path: str = None, streams: List[ContentStream] = None
):
self.verb = verb
self.path = path
diff --git a/libraries/botframework-streaming/botframework/streaming/receive_response.py b/libraries/botframework-streaming/botframework/streaming/receive_response.py
--- a/libraries/botframework-streaming/botframework/streaming/receive_response.py
+++ b/libraries/botframework-streaming/botframework/streaming/receive_response.py
@@ -9,9 +9,9 @@
class ReceiveResponse:
- def __init__(self, status_code: int = None, streams: List[ContentStream] = None):
+ def __init__(self, status_code: int = 0, streams: List[ContentStream] = None):
self.status_code = status_code
- self.streams = streams
+ self.streams = streams or []
def read_body_as_json(
self, cls: Union[Type[Model], Type[Serializable]]
diff --git a/libraries/botframework-streaming/botframework/streaming/streaming_response.py b/libraries/botframework-streaming/botframework/streaming/streaming_response.py
--- a/libraries/botframework-streaming/botframework/streaming/streaming_response.py
+++ b/libraries/botframework-streaming/botframework/streaming/streaming_response.py
@@ -2,6 +2,7 @@
# Licensed under the MIT License.
import json
+from http import HTTPStatus
from uuid import UUID, uuid4
from typing import List, Union
@@ -12,7 +13,7 @@
class StreamingResponse:
def __init__(
- self, *, status_code: int = None, streams: List[ResponseMessageStream] = None
+ self, *, status_code: int = 0, streams: List[ResponseMessageStream] = None
):
self.status_code = status_code
self.streams = streams
@@ -48,3 +49,20 @@
response.add_stream(body)
return response
+
+ @staticmethod
+ def not_found(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.NOT_FOUND, body)
+
+ @staticmethod
+ def forbidden(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.FORBIDDEN, body)
+
+ # pylint: disable=invalid-name
+ @staticmethod
+ def ok(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.OK, body)
+
+ @staticmethod
+ def internal_server_error(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.INTERNAL_SERVER_ERROR, body)
|
{"golden_diff": "diff --git a/libraries/botframework-streaming/botframework/streaming/receive_request.py b/libraries/botframework-streaming/botframework/streaming/receive_request.py\n--- a/libraries/botframework-streaming/botframework/streaming/receive_request.py\n+++ b/libraries/botframework-streaming/botframework/streaming/receive_request.py\n@@ -8,7 +8,7 @@\n \n class ReceiveRequest:\n def __init__(\n- self, *, verb: str = None, path: str = None, streams: List[ContentStream]\n+ self, *, verb: str = None, path: str = None, streams: List[ContentStream] = None\n ):\n self.verb = verb\n self.path = path\ndiff --git a/libraries/botframework-streaming/botframework/streaming/receive_response.py b/libraries/botframework-streaming/botframework/streaming/receive_response.py\n--- a/libraries/botframework-streaming/botframework/streaming/receive_response.py\n+++ b/libraries/botframework-streaming/botframework/streaming/receive_response.py\n@@ -9,9 +9,9 @@\n \n \n class ReceiveResponse:\n- def __init__(self, status_code: int = None, streams: List[ContentStream] = None):\n+ def __init__(self, status_code: int = 0, streams: List[ContentStream] = None):\n self.status_code = status_code\n- self.streams = streams\n+ self.streams = streams or []\n \n def read_body_as_json(\n self, cls: Union[Type[Model], Type[Serializable]]\ndiff --git a/libraries/botframework-streaming/botframework/streaming/streaming_response.py b/libraries/botframework-streaming/botframework/streaming/streaming_response.py\n--- a/libraries/botframework-streaming/botframework/streaming/streaming_response.py\n+++ b/libraries/botframework-streaming/botframework/streaming/streaming_response.py\n@@ -2,6 +2,7 @@\n # Licensed under the MIT License.\n \n import json\n+from http import HTTPStatus\n from uuid import UUID, uuid4\n from typing import List, Union\n \n@@ -12,7 +13,7 @@\n \n class StreamingResponse:\n def __init__(\n- self, *, status_code: int = None, streams: List[ResponseMessageStream] = None\n+ self, *, status_code: int = 0, streams: List[ResponseMessageStream] = None\n ):\n self.status_code = status_code\n self.streams = streams\n@@ -48,3 +49,20 @@\n response.add_stream(body)\n \n return response\n+\n+ @staticmethod\n+ def not_found(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.NOT_FOUND, body)\n+\n+ @staticmethod\n+ def forbidden(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.FORBIDDEN, body)\n+\n+ # pylint: disable=invalid-name\n+ @staticmethod\n+ def ok(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.OK, body)\n+\n+ @staticmethod\n+ def internal_server_error(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.INTERNAL_SERVER_ERROR, body)\n", "issue": "Increase streaming unit tests\nreach parity with C# unit tests\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List\n\nfrom botframework.streaming.payloads import ContentStream\n\n\nclass ReceiveRequest:\n def __init__(\n self, *, verb: str = None, path: str = None, streams: List[ContentStream]\n ):\n self.verb = verb\n self.path = path\n self.streams: List[ContentStream] = streams or []\n\n async def read_body_as_str(self) -> str:\n try:\n content_stream = self.streams[0] if self.streams else None\n\n if not content_stream:\n # TODO: maybe raise an error\n return \"\"\n\n # TODO: encoding double check\n stream = await content_stream.stream.read_until_end()\n return bytes(stream).decode(\"utf-8-sig\")\n except Exception as error:\n raise error\n", "path": "libraries/botframework-streaming/botframework/streaming/receive_request.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport json\nfrom uuid import UUID, uuid4\nfrom typing import List, Union\n\nfrom msrest.serialization import Model\nfrom botframework.streaming.payloads import ResponseMessageStream\nfrom botframework.streaming.payloads.models import Serializable\n\n\nclass StreamingResponse:\n def __init__(\n self, *, status_code: int = None, streams: List[ResponseMessageStream] = None\n ):\n self.status_code = status_code\n self.streams = streams\n\n def add_stream(self, content: object, identifier: UUID = None):\n if not content:\n raise TypeError(\"content can't be None\")\n\n if self.streams is None:\n self.streams: List[ResponseMessageStream] = []\n\n self.streams.append(\n ResponseMessageStream(id=identifier or uuid4(), content=content)\n )\n\n def set_body(self, body: Union[str, Serializable, Model]):\n # TODO: verify if msrest.serialization.Model is necessary\n if not body:\n return\n\n if isinstance(body, Serializable):\n body = body.to_json()\n elif isinstance(body, Model):\n body = json.dumps(body.as_dict())\n\n self.add_stream(list(body.encode()))\n\n @staticmethod\n def create_response(status_code: int, body: object) -> \"StreamingResponse\":\n response = StreamingResponse(status_code=status_code)\n\n if body:\n response.add_stream(body)\n\n return response\n", "path": "libraries/botframework-streaming/botframework/streaming/streaming_response.py"}, {"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List, Union, Type\n\nfrom msrest.serialization import Model\nfrom botframework.streaming.payloads import ContentStream\nfrom botframework.streaming.payloads.models import Serializable\n\n\nclass ReceiveResponse:\n def __init__(self, status_code: int = None, streams: List[ContentStream] = None):\n self.status_code = status_code\n self.streams = streams\n\n def read_body_as_json(\n self, cls: Union[Type[Model], Type[Serializable]]\n ) -> Union[Model, Serializable]:\n try:\n body_str = self.read_body_as_str()\n body = None\n\n if issubclass(cls, Serializable):\n body = cls().from_json(body_str)\n elif isinstance(cls, Model):\n body = cls.deserialize(body_str)\n return body\n except Exception as error:\n raise error\n\n def read_body_as_str(self) -> str:\n try:\n content_stream = self.read_body()\n\n if not content_stream:\n return \"\"\n\n # TODO: encoding double check\n return content_stream.decode(\"utf8\")\n except Exception as error:\n raise error\n\n def read_body(self) -> bytes:\n try:\n content_stream = self.streams[0] if self.streams else None\n\n if not content_stream:\n return None\n\n return bytes(content_stream.stream)\n except Exception as error:\n raise error\n", "path": "libraries/botframework-streaming/botframework/streaming/receive_response.py"}]}
| 1,718 | 735 |
gh_patches_debug_14007
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-1219
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSV export broken
### Short description
CSV export fails when the plot (curve) name contains characters that the default file encoding cannot encode.
### Code to reproduce
```python
from pyqtgraph.Qt import QtGui, QtCore
import numpy as np
import pyqtgraph as pg
#QtGui.QApplication.setGraphicsSystem('raster')
app = QtGui.QApplication([])
win = pg.GraphicsLayoutWidget(show=True, title="Basic plotting examples")
win.resize(1000,600)
win.setWindowTitle('pyqtgraph example: Plotting')
pg.setConfigOptions(antialias=True)
pw = win.addPlot(title="Scatter plot, axis labels, log scale")
pw.addLegend()
pw.plot(np.random.normal(size=100), pen=(255,0,0), name="\u00A0下加热体")
QtGui.QApplication.instance().exec_()
```
### Expected behavior
CSV export succeeds.
### Real behavior
CSV export fails with the traceback below.
```
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
c:\program files\python37\lib\site-packages\pyqtgraph\exporters\Exporter.py in fileSaveFinished(self, fileName)
75 fileName = fileName + '.' + selectedExt.lstrip('.')
76
---> 77 self.export(fileName=fileName, **self.fileDialog.opts)
78
79 def getScene(self):
c:\program files\python37\lib\site-packages\pyqtgraph\exporters\CSVExporter.py in export(self, fileName)
58
59 with open(fileName, 'w') as fd:
---> 60 fd.write(sep.join(header) + '\n')
61 i = 0
62 numFormat = '%%0.%dg' % self.params['precision']
UnicodeEncodeError: 'gbk' codec can't encode character '\xa0' in position 1: illegal multibyte sequence
```
### Tested environment(s)
* PyQtGraph version: 0.11.0.dev0+g2203933
* Qt Python binding: PyQt5 5.13.2 Qt 5.13.2
* Python version: Python 3.7.5
* NumPy version: 1.17.4
* Operating system: Windows 7 X64
* Installation method: pip git+
### Additional context
I use "\u00A0" because i want to add some space before label name in the legend.
Could i use the csv export by "utf-8" but not "gbk" ?
</issue>
<code>
[start of pyqtgraph/exporters/CSVExporter.py]
1 # -*- coding: utf-8 -*-
2 from ..Qt import QtGui, QtCore
3 from .Exporter import Exporter
4 from ..parametertree import Parameter
5 from .. import PlotItem
6
7 __all__ = ['CSVExporter']
8
9
10 class CSVExporter(Exporter):
11 Name = "CSV from plot data"
12 windows = []
13 def __init__(self, item):
14 Exporter.__init__(self, item)
15 self.params = Parameter(name='params', type='group', children=[
16 {'name': 'separator', 'type': 'list', 'value': 'comma', 'values': ['comma', 'tab']},
17 {'name': 'precision', 'type': 'int', 'value': 10, 'limits': [0, None]},
18 {'name': 'columnMode', 'type': 'list', 'values': ['(x,y) per plot', '(x,y,y,y) for all plots']}
19 ])
20
21 def parameters(self):
22 return self.params
23
24 def export(self, fileName=None):
25
26 if not isinstance(self.item, PlotItem):
27 raise Exception("Must have a PlotItem selected for CSV export.")
28
29 if fileName is None:
30 self.fileSaveDialog(filter=["*.csv", "*.tsv"])
31 return
32
33 data = []
34 header = []
35
36 appendAllX = self.params['columnMode'] == '(x,y) per plot'
37
38 for i, c in enumerate(self.item.curves):
39 cd = c.getData()
40 if cd[0] is None:
41 continue
42 data.append(cd)
43 if hasattr(c, 'implements') and c.implements('plotData') and c.name() is not None:
44 name = c.name().replace('"', '""') + '_'
45 xName, yName = '"'+name+'x"', '"'+name+'y"'
46 else:
47 xName = 'x%04d' % i
48 yName = 'y%04d' % i
49 if appendAllX or i == 0:
50 header.extend([xName, yName])
51 else:
52 header.extend([yName])
53
54 if self.params['separator'] == 'comma':
55 sep = ','
56 else:
57 sep = '\t'
58
59 with open(fileName, 'w') as fd:
60 fd.write(sep.join(header) + '\n')
61 i = 0
62 numFormat = '%%0.%dg' % self.params['precision']
63 numRows = max([len(d[0]) for d in data])
64 for i in range(numRows):
65 for j, d in enumerate(data):
66 # write x value if this is the first column, or if we want
67 # x for all rows
68 if appendAllX or j == 0:
69 if d is not None and i < len(d[0]):
70 fd.write(numFormat % d[0][i] + sep)
71 else:
72 fd.write(' %s' % sep)
73
74 # write y value
75 if d is not None and i < len(d[1]):
76 fd.write(numFormat % d[1][i] + sep)
77 else:
78 fd.write(' %s' % sep)
79 fd.write('\n')
80
81
82 CSVExporter.register()
83
84
85
[end of pyqtgraph/exporters/CSVExporter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/exporters/CSVExporter.py b/pyqtgraph/exporters/CSVExporter.py
--- a/pyqtgraph/exporters/CSVExporter.py
+++ b/pyqtgraph/exporters/CSVExporter.py
@@ -3,6 +3,7 @@
from .Exporter import Exporter
from ..parametertree import Parameter
from .. import PlotItem
+from ..python2_3 import asUnicode
__all__ = ['CSVExporter']
@@ -57,7 +58,7 @@
sep = '\t'
with open(fileName, 'w') as fd:
- fd.write(sep.join(header) + '\n')
+ fd.write(sep.join(map(asUnicode, header)) + '\n')
i = 0
numFormat = '%%0.%dg' % self.params['precision']
numRows = max([len(d[0]) for d in data])
|
{"golden_diff": "diff --git a/pyqtgraph/exporters/CSVExporter.py b/pyqtgraph/exporters/CSVExporter.py\n--- a/pyqtgraph/exporters/CSVExporter.py\n+++ b/pyqtgraph/exporters/CSVExporter.py\n@@ -3,6 +3,7 @@\n from .Exporter import Exporter\n from ..parametertree import Parameter\n from .. import PlotItem\n+from ..python2_3 import asUnicode\n \n __all__ = ['CSVExporter']\n \n@@ -57,7 +58,7 @@\n sep = '\\t'\n \n with open(fileName, 'w') as fd:\n- fd.write(sep.join(header) + '\\n')\n+ fd.write(sep.join(map(asUnicode, header)) + '\\n')\n i = 0\n numFormat = '%%0.%dg' % self.params['precision']\n numRows = max([len(d[0]) for d in data])\n", "issue": "CSV export broken\n### Short description\r\nExport CSV failed when the plot name has decode error characters.\r\n\r\n### Code to reproduce\r\n```python\r\nfrom pyqtgraph.Qt import QtGui, QtCore\r\nimport numpy as np\r\nimport pyqtgraph as pg\r\n\r\n#QtGui.QApplication.setGraphicsSystem('raster')\r\napp = QtGui.QApplication([])\r\nwin = pg.GraphicsLayoutWidget(show=True, title=\"Basic plotting examples\")\r\nwin.resize(1000,600)\r\nwin.setWindowTitle('pyqtgraph example: Plotting')\r\n\r\n\r\npg.setConfigOptions(antialias=True)\r\n\r\npw = win.addPlot(title=\"Scatter plot, axis labels, log scale\")\r\npw.addLegend()\r\npw .plot(np.random.normal(size=100), pen=(255,0,0), name=\"\\u00A0\u4e0b\u52a0\u70ed\u4f53\")\r\n\r\nQtGui.QApplication.instance().exec_()\r\n```\r\n\r\n### Expected behavior\r\nExport CSV Success\r\n\r\n### Real behavior\r\nExport CSV Failed\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnicodeEncodeError Traceback (most recent call last)\r\nc:\\program files\\python37\\lib\\site-packages\\pyqtgraph\\exporters\\Exporter.py in fileSaveFinished(self, fileName)\r\n 75 fileName = fileName + '.' + selectedExt.lstrip('.')\r\n 76\r\n---> 77 self.export(fileName=fileName, **self.fileDialog.opts)\r\n 78\r\n 79 def getScene(self):\r\n\r\nc:\\program files\\python37\\lib\\site-packages\\pyqtgraph\\exporters\\CSVExporter.py in export(self, fileName)\r\n 58\r\n 59 with open(fileName, 'w') as fd:\r\n---> 60 fd.write(sep.join(header) + '\\n')\r\n 61 i = 0\r\n 62 numFormat = '%%0.%dg' % self.params['precision']\r\n\r\nUnicodeEncodeError: 'gbk' codec can't encode character '\\xa0' in position 1: illegal multibyte sequence\r\n```\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 0.11.0.dev0+g2203933\r\n * Qt Python binding: PyQt5 5.13.2 Qt 5.13.2\r\n * Python version: Python 3.7.5 \r\n * NumPy version: 1.17.4\r\n * Operating system: Windows 7 X64\r\n * Installation method: pip git+\r\n\r\n### Additional context\r\nI use \"\\u00A0\" because i want to add some space before label name in the legend.\r\nCould i use the csv export by \"utf-8\" but not \"gbk\" ?\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom ..Qt import QtGui, QtCore\nfrom .Exporter import Exporter\nfrom ..parametertree import Parameter\nfrom .. 
import PlotItem\n\n__all__ = ['CSVExporter']\n \n \nclass CSVExporter(Exporter):\n Name = \"CSV from plot data\"\n windows = []\n def __init__(self, item):\n Exporter.__init__(self, item)\n self.params = Parameter(name='params', type='group', children=[\n {'name': 'separator', 'type': 'list', 'value': 'comma', 'values': ['comma', 'tab']},\n {'name': 'precision', 'type': 'int', 'value': 10, 'limits': [0, None]},\n {'name': 'columnMode', 'type': 'list', 'values': ['(x,y) per plot', '(x,y,y,y) for all plots']}\n ])\n \n def parameters(self):\n return self.params\n \n def export(self, fileName=None):\n \n if not isinstance(self.item, PlotItem):\n raise Exception(\"Must have a PlotItem selected for CSV export.\")\n \n if fileName is None:\n self.fileSaveDialog(filter=[\"*.csv\", \"*.tsv\"])\n return\n\n data = []\n header = []\n\n appendAllX = self.params['columnMode'] == '(x,y) per plot'\n\n for i, c in enumerate(self.item.curves):\n cd = c.getData()\n if cd[0] is None:\n continue\n data.append(cd)\n if hasattr(c, 'implements') and c.implements('plotData') and c.name() is not None:\n name = c.name().replace('\"', '\"\"') + '_'\n xName, yName = '\"'+name+'x\"', '\"'+name+'y\"'\n else:\n xName = 'x%04d' % i\n yName = 'y%04d' % i\n if appendAllX or i == 0:\n header.extend([xName, yName])\n else:\n header.extend([yName])\n\n if self.params['separator'] == 'comma':\n sep = ','\n else:\n sep = '\\t'\n\n with open(fileName, 'w') as fd:\n fd.write(sep.join(header) + '\\n')\n i = 0\n numFormat = '%%0.%dg' % self.params['precision']\n numRows = max([len(d[0]) for d in data])\n for i in range(numRows):\n for j, d in enumerate(data):\n # write x value if this is the first column, or if we want\n # x for all rows\n if appendAllX or j == 0:\n if d is not None and i < len(d[0]):\n fd.write(numFormat % d[0][i] + sep)\n else:\n fd.write(' %s' % sep)\n\n # write y value\n if d is not None and i < len(d[1]):\n fd.write(numFormat % d[1][i] + sep)\n else:\n fd.write(' %s' % sep)\n fd.write('\\n')\n\n\nCSVExporter.register() \n \n \n", "path": "pyqtgraph/exporters/CSVExporter.py"}]}
| 1,970 | 197 |
gh_patches_debug_36559
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2930
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add API endpoint for event slugs
### Is your feature request related to a problem? Please describe.
For the app we want to fetch events by their slug, but this is currently not possible.
### Describe the solution you'd like
Add an API endpoint for event slugs.
</issue>
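
One way to read the request is an event detail lookup by slug alongside the numeric `pk` routes in the file below. A hypothetical sketch of such a route follows; the route name and the `lookup_field` wiring are assumptions for illustration and are not taken from the repository or its patch.

```python
# Hypothetical slug-based detail route, illustration only.
from django.urls import path

from events.api.v2.views import EventDetailView

urlpatterns = [
    path(
        "events/<slug:slug>/",
        EventDetailView.as_view(lookup_field="slug"),  # assumes the view supports slug lookups
        name="event-detail-slug",
    ),
]
```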
<code>
[start of website/events/api/v2/urls.py]
1 """Events app API v2 urls."""
2 from django.urls import path
3
4 from events.api.v2.views import (
5 EventDetailView,
6 EventListView,
7 EventRegistrationDetailView,
8 EventRegistrationFieldsView,
9 EventRegistrationsView,
10 ExternalEventDetailView,
11 ExternalEventListView,
12 MarkPresentAPIView,
13 )
14
15 app_name = "events"
16
17 urlpatterns = [
18 path("events/", EventListView.as_view(), name="events-list"),
19 path(
20 "events/<int:pk>/",
21 EventDetailView.as_view(),
22 name="event-detail",
23 ),
24 path(
25 "events/<int:pk>/registrations/",
26 EventRegistrationsView.as_view(),
27 name="event-registrations",
28 ),
29 path(
30 "events/<int:event_id>/registrations/<int:pk>/",
31 EventRegistrationDetailView.as_view(),
32 name="event-registration-detail",
33 ),
34 path(
35 "events/<int:event_id>/registrations/<int:registration_id>/fields/",
36 EventRegistrationFieldsView.as_view(),
37 name="event-registration-fields",
38 ),
39 path(
40 "events/<int:pk>/mark-present/<uuid:token>/",
41 MarkPresentAPIView.as_view(),
42 name="mark-present",
43 ),
44 path(
45 "events/external/", ExternalEventListView.as_view(), name="external-events-list"
46 ),
47 path(
48 "events/external/<int:pk>/",
49 ExternalEventDetailView.as_view(),
50 name="external-event-detail",
51 ),
52 ]
53
[end of website/events/api/v2/urls.py]
[start of website/events/api/v2/serializers/event.py]
1 from rest_framework import serializers
2
3 from activemembers.api.v2.serializers.member_group import MemberGroupSerializer
4 from documents.api.v2.serializers.document import DocumentSerializer
5 from events import services
6 from events.api.v2.serializers.event_registration import EventRegistrationSerializer
7 from events.models import Event
8 from payments.api.v2.serializers.payment_amount import PaymentAmountSerializer
9 from thaliawebsite.api.v2.serializers import CleanedHTMLSerializer
10 from thaliawebsite.api.v2.serializers.cleaned_model_serializer import (
11 CleanedModelSerializer,
12 )
13 from utils.snippets import create_google_maps_url
14
15
16 class EventSerializer(CleanedModelSerializer):
17 """Serializer for events."""
18
19 class Meta:
20 model = Event
21 fields = (
22 "pk",
23 "title",
24 "description",
25 "caption",
26 "start",
27 "end",
28 "category",
29 "registration_start",
30 "registration_end",
31 "cancel_deadline",
32 "optional_registrations",
33 "location",
34 "price",
35 "fine",
36 "num_participants",
37 "max_participants",
38 "no_registration_message",
39 "registration_status",
40 "cancel_too_late_message",
41 "has_fields",
42 "food_event",
43 "maps_url",
44 "user_permissions",
45 "user_registration",
46 "organisers",
47 "documents",
48 )
49
50 description = CleanedHTMLSerializer()
51 organisers = MemberGroupSerializer(many=True)
52 user_registration = serializers.SerializerMethodField("_user_registration")
53 num_participants = serializers.SerializerMethodField("_num_participants")
54 maps_url = serializers.SerializerMethodField("_maps_url")
55 registration_status = serializers.SerializerMethodField("_registration_status")
56 price = PaymentAmountSerializer()
57 fine = PaymentAmountSerializer()
58 documents = DocumentSerializer(many=True)
59 user_permissions = serializers.SerializerMethodField("_user_permissions")
60
61 def _user_registration(self, instance: Event):
62 if self.context["request"].member and len(instance.member_registration) > 0:
63 registration = instance.member_registration[-1]
64 return EventRegistrationSerializer(
65 registration,
66 context=self.context,
67 fields=(
68 "pk",
69 "present",
70 "queue_position",
71 "is_cancelled",
72 "is_late_cancellation",
73 "date",
74 "payment",
75 ),
76 ).data
77 return None
78
79 def _registration_status(self, instance: Event):
80 if self.context["request"].member and len(instance.member_registration) > 0:
81 registration = instance.member_registration[-1]
82 else:
83 registration = None
84 status = services.registration_status(
85 instance, registration, self.context["request"].member
86 )
87 cancel_status = services.cancel_status(instance, registration)
88
89 status_str = services.registration_status_string(status, instance, registration)
90 cancel_str = services.cancel_info_string(instance, cancel_status, status)
91 if services.show_cancel_status(status) and cancel_str != "":
92 return f"{status_str} {cancel_str}"
93 return f"{status_str}"
94
95 def _num_participants(self, instance: Event):
96 if instance.max_participants:
97 return min(instance.participant_count, instance.max_participants)
98 return instance.participant_count
99
100 def _user_permissions(self, instance):
101 member = self.context["request"].member
102 return services.event_permissions(member, instance, registration_prefetch=True)
103
104 def _maps_url(self, instance):
105 return create_google_maps_url(instance.map_location, zoom=13, size="450x250")
106
[end of website/events/api/v2/serializers/event.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py
--- a/website/events/api/v2/serializers/event.py
+++ b/website/events/api/v2/serializers/event.py
@@ -1,4 +1,5 @@
from rest_framework import serializers
+from rest_framework.reverse import reverse
from activemembers.api.v2.serializers.member_group import MemberGroupSerializer
from documents.api.v2.serializers.document import DocumentSerializer
@@ -20,6 +21,8 @@
model = Event
fields = (
"pk",
+ "slug",
+ "url",
"title",
"description",
"caption",
@@ -57,6 +60,7 @@
fine = PaymentAmountSerializer()
documents = DocumentSerializer(many=True)
user_permissions = serializers.SerializerMethodField("_user_permissions")
+ url = serializers.SerializerMethodField("_url")
def _user_registration(self, instance: Event):
if self.context["request"].member and len(instance.member_registration) > 0:
@@ -101,5 +105,18 @@
member = self.context["request"].member
return services.event_permissions(member, instance, registration_prefetch=True)
+ def _url(self, instance: Event):
+ if instance.slug is None:
+ return reverse(
+ "events:event",
+ kwargs={"pk": instance.pk},
+ request=self.context["request"],
+ )
+ return reverse(
+ "events:event",
+ kwargs={"slug": instance.slug},
+ request=self.context["request"],
+ )
+
def _maps_url(self, instance):
return create_google_maps_url(instance.map_location, zoom=13, size="450x250")
diff --git a/website/events/api/v2/urls.py b/website/events/api/v2/urls.py
--- a/website/events/api/v2/urls.py
+++ b/website/events/api/v2/urls.py
@@ -21,6 +21,11 @@
EventDetailView.as_view(),
name="event-detail",
),
+ path(
+ "events/<slug:slug>/",
+ EventDetailView.as_view(lookup_field="slug"),
+ name="event-detail",
+ ),
path(
"events/<int:pk>/registrations/",
EventRegistrationsView.as_view(),
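The diff above does two things: it registers the existing detail view a second time under `events/<slug:slug>/` with `lookup_field="slug"`, and it adds a `url` field that prefers the slug route when the event has one. A rough, self-contained sketch of the same DRF pattern (the model import and field list below are placeholders for illustration, not the repository's actual code):

```python
# Hedged sketch of the pattern in the diff above; `Event` and the field list
# are stand-ins, while the route name "events:event" follows the diff itself.
from rest_framework import generics, serializers
from rest_framework.reverse import reverse

from events.models import Event  # assumed app layout


class EventSerializer(serializers.ModelSerializer):
    url = serializers.SerializerMethodField()

    class Meta:
        model = Event
        fields = ("pk", "slug", "url", "title")

    def get_url(self, obj):
        # Prefer the slug route when a slug exists, otherwise fall back to pk.
        kwargs = {"slug": obj.slug} if obj.slug else {"pk": obj.pk}
        return reverse("events:event", kwargs=kwargs, request=self.context["request"])


class EventDetailView(generics.RetrieveAPIView):
    queryset = Event.objects.all()
    serializer_class = EventSerializer
    lookup_field = "pk"  # the slug route overrides this via as_view(lookup_field="slug")
```

Passing `lookup_field="slug"` through `as_view()` works because DRF generic views fetch the object with whatever `lookup_field` the view instance carries, so both routes can share one view class.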
|
{"golden_diff": "diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py\n--- a/website/events/api/v2/serializers/event.py\n+++ b/website/events/api/v2/serializers/event.py\n@@ -1,4 +1,5 @@\n from rest_framework import serializers\n+from rest_framework.reverse import reverse\n \n from activemembers.api.v2.serializers.member_group import MemberGroupSerializer\n from documents.api.v2.serializers.document import DocumentSerializer\n@@ -20,6 +21,8 @@\n model = Event\n fields = (\n \"pk\",\n+ \"slug\",\n+ \"url\",\n \"title\",\n \"description\",\n \"caption\",\n@@ -57,6 +60,7 @@\n fine = PaymentAmountSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n+ url = serializers.SerializerMethodField(\"_url\")\n \n def _user_registration(self, instance: Event):\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n@@ -101,5 +105,18 @@\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance, registration_prefetch=True)\n \n+ def _url(self, instance: Event):\n+ if instance.slug is None:\n+ return reverse(\n+ \"events:event\",\n+ kwargs={\"pk\": instance.pk},\n+ request=self.context[\"request\"],\n+ )\n+ return reverse(\n+ \"events:event\",\n+ kwargs={\"slug\": instance.slug},\n+ request=self.context[\"request\"],\n+ )\n+\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\ndiff --git a/website/events/api/v2/urls.py b/website/events/api/v2/urls.py\n--- a/website/events/api/v2/urls.py\n+++ b/website/events/api/v2/urls.py\n@@ -21,6 +21,11 @@\n EventDetailView.as_view(),\n name=\"event-detail\",\n ),\n+ path(\n+ \"events/<slug:slug>/\",\n+ EventDetailView.as_view(lookup_field=\"slug\"),\n+ name=\"event-detail\",\n+ ),\n path(\n \"events/<int:pk>/registrations/\",\n EventRegistrationsView.as_view(),\n", "issue": "Add API endpoint for event slugs\n### Is your feature request related to a problem? 
Please describe.\r\nFor the app we want to get events based on their slug, this is currently not possible.\r\n\r\n### Describe the solution you'd like\r\nAdd an API endpoint for event slugs.\r\n\n", "before_files": [{"content": "\"\"\"Events app API v2 urls.\"\"\"\nfrom django.urls import path\n\nfrom events.api.v2.views import (\n EventDetailView,\n EventListView,\n EventRegistrationDetailView,\n EventRegistrationFieldsView,\n EventRegistrationsView,\n ExternalEventDetailView,\n ExternalEventListView,\n MarkPresentAPIView,\n)\n\napp_name = \"events\"\n\nurlpatterns = [\n path(\"events/\", EventListView.as_view(), name=\"events-list\"),\n path(\n \"events/<int:pk>/\",\n EventDetailView.as_view(),\n name=\"event-detail\",\n ),\n path(\n \"events/<int:pk>/registrations/\",\n EventRegistrationsView.as_view(),\n name=\"event-registrations\",\n ),\n path(\n \"events/<int:event_id>/registrations/<int:pk>/\",\n EventRegistrationDetailView.as_view(),\n name=\"event-registration-detail\",\n ),\n path(\n \"events/<int:event_id>/registrations/<int:registration_id>/fields/\",\n EventRegistrationFieldsView.as_view(),\n name=\"event-registration-fields\",\n ),\n path(\n \"events/<int:pk>/mark-present/<uuid:token>/\",\n MarkPresentAPIView.as_view(),\n name=\"mark-present\",\n ),\n path(\n \"events/external/\", ExternalEventListView.as_view(), name=\"external-events-list\"\n ),\n path(\n \"events/external/<int:pk>/\",\n ExternalEventDetailView.as_view(),\n name=\"external-event-detail\",\n ),\n]\n", "path": "website/events/api/v2/urls.py"}, {"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"registration_status\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n registration_status = serializers.SerializerMethodField(\"_registration_status\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance: Event):\n if self.context[\"request\"].member and 
len(instance.member_registration) > 0:\n registration = instance.member_registration[-1]\n return EventRegistrationSerializer(\n registration,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n return None\n\n def _registration_status(self, instance: Event):\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n registration = instance.member_registration[-1]\n else:\n registration = None\n status = services.registration_status(\n instance, registration, self.context[\"request\"].member\n )\n cancel_status = services.cancel_status(instance, registration)\n\n status_str = services.registration_status_string(status, instance, registration)\n cancel_str = services.cancel_info_string(instance, cancel_status, status)\n if services.show_cancel_status(status) and cancel_str != \"\":\n return f\"{status_str} {cancel_str}\"\n return f\"{status_str}\"\n\n def _num_participants(self, instance: Event):\n if instance.max_participants:\n return min(instance.participant_count, instance.max_participants)\n return instance.participant_count\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance, registration_prefetch=True)\n\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}]}
| 1,976 | 524 |
gh_patches_debug_34713
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-2971
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FillBetweenItem has no way to change FillRule
<!-- In the following, please describe your issue in detail! -->
<!-- If some of the sections do not apply, just remove them. -->
### Short description
There is currently no way (at least that I have found) to change the fill rule for the painter path in the FillBetweenItem. Being able to set it to winding would be very useful for certain cases.
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
import pyqtgraph as pg
from PySide2.QtWidgets import QApplication
win = pg.plot()
win.setWindowTitle('pyqtgraph example: FillBetweenItem')
win.setXRange(0, 1.5)
win.setYRange(0, 1.5)
x1=[0,1,1,0,0]
y1=[0,0,1,1,0]
x2=[0.5,1.5,1.5,0.5,0.5]
y2=[0.5,0.5,1.5,1.5,0.5]
curve1 = win.plot(x=x1, y=y1, pen='k')
curve2 = win.plot(x=x2, y=y2, pen='k')
brushes = [0.5, (100, 100, 255), 0.5]
fill = pg.FillBetweenItem(curve1, curve2,brush=(100,100,255))
win.addItem(fill)
## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
QApplication.instance().exec_()
```
### Expected behavior
Fill in the overlap
### Real behavior
Hole in the middle.
### Tested environment(s)
* PyQtGraph version: 0.12.1
* Qt Python binding: PyQt5 5.15.4 Qt 5.15.2
* Python version: 3.7.7
* NumPy version: 1.20.2
* Operating system: Windows 10
* Installation method: PIP
</issue>
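For background, the behaviour being requested already exists one level down in Qt: `QPainterPath.setFillRule` switches from the default odd-even rule to winding fill, which paints overlapping sub-shapes as a single region. A minimal sketch (PySide2/PyQt5 naming, matching the issue's environment; Qt 6 bindings expose the enum as `QtCore.Qt.FillRule.WindingFill`):

```python
from PySide2 import QtCore, QtGui

path = QtGui.QPainterPath()
path.addRect(QtCore.QRectF(0.0, 0.0, 1.0, 1.0))
path.addRect(QtCore.QRectF(0.5, 0.5, 1.0, 1.0))

# With the default Qt.OddEvenFill the overlap of the two rectangles is a hole;
# winding fill paints the union instead.
path.setFillRule(QtCore.Qt.WindingFill)
```

FillBetweenItem builds its `QPainterPath` internally, so in the version quoted below there is no public way to reach this setting, which is exactly what the request asks for.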
<code>
[start of pyqtgraph/examples/FillBetweenItem.py]
1 """
2 Demonstrates use of FillBetweenItem to fill the space between two plot curves.
3 """
4
5 import numpy as np
6
7 import pyqtgraph as pg
8 from pyqtgraph.Qt import QtCore
9
10 #FIXME: When running on Qt5, not as perfect as on Qt4
11
12 win = pg.plot()
13 win.setWindowTitle('pyqtgraph example: FillBetweenItem')
14 win.setXRange(-10, 10)
15 win.setYRange(-10, 10)
16
17 N = 200
18 x = np.linspace(-10, 10, N)
19 gauss = np.exp(-x**2 / 20.)
20 mn = mx = np.zeros(len(x))
21 curves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]
22 brushes = [0.5, (100, 100, 255), 0.5]
23 fills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]
24 for f in fills:
25 win.addItem(f)
26
27 def update():
28 global mx, mn, curves, gauss, x
29 a = 5 / abs(np.random.normal(loc=1, scale=0.2))
30 y1 = -np.abs(a*gauss + np.random.normal(size=len(x)))
31 y2 = np.abs(a*gauss + np.random.normal(size=len(x)))
32
33 s = 0.01
34 mn = np.where(y1<mn, y1, mn) * (1-s) + y1 * s
35 mx = np.where(y2>mx, y2, mx) * (1-s) + y2 * s
36 curves[0].setData(x, mn)
37 curves[1].setData(x, y1)
38 curves[2].setData(x, y2)
39 curves[3].setData(x, mx)
40
41
42 timer = QtCore.QTimer()
43 timer.timeout.connect(update)
44 timer.start(30)
45
46
47 if __name__ == '__main__':
48 pg.exec()
49
[end of pyqtgraph/examples/FillBetweenItem.py]
[start of pyqtgraph/graphicsItems/FillBetweenItem.py]
1 from .. import functions as fn
2 from ..Qt import QtGui, QtWidgets
3 from .PlotCurveItem import PlotCurveItem
4 from .PlotDataItem import PlotDataItem
5
6 __all__ = ['FillBetweenItem']
7
8 class FillBetweenItem(QtWidgets.QGraphicsPathItem):
9 """
10 GraphicsItem filling the space between two PlotDataItems.
11 """
12 def __init__(self, curve1=None, curve2=None, brush=None, pen=None):
13 QtWidgets.QGraphicsPathItem.__init__(self)
14 self.curves = None
15 if curve1 is not None and curve2 is not None:
16 self.setCurves(curve1, curve2)
17 elif curve1 is not None or curve2 is not None:
18 raise Exception("Must specify two curves to fill between.")
19
20 if brush is not None:
21 self.setBrush(brush)
22 self.setPen(pen)
23 self.updatePath()
24
25 def setBrush(self, *args, **kwds):
26 """Change the fill brush. Acceps the same arguments as pg.mkBrush()"""
27 QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))
28
29 def setPen(self, *args, **kwds):
30 QtWidgets.QGraphicsPathItem.setPen(self, fn.mkPen(*args, **kwds))
31
32 def setCurves(self, curve1, curve2):
33 """Set the curves to fill between.
34
35 Arguments must be instances of PlotDataItem or PlotCurveItem.
36
37 Added in version 0.9.9
38 """
39 if self.curves is not None:
40 for c in self.curves:
41 try:
42 c.sigPlotChanged.disconnect(self.curveChanged)
43 except (TypeError, RuntimeError):
44 pass
45
46 curves = [curve1, curve2]
47 for c in curves:
48 if not isinstance(c, PlotDataItem) and not isinstance(c, PlotCurveItem):
49 raise TypeError("Curves must be PlotDataItem or PlotCurveItem.")
50 self.curves = curves
51 curve1.sigPlotChanged.connect(self.curveChanged)
52 curve2.sigPlotChanged.connect(self.curveChanged)
53 self.setZValue(min(curve1.zValue(), curve2.zValue())-1)
54 self.curveChanged()
55
56 def curveChanged(self):
57 self.updatePath()
58
59 def updatePath(self):
60 if self.curves is None:
61 self.setPath(QtGui.QPainterPath())
62 return
63 paths = []
64 for c in self.curves:
65 if isinstance(c, PlotDataItem):
66 paths.append(c.curve.getPath())
67 elif isinstance(c, PlotCurveItem):
68 paths.append(c.getPath())
69
70 path = QtGui.QPainterPath()
71 transform = QtGui.QTransform()
72 ps1 = paths[0].toSubpathPolygons(transform)
73 ps2 = paths[1].toReversed().toSubpathPolygons(transform)
74 ps2.reverse()
75 if len(ps1) == 0 or len(ps2) == 0:
76 self.setPath(QtGui.QPainterPath())
77 return
78
79
80 for p1, p2 in zip(ps1, ps2):
81 path.addPolygon(p1 + p2)
82 self.setPath(path)
83
[end of pyqtgraph/graphicsItems/FillBetweenItem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/examples/FillBetweenItem.py b/pyqtgraph/examples/FillBetweenItem.py
--- a/pyqtgraph/examples/FillBetweenItem.py
+++ b/pyqtgraph/examples/FillBetweenItem.py
@@ -20,7 +20,8 @@
mn = mx = np.zeros(len(x))
curves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]
brushes = [0.5, (100, 100, 255), 0.5]
-fills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]
+fills = [pg.FillBetweenItem(curves[0], curves[3], brushes[0]),
+ pg.FillBetweenItem(curves[1], curves[2], brushes[1])]
for f in fills:
win.addItem(f)
diff --git a/pyqtgraph/graphicsItems/FillBetweenItem.py b/pyqtgraph/graphicsItems/FillBetweenItem.py
--- a/pyqtgraph/graphicsItems/FillBetweenItem.py
+++ b/pyqtgraph/graphicsItems/FillBetweenItem.py
@@ -23,7 +23,7 @@
self.updatePath()
def setBrush(self, *args, **kwds):
- """Change the fill brush. Acceps the same arguments as pg.mkBrush()"""
+ """Change the fill brush. Accepts the same arguments as pg.mkBrush()"""
QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))
def setPen(self, *args, **kwds):
@@ -55,7 +55,6 @@
def curveChanged(self):
self.updatePath()
-
def updatePath(self):
if self.curves is None:
self.setPath(QtGui.QPainterPath())
@@ -69,14 +68,18 @@
path = QtGui.QPainterPath()
transform = QtGui.QTransform()
+
ps1 = paths[0].toSubpathPolygons(transform)
ps2 = paths[1].toReversed().toSubpathPolygons(transform)
ps2.reverse()
+
if len(ps1) == 0 or len(ps2) == 0:
self.setPath(QtGui.QPainterPath())
return
-
for p1, p2 in zip(ps1, ps2):
- path.addPolygon(p1 + p2)
+ intersection = p1.intersected(p2)
+ if not intersection.isEmpty():
+ path.addPolygon(intersection)
+ path.addPolygon(p1 + p2)
self.setPath(path)
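The patch above keeps Qt's default odd-even fill rule and instead paints the overlap explicitly: it computes the intersection of each pair of sub-polygons and adds it to the path alongside the combined polygon, so the doubly-covered region no longer cancels out. What that `intersected` call returns, as a tiny standalone sketch (PySide2/PyQt5 naming assumed):

```python
from PySide2 import QtCore, QtGui


def square(x, y, size=1.0):
    return QtGui.QPolygonF([
        QtCore.QPointF(x, y),
        QtCore.QPointF(x + size, y),
        QtCore.QPointF(x + size, y + size),
        QtCore.QPointF(x, y + size),
    ])


p1, p2 = square(0.0, 0.0), square(0.5, 0.5)
overlap = p1.intersected(p2)   # the shared 0.5 x 0.5 square
print(overlap.isEmpty())       # False, so the patch adds it to the painter path
```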
|
{"golden_diff": "diff --git a/pyqtgraph/examples/FillBetweenItem.py b/pyqtgraph/examples/FillBetweenItem.py\n--- a/pyqtgraph/examples/FillBetweenItem.py\n+++ b/pyqtgraph/examples/FillBetweenItem.py\n@@ -20,7 +20,8 @@\n mn = mx = np.zeros(len(x))\n curves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]\n brushes = [0.5, (100, 100, 255), 0.5]\n-fills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]\n+fills = [pg.FillBetweenItem(curves[0], curves[3], brushes[0]),\n+ pg.FillBetweenItem(curves[1], curves[2], brushes[1])]\n for f in fills:\n win.addItem(f)\n \ndiff --git a/pyqtgraph/graphicsItems/FillBetweenItem.py b/pyqtgraph/graphicsItems/FillBetweenItem.py\n--- a/pyqtgraph/graphicsItems/FillBetweenItem.py\n+++ b/pyqtgraph/graphicsItems/FillBetweenItem.py\n@@ -23,7 +23,7 @@\n self.updatePath()\n \n def setBrush(self, *args, **kwds):\n- \"\"\"Change the fill brush. Acceps the same arguments as pg.mkBrush()\"\"\"\n+ \"\"\"Change the fill brush. Accepts the same arguments as pg.mkBrush()\"\"\"\n QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))\n \n def setPen(self, *args, **kwds):\n@@ -55,7 +55,6 @@\n \n def curveChanged(self):\n self.updatePath()\n-\n def updatePath(self):\n if self.curves is None:\n self.setPath(QtGui.QPainterPath())\n@@ -69,14 +68,18 @@\n \n path = QtGui.QPainterPath()\n transform = QtGui.QTransform()\n+\n ps1 = paths[0].toSubpathPolygons(transform)\n ps2 = paths[1].toReversed().toSubpathPolygons(transform)\n ps2.reverse()\n+\n if len(ps1) == 0 or len(ps2) == 0:\n self.setPath(QtGui.QPainterPath())\n return\n \n- \n for p1, p2 in zip(ps1, ps2):\n- path.addPolygon(p1 + p2)\n+ intersection = p1.intersected(p2)\n+ if not intersection.isEmpty():\n+ path.addPolygon(intersection)\n+ path.addPolygon(p1 + p2) \n self.setPath(path)\n", "issue": "FillBetweenItem has no way to change FillRule\n<!-- In the following, please describe your issue in detail! -->\r\n<!-- If some of the sections do not apply, just remove them. -->\r\n\r\n### Short description\r\nThere is currently no way (at least that I have found) to change the fillrule for the painterpath in the FillBetweenItem. being able to set it to winding would be very useful for certain cases.\r\n\r\n### Code to reproduce\r\n<!-- Please provide a minimal working example that reproduces the issue in the code block below.\r\n Ideally, this should be a full example someone else could run without additional setup. 
-->\r\n```python\r\nimport pyqtgraph as pg\r\nfrom PySide2.QtWidgets import QApplication\r\n\r\nwin = pg.plot()\r\nwin.setWindowTitle('pyqtgraph example: FillBetweenItem')\r\nwin.setXRange(0, 1.5)\r\nwin.setYRange(0, 1.5)\r\n\r\nx1=[0,1,1,0,0]\r\ny1=[0,0,1,1,0]\r\nx2=[0.5,1.5,1.5,0.5,0.5]\r\ny2=[0.5,0.5,1.5,1.5,0.5]\r\ncurve1 = win.plot(x=x1, y=y1, pen='k')\r\ncurve2 = win.plot(x=x2, y=y2, pen='k')\r\nbrushes = [0.5, (100, 100, 255), 0.5]\r\nfill = pg.FillBetweenItem(curve1, curve2,brush=(100,100,255))\r\nwin.addItem(fill)\r\n\r\n## Start Qt event loop unless running in interactive mode or using pyside.\r\nif __name__ == '__main__':\r\n QApplication.instance().exec_()\r\n```\r\n\r\n### Expected behavior\r\nFill in the overlap\r\n\r\n### Real behavior\r\nHole in the middle.\r\n\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 0.12.1\r\n * Qt Python binding: PyQt5 5.15.4 Qt 5.15.2\r\n * Python version: 3.7.7\r\n * NumPy version: 1.20.2\r\n * Operating system: Windows 10\r\n * Installation method: PIP\n", "before_files": [{"content": "\"\"\"\nDemonstrates use of FillBetweenItem to fill the space between two plot curves.\n\"\"\"\n\nimport numpy as np\n\nimport pyqtgraph as pg\nfrom pyqtgraph.Qt import QtCore\n\n#FIXME: When running on Qt5, not as perfect as on Qt4\n\nwin = pg.plot()\nwin.setWindowTitle('pyqtgraph example: FillBetweenItem')\nwin.setXRange(-10, 10)\nwin.setYRange(-10, 10)\n\nN = 200\nx = np.linspace(-10, 10, N)\ngauss = np.exp(-x**2 / 20.)\nmn = mx = np.zeros(len(x))\ncurves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]\nbrushes = [0.5, (100, 100, 255), 0.5]\nfills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]\nfor f in fills:\n win.addItem(f)\n\ndef update():\n global mx, mn, curves, gauss, x\n a = 5 / abs(np.random.normal(loc=1, scale=0.2))\n y1 = -np.abs(a*gauss + np.random.normal(size=len(x)))\n y2 = np.abs(a*gauss + np.random.normal(size=len(x)))\n \n s = 0.01\n mn = np.where(y1<mn, y1, mn) * (1-s) + y1 * s\n mx = np.where(y2>mx, y2, mx) * (1-s) + y2 * s\n curves[0].setData(x, mn)\n curves[1].setData(x, y1)\n curves[2].setData(x, y2)\n curves[3].setData(x, mx)\n \n\ntimer = QtCore.QTimer()\ntimer.timeout.connect(update)\ntimer.start(30)\n\n\nif __name__ == '__main__':\n pg.exec()\n", "path": "pyqtgraph/examples/FillBetweenItem.py"}, {"content": "from .. import functions as fn\nfrom ..Qt import QtGui, QtWidgets\nfrom .PlotCurveItem import PlotCurveItem\nfrom .PlotDataItem import PlotDataItem\n\n__all__ = ['FillBetweenItem']\n\nclass FillBetweenItem(QtWidgets.QGraphicsPathItem):\n \"\"\"\n GraphicsItem filling the space between two PlotDataItems.\n \"\"\"\n def __init__(self, curve1=None, curve2=None, brush=None, pen=None):\n QtWidgets.QGraphicsPathItem.__init__(self)\n self.curves = None\n if curve1 is not None and curve2 is not None:\n self.setCurves(curve1, curve2)\n elif curve1 is not None or curve2 is not None:\n raise Exception(\"Must specify two curves to fill between.\")\n\n if brush is not None:\n self.setBrush(brush)\n self.setPen(pen)\n self.updatePath()\n \n def setBrush(self, *args, **kwds):\n \"\"\"Change the fill brush. 
Acceps the same arguments as pg.mkBrush()\"\"\"\n QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))\n \n def setPen(self, *args, **kwds):\n QtWidgets.QGraphicsPathItem.setPen(self, fn.mkPen(*args, **kwds))\n\n def setCurves(self, curve1, curve2):\n \"\"\"Set the curves to fill between.\n \n Arguments must be instances of PlotDataItem or PlotCurveItem.\n \n Added in version 0.9.9\n \"\"\"\n if self.curves is not None:\n for c in self.curves:\n try:\n c.sigPlotChanged.disconnect(self.curveChanged)\n except (TypeError, RuntimeError):\n pass\n\n curves = [curve1, curve2]\n for c in curves:\n if not isinstance(c, PlotDataItem) and not isinstance(c, PlotCurveItem):\n raise TypeError(\"Curves must be PlotDataItem or PlotCurveItem.\")\n self.curves = curves\n curve1.sigPlotChanged.connect(self.curveChanged)\n curve2.sigPlotChanged.connect(self.curveChanged)\n self.setZValue(min(curve1.zValue(), curve2.zValue())-1)\n self.curveChanged()\n\n def curveChanged(self):\n self.updatePath()\n\n def updatePath(self):\n if self.curves is None:\n self.setPath(QtGui.QPainterPath())\n return\n paths = []\n for c in self.curves:\n if isinstance(c, PlotDataItem):\n paths.append(c.curve.getPath())\n elif isinstance(c, PlotCurveItem):\n paths.append(c.getPath())\n\n path = QtGui.QPainterPath()\n transform = QtGui.QTransform()\n ps1 = paths[0].toSubpathPolygons(transform)\n ps2 = paths[1].toReversed().toSubpathPolygons(transform)\n ps2.reverse()\n if len(ps1) == 0 or len(ps2) == 0:\n self.setPath(QtGui.QPainterPath())\n return\n \n \n for p1, p2 in zip(ps1, ps2):\n path.addPolygon(p1 + p2)\n self.setPath(path)\n", "path": "pyqtgraph/graphicsItems/FillBetweenItem.py"}]}
| 2,418 | 582 |
gh_patches_debug_36712
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-4769
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DNS Resolution in Expressions
**Is your feature request related to a problem? Please describe.**
I would like to be able to resolve a hostname in expressions. In particular, my docker container resolves `host.docker.internal` to the network address of the machine the container is running on. I would like to be able to check for requests from this hostname, but there is no way to detect this in expressions.
**Describe the solution you'd like**
I would like a function to be exposed in expressions that resolves a hostname to an address or list of addresses. Such a function could be implemented in `authentik/lib/expression/evaluator.py` like this:
```py
def expr_resolve_host(host: str) -> List[str]:
"""Resolve a hostname to a list of IP addresses."""
return [
sockaddr[0]
for family, type, proto, canonname, sockaddr in
socket.getaddrinfo(host, None)
]
```
**Describe alternatives you've considered**
It would be possible for me to set up the docker network statically, and then hardcode the address, but this is fragile, and I would prefer for Docker to manage its network allocation.
**Additional context**
I currently have an expression that checks for requests from my local/trusted networks. This works well, but if I access Authentik directly from the docker host, the source address is the "host gateway" address of the docker container's network, which is not in my trusted networks. (The host address of the docker internal network.) I can imagine this functionality would also be useful for getting the addresses of other containers the docker container is connected to.
At my scale and use case, it doesn't really matter, but it may be worth caching this information for larger installations. I'm not sure if `socket.getaddrinfo` does any caching of its own, but if it decides to do real DNS resolution it could be quite slow. If an expression is getting hit tens or hundreds of times a minute this could be a serious issue. (This could alternatively be resolved with a local DNS caching server on the host.)
Any caching should be conservatively short, at most 60 seconds or so, since a change in address could also cause serious problems. A timeout may also be in order, since some use cases would prefer a fast empty response over an always correct answer. Ideally, these would be configurable with function parameters so users can determine their own caching and timeout needs.
</issue>
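As a self-contained version of the resolver suggested in the issue body (this mirrors the requester's proposal rather than any API authentik ships; returning an empty list on lookup errors is an added assumption):

```python
import socket
from typing import List


def expr_resolve_host(host: str) -> List[str]:
    """Resolve a hostname to a de-duplicated, sorted list of IP addresses."""
    try:
        infos = socket.getaddrinfo(host, None)
    except OSError:
        return []
    # Each entry is (family, type, proto, canonname, sockaddr); for both IPv4
    # and IPv6 the address string is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})


print(expr_resolve_host("localhost"))  # e.g. ['127.0.0.1', '::1']
```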
<code>
[start of authentik/lib/expression/evaluator.py]
1 """authentik expression policy evaluator"""
2 import re
3 from ipaddress import ip_address, ip_network
4 from textwrap import indent
5 from typing import Any, Iterable, Optional
6
7 from django.core.exceptions import FieldError
8 from django_otp import devices_for_user
9 from rest_framework.serializers import ValidationError
10 from sentry_sdk.hub import Hub
11 from sentry_sdk.tracing import Span
12 from structlog.stdlib import get_logger
13
14 from authentik.core.models import User
15 from authentik.events.models import Event
16 from authentik.lib.utils.http import get_http_session
17 from authentik.policies.types import PolicyRequest
18
19 LOGGER = get_logger()
20
21
22 class BaseEvaluator:
23 """Validate and evaluate python-based expressions"""
24
25 # Globals that can be used by function
26 _globals: dict[str, Any]
27 # Context passed as locals to exec()
28 _context: dict[str, Any]
29
30 # Filename used for exec
31 _filename: str
32
33 def __init__(self, filename: Optional[str] = None):
34 self._filename = filename if filename else "BaseEvaluator"
35 # update website/docs/expressions/_objects.md
36 # update website/docs/expressions/_functions.md
37 self._globals = {
38 "regex_match": BaseEvaluator.expr_regex_match,
39 "regex_replace": BaseEvaluator.expr_regex_replace,
40 "list_flatten": BaseEvaluator.expr_flatten,
41 "ak_is_group_member": BaseEvaluator.expr_is_group_member,
42 "ak_user_by": BaseEvaluator.expr_user_by,
43 "ak_user_has_authenticator": BaseEvaluator.expr_func_user_has_authenticator,
44 "ak_create_event": self.expr_event_create,
45 "ak_logger": get_logger(self._filename).bind(),
46 "requests": get_http_session(),
47 "ip_address": ip_address,
48 "ip_network": ip_network,
49 }
50 self._context = {}
51
52 @staticmethod
53 def expr_flatten(value: list[Any] | Any) -> Optional[Any]:
54 """Flatten `value` if its a list"""
55 if isinstance(value, list):
56 if len(value) < 1:
57 return None
58 return value[0]
59 return value
60
61 @staticmethod
62 def expr_regex_match(value: Any, regex: str) -> bool:
63 """Expression Filter to run re.search"""
64 return re.search(regex, value) is not None
65
66 @staticmethod
67 def expr_regex_replace(value: Any, regex: str, repl: str) -> str:
68 """Expression Filter to run re.sub"""
69 return re.sub(regex, repl, value)
70
71 @staticmethod
72 def expr_is_group_member(user: User, **group_filters) -> bool:
73 """Check if `user` is member of group with name `group_name`"""
74 return user.ak_groups.filter(**group_filters).exists()
75
76 @staticmethod
77 def expr_user_by(**filters) -> Optional[User]:
78 """Get user by filters"""
79 try:
80 users = User.objects.filter(**filters)
81 if users:
82 return users.first()
83 return None
84 except FieldError:
85 return None
86
87 @staticmethod
88 def expr_func_user_has_authenticator(user: User, device_type: Optional[str] = None) -> bool:
89 """Check if a user has any authenticator devices, optionally matching *device_type*"""
90 user_devices = devices_for_user(user)
91 if device_type:
92 for device in user_devices:
93 device_class = device.__class__.__name__.lower().replace("device", "")
94 if device_class == device_type:
95 return True
96 return False
97 return len(list(user_devices)) > 0
98
99 def expr_event_create(self, action: str, **kwargs):
100 """Create event with supplied data and try to extract as much relevant data
101 from the context"""
102 # If the result was a complex variable, we don't want to re-use it
103 self._context.pop("result", None)
104 self._context.pop("handler", None)
105 kwargs["context"] = self._context
106 event = Event.new(
107 action,
108 app=self._filename,
109 **kwargs,
110 )
111 if "request" in self._context and isinstance(self._context["request"], PolicyRequest):
112 policy_request: PolicyRequest = self._context["request"]
113 if policy_request.http_request:
114 event.from_http(policy_request)
115 return
116 event.save()
117
118 def wrap_expression(self, expression: str, params: Iterable[str]) -> str:
119 """Wrap expression in a function, call it, and save the result as `result`"""
120 handler_signature = ",".join(params)
121 full_expression = ""
122 full_expression += f"def handler({handler_signature}):\n"
123 full_expression += indent(expression, " ")
124 full_expression += f"\nresult = handler({handler_signature})"
125 return full_expression
126
127 def evaluate(self, expression_source: str) -> Any:
128 """Parse and evaluate expression. If the syntax is incorrect, a SyntaxError is raised.
129 If any exception is raised during execution, it is raised.
130 The result is returned without any type-checking."""
131 with Hub.current.start_span(op="authentik.lib.evaluator.evaluate") as span:
132 span: Span
133 span.description = self._filename
134 span.set_data("expression", expression_source)
135 param_keys = self._context.keys()
136 try:
137 ast_obj = compile(
138 self.wrap_expression(expression_source, param_keys),
139 self._filename,
140 "exec",
141 )
142 except (SyntaxError, ValueError) as exc:
143 self.handle_error(exc, expression_source)
144 raise exc
145 try:
146 _locals = self._context
147 # Yes this is an exec, yes it is potentially bad. Since we limit what variables are
148 # available here, and these policies can only be edited by admins, this is a risk
149 # we're willing to take.
150 # pylint: disable=exec-used
151 exec(ast_obj, self._globals, _locals) # nosec # noqa
152 result = _locals["result"]
153 except Exception as exc:
154 # So, this is a bit questionable. Essentially, we are edit the stacktrace
155 # so the user only sees information relevant to them
156 # and none of our surrounding error handling
157 exc.__traceback__ = exc.__traceback__.tb_next
158 self.handle_error(exc, expression_source)
159 raise exc
160 return result
161
162 def handle_error(self, exc: Exception, expression_source: str): # pragma: no cover
163 """Exception Handler"""
164 LOGGER.warning("Expression error", exc=exc)
165
166 def validate(self, expression: str) -> bool:
167 """Validate expression's syntax, raise ValidationError if Syntax is invalid"""
168 param_keys = self._context.keys()
169 try:
170 compile(
171 self.wrap_expression(expression, param_keys),
172 self._filename,
173 "exec",
174 )
175 return True
176 except (ValueError, SyntaxError) as exc:
177 raise ValidationError(f"Expression Syntax Error: {str(exc)}") from exc
178
[end of authentik/lib/expression/evaluator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/lib/expression/evaluator.py b/authentik/lib/expression/evaluator.py
--- a/authentik/lib/expression/evaluator.py
+++ b/authentik/lib/expression/evaluator.py
@@ -1,9 +1,11 @@
"""authentik expression policy evaluator"""
import re
+import socket
from ipaddress import ip_address, ip_network
from textwrap import indent
from typing import Any, Iterable, Optional
+from cachetools import TLRUCache, cached
from django.core.exceptions import FieldError
from django_otp import devices_for_user
from rest_framework.serializers import ValidationError
@@ -41,6 +43,8 @@
"ak_is_group_member": BaseEvaluator.expr_is_group_member,
"ak_user_by": BaseEvaluator.expr_user_by,
"ak_user_has_authenticator": BaseEvaluator.expr_func_user_has_authenticator,
+ "resolve_dns": BaseEvaluator.expr_resolve_dns,
+ "reverse_dns": BaseEvaluator.expr_reverse_dns,
"ak_create_event": self.expr_event_create,
"ak_logger": get_logger(self._filename).bind(),
"requests": get_http_session(),
@@ -49,6 +53,39 @@
}
self._context = {}
+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))
+ @staticmethod
+ def expr_resolve_dns(host: str, ip_version: Optional[int] = None) -> list[str]:
+ """Resolve host to a list of IPv4 and/or IPv6 addresses."""
+ # Although it seems to be fine (raising OSError), docs warn
+ # against passing `None` for both the host and the port
+ # https://docs.python.org/3/library/socket.html#socket.getaddrinfo
+ host = host or ""
+
+ ip_list = []
+
+ family = 0
+ if ip_version == 4:
+ family = socket.AF_INET
+ if ip_version == 6:
+ family = socket.AF_INET6
+
+ try:
+ for ip_addr in socket.getaddrinfo(host, None, family=family):
+ ip_list.append(str(ip_addr[4][0]))
+ except OSError:
+ pass
+ return list(set(ip_list))
+
+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))
+ @staticmethod
+ def expr_reverse_dns(ip_addr: str) -> str:
+ """Perform a reverse DNS lookup."""
+ try:
+ return socket.getfqdn(ip_addr)
+ except OSError:
+ return ip_addr
+
@staticmethod
def expr_flatten(value: list[Any] | Any) -> Optional[Any]:
"""Flatten `value` if its a list"""
|
{"golden_diff": "diff --git a/authentik/lib/expression/evaluator.py b/authentik/lib/expression/evaluator.py\n--- a/authentik/lib/expression/evaluator.py\n+++ b/authentik/lib/expression/evaluator.py\n@@ -1,9 +1,11 @@\n \"\"\"authentik expression policy evaluator\"\"\"\n import re\n+import socket\n from ipaddress import ip_address, ip_network\n from textwrap import indent\n from typing import Any, Iterable, Optional\n \n+from cachetools import TLRUCache, cached\n from django.core.exceptions import FieldError\n from django_otp import devices_for_user\n from rest_framework.serializers import ValidationError\n@@ -41,6 +43,8 @@\n \"ak_is_group_member\": BaseEvaluator.expr_is_group_member,\n \"ak_user_by\": BaseEvaluator.expr_user_by,\n \"ak_user_has_authenticator\": BaseEvaluator.expr_func_user_has_authenticator,\n+ \"resolve_dns\": BaseEvaluator.expr_resolve_dns,\n+ \"reverse_dns\": BaseEvaluator.expr_reverse_dns,\n \"ak_create_event\": self.expr_event_create,\n \"ak_logger\": get_logger(self._filename).bind(),\n \"requests\": get_http_session(),\n@@ -49,6 +53,39 @@\n }\n self._context = {}\n \n+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))\n+ @staticmethod\n+ def expr_resolve_dns(host: str, ip_version: Optional[int] = None) -> list[str]:\n+ \"\"\"Resolve host to a list of IPv4 and/or IPv6 addresses.\"\"\"\n+ # Although it seems to be fine (raising OSError), docs warn\n+ # against passing `None` for both the host and the port\n+ # https://docs.python.org/3/library/socket.html#socket.getaddrinfo\n+ host = host or \"\"\n+\n+ ip_list = []\n+\n+ family = 0\n+ if ip_version == 4:\n+ family = socket.AF_INET\n+ if ip_version == 6:\n+ family = socket.AF_INET6\n+\n+ try:\n+ for ip_addr in socket.getaddrinfo(host, None, family=family):\n+ ip_list.append(str(ip_addr[4][0]))\n+ except OSError:\n+ pass\n+ return list(set(ip_list))\n+\n+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))\n+ @staticmethod\n+ def expr_reverse_dns(ip_addr: str) -> str:\n+ \"\"\"Perform a reverse DNS lookup.\"\"\"\n+ try:\n+ return socket.getfqdn(ip_addr)\n+ except OSError:\n+ return ip_addr\n+\n @staticmethod\n def expr_flatten(value: list[Any] | Any) -> Optional[Any]:\n \"\"\"Flatten `value` if its a list\"\"\"\n", "issue": "DNS Resolution in Expressions\n**Is your feature request related to a problem? Please describe.**\r\nI would like to be able to resolve a hostname in expressions. In particular, my docker container resolves `host.docker.internal` to network address of the machine the container is running on. I would like to be able to check for requests from this hostname, but there is no way to detect this in expressions.\r\n\r\n**Describe the solution you'd like**\r\nI would like a function to be exposed in expressions that resolves a hostname to an address or list of addresses. 
Such a function could be implemented in `authentik/lib/expression/evaluator.py` like this:\r\n```py\r\n def expr_resolve_host(host: str) -> List[str]:\r\n \"\"\"Resolve a hostname to a list of IP addresses.\"\"\"\r\n \r\n return [\r\n sockaddr[0]\r\n for family, type, proto, canonname, sockaddr in \r\n socket.getaddrinfo(host, None)\r\n ]\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nIt would be possible for me to set up the docker network statically, and then hardcode the address, but this is fragile, and I would prefer for Docker to manage its network allocation.\r\n\r\n**Additional context**\r\nI currently have an expression that checks for requests for my local/trusted networks. This works well, but if I access Authentik directly from the docker host, the source address is the \"host gateway\" address of the docker container's network, which is not in my trusted networks. (The host address of the docker internal network.) I can imagine this functionality would also be useful for getting the addresses other containers the docker container is connected to.\r\n\r\nAt my scale and use case, it doesn't really matter, but it may be worth caching this information for larger installations. I'm not sure if `socket.getaddrinfo` does any caching of it's own, but if it decides to do real DNS resolution it could be quite slow. If an expression is getting hit tens or hundreds of times a minute this could be a serious issue. (This could alternatively be resolved with a local DNS caching server on the host.)\r\n\r\nAny caching should be conservatively short, at most 60 seconds or so, since an change in address could also cause serious problems. A timeout may also be in order, since some use cases would prefer a fast empty response over an always correct answer. 
Ideally, these would be configurable with function parameters so users can determine their own caching and timeout needs.\n", "before_files": [{"content": "\"\"\"authentik expression policy evaluator\"\"\"\nimport re\nfrom ipaddress import ip_address, ip_network\nfrom textwrap import indent\nfrom typing import Any, Iterable, Optional\n\nfrom django.core.exceptions import FieldError\nfrom django_otp import devices_for_user\nfrom rest_framework.serializers import ValidationError\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.tracing import Span\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import User\nfrom authentik.events.models import Event\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.policies.types import PolicyRequest\n\nLOGGER = get_logger()\n\n\nclass BaseEvaluator:\n \"\"\"Validate and evaluate python-based expressions\"\"\"\n\n # Globals that can be used by function\n _globals: dict[str, Any]\n # Context passed as locals to exec()\n _context: dict[str, Any]\n\n # Filename used for exec\n _filename: str\n\n def __init__(self, filename: Optional[str] = None):\n self._filename = filename if filename else \"BaseEvaluator\"\n # update website/docs/expressions/_objects.md\n # update website/docs/expressions/_functions.md\n self._globals = {\n \"regex_match\": BaseEvaluator.expr_regex_match,\n \"regex_replace\": BaseEvaluator.expr_regex_replace,\n \"list_flatten\": BaseEvaluator.expr_flatten,\n \"ak_is_group_member\": BaseEvaluator.expr_is_group_member,\n \"ak_user_by\": BaseEvaluator.expr_user_by,\n \"ak_user_has_authenticator\": BaseEvaluator.expr_func_user_has_authenticator,\n \"ak_create_event\": self.expr_event_create,\n \"ak_logger\": get_logger(self._filename).bind(),\n \"requests\": get_http_session(),\n \"ip_address\": ip_address,\n \"ip_network\": ip_network,\n }\n self._context = {}\n\n @staticmethod\n def expr_flatten(value: list[Any] | Any) -> Optional[Any]:\n \"\"\"Flatten `value` if its a list\"\"\"\n if isinstance(value, list):\n if len(value) < 1:\n return None\n return value[0]\n return value\n\n @staticmethod\n def expr_regex_match(value: Any, regex: str) -> bool:\n \"\"\"Expression Filter to run re.search\"\"\"\n return re.search(regex, value) is not None\n\n @staticmethod\n def expr_regex_replace(value: Any, regex: str, repl: str) -> str:\n \"\"\"Expression Filter to run re.sub\"\"\"\n return re.sub(regex, repl, value)\n\n @staticmethod\n def expr_is_group_member(user: User, **group_filters) -> bool:\n \"\"\"Check if `user` is member of group with name `group_name`\"\"\"\n return user.ak_groups.filter(**group_filters).exists()\n\n @staticmethod\n def expr_user_by(**filters) -> Optional[User]:\n \"\"\"Get user by filters\"\"\"\n try:\n users = User.objects.filter(**filters)\n if users:\n return users.first()\n return None\n except FieldError:\n return None\n\n @staticmethod\n def expr_func_user_has_authenticator(user: User, device_type: Optional[str] = None) -> bool:\n \"\"\"Check if a user has any authenticator devices, optionally matching *device_type*\"\"\"\n user_devices = devices_for_user(user)\n if device_type:\n for device in user_devices:\n device_class = device.__class__.__name__.lower().replace(\"device\", \"\")\n if device_class == device_type:\n return True\n return False\n return len(list(user_devices)) > 0\n\n def expr_event_create(self, action: str, **kwargs):\n \"\"\"Create event with supplied data and try to extract as much relevant data\n from the context\"\"\"\n # If the result was a complex 
variable, we don't want to re-use it\n self._context.pop(\"result\", None)\n self._context.pop(\"handler\", None)\n kwargs[\"context\"] = self._context\n event = Event.new(\n action,\n app=self._filename,\n **kwargs,\n )\n if \"request\" in self._context and isinstance(self._context[\"request\"], PolicyRequest):\n policy_request: PolicyRequest = self._context[\"request\"]\n if policy_request.http_request:\n event.from_http(policy_request)\n return\n event.save()\n\n def wrap_expression(self, expression: str, params: Iterable[str]) -> str:\n \"\"\"Wrap expression in a function, call it, and save the result as `result`\"\"\"\n handler_signature = \",\".join(params)\n full_expression = \"\"\n full_expression += f\"def handler({handler_signature}):\\n\"\n full_expression += indent(expression, \" \")\n full_expression += f\"\\nresult = handler({handler_signature})\"\n return full_expression\n\n def evaluate(self, expression_source: str) -> Any:\n \"\"\"Parse and evaluate expression. If the syntax is incorrect, a SyntaxError is raised.\n If any exception is raised during execution, it is raised.\n The result is returned without any type-checking.\"\"\"\n with Hub.current.start_span(op=\"authentik.lib.evaluator.evaluate\") as span:\n span: Span\n span.description = self._filename\n span.set_data(\"expression\", expression_source)\n param_keys = self._context.keys()\n try:\n ast_obj = compile(\n self.wrap_expression(expression_source, param_keys),\n self._filename,\n \"exec\",\n )\n except (SyntaxError, ValueError) as exc:\n self.handle_error(exc, expression_source)\n raise exc\n try:\n _locals = self._context\n # Yes this is an exec, yes it is potentially bad. Since we limit what variables are\n # available here, and these policies can only be edited by admins, this is a risk\n # we're willing to take.\n # pylint: disable=exec-used\n exec(ast_obj, self._globals, _locals) # nosec # noqa\n result = _locals[\"result\"]\n except Exception as exc:\n # So, this is a bit questionable. Essentially, we are edit the stacktrace\n # so the user only sees information relevant to them\n # and none of our surrounding error handling\n exc.__traceback__ = exc.__traceback__.tb_next\n self.handle_error(exc, expression_source)\n raise exc\n return result\n\n def handle_error(self, exc: Exception, expression_source: str): # pragma: no cover\n \"\"\"Exception Handler\"\"\"\n LOGGER.warning(\"Expression error\", exc=exc)\n\n def validate(self, expression: str) -> bool:\n \"\"\"Validate expression's syntax, raise ValidationError if Syntax is invalid\"\"\"\n param_keys = self._context.keys()\n try:\n compile(\n self.wrap_expression(expression, param_keys),\n self._filename,\n \"exec\",\n )\n return True\n except (ValueError, SyntaxError) as exc:\n raise ValidationError(f\"Expression Syntax Error: {str(exc)}\") from exc\n", "path": "authentik/lib/expression/evaluator.py"}]}
| 2,975 | 632 |
gh_patches_debug_6981
|
rasdani/github-patches
|
git_diff
|
celery__celery-5752
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DatabaseBackend._update_result() uses a wrong property name.
python 3.7
celery 4.4.0rc3
The stored result has a NULL value for the name in my backend (MySQL), but it works well when I use Redis as my backend.
After I changed this in `backends/database/__init__.py` [135], altering 'task_name' to 'task', I get the correct task_name.
The 'name' in `backends/base.py` [706,717]
```
if self.app.conf.find_value_for_key('extended', 'result'):
if request:
request_meta = {
-> 'name': getattr(request, 'task', None),
'args': getattr(request, 'args', None),
'kwargs': getattr(request, 'kwargs', None),
'worker': getattr(request, 'hostname', None),
'retries': getattr(request, 'retries', None),
'queue': request.delivery_info.get('routing_key')
if hasattr(request, 'delivery_info') and
request.delivery_info else None
}
```
The 'name' in `backends/database/__init__.py` [129,148]
```
def _update_result(self, task, result, state, traceback=None,
request=None):
task.result = result
task.status = state
task.traceback = traceback
if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task_name', None)
+ task.name = getattr(request, 'task', None)
task.args = ensure_bytes(
self.encode(getattr(request, 'args', None))
)
task.kwargs = ensure_bytes(
self.encode(getattr(request, 'kwargs', None))
)
task.worker = getattr(request, 'hostname', None)
task.retries = getattr(request, 'retries', None)
task.queue = (
request.delivery_info.get("routing_key")
if hasattr(request, "delivery_info") and request.delivery_info
else None
)
```
</issue>
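The failure mode described here comes down to `getattr` with a default silently masking a mismatched attribute name: the `base.py` excerpt above reads the task name from the request's `task` attribute, so when the database backend asks for `task_name` instead, `getattr` quietly returns `None` and MySQL stores NULL. A minimal illustration (attribute values invented for the sketch):

```python
from types import SimpleNamespace

# Roughly what the request object carries: the task name is exposed as `task`.
request = SimpleNamespace(task="proj.tasks.add", hostname="worker1@host", retries=0)

print(getattr(request, "task_name", None))  # None -> written to the DB as NULL
print(getattr(request, "task", None))       # 'proj.tasks.add'
```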
<code>
[start of celery/backends/database/__init__.py]
1 # -*- coding: utf-8 -*-
2 """SQLAlchemy result store backend."""
3 from __future__ import absolute_import, unicode_literals
4
5 import logging
6 from contextlib import contextmanager
7
8 from kombu.utils.encoding import ensure_bytes
9 from vine.utils import wraps
10
11 from celery import states
12 from celery.backends.base import BaseBackend
13 from celery.exceptions import ImproperlyConfigured
14 from celery.five import range
15 from celery.utils.time import maybe_timedelta
16
17 from .models import Task, TaskExtended, TaskSet
18 from .session import SessionManager
19
20 try:
21 from sqlalchemy.exc import DatabaseError, InvalidRequestError
22 from sqlalchemy.orm.exc import StaleDataError
23 except ImportError: # pragma: no cover
24 raise ImproperlyConfigured(
25 'The database result backend requires SQLAlchemy to be installed.'
26 'See https://pypi.org/project/SQLAlchemy/')
27
28 logger = logging.getLogger(__name__)
29
30 __all__ = ('DatabaseBackend',)
31
32
33 @contextmanager
34 def session_cleanup(session):
35 try:
36 yield
37 except Exception:
38 session.rollback()
39 raise
40 finally:
41 session.close()
42
43
44 def retry(fun):
45
46 @wraps(fun)
47 def _inner(*args, **kwargs):
48 max_retries = kwargs.pop('max_retries', 3)
49
50 for retries in range(max_retries):
51 try:
52 return fun(*args, **kwargs)
53 except (DatabaseError, InvalidRequestError, StaleDataError):
54 logger.warning(
55 'Failed operation %s. Retrying %s more times.',
56 fun.__name__, max_retries - retries - 1,
57 exc_info=True)
58 if retries + 1 >= max_retries:
59 raise
60
61 return _inner
62
63
64 class DatabaseBackend(BaseBackend):
65 """The database result backend."""
66
67 # ResultSet.iterate should sleep this much between each pool,
68 # to not bombard the database with queries.
69 subpolling_interval = 0.5
70
71 task_cls = Task
72 taskset_cls = TaskSet
73
74 def __init__(self, dburi=None, engine_options=None, url=None, **kwargs):
75 # The `url` argument was added later and is used by
76 # the app to set backend by url (celery.app.backends.by_url)
77 super(DatabaseBackend, self).__init__(expires_type=maybe_timedelta,
78 url=url, **kwargs)
79 conf = self.app.conf
80
81 if self.extended_result:
82 self.task_cls = TaskExtended
83
84 self.url = url or dburi or conf.database_url
85 self.engine_options = dict(
86 engine_options or {},
87 **conf.database_engine_options or {})
88 self.short_lived_sessions = kwargs.get(
89 'short_lived_sessions',
90 conf.database_short_lived_sessions)
91
92 tablenames = conf.database_table_names or {}
93 self.task_cls.__table__.name = tablenames.get('task',
94 'celery_taskmeta')
95 self.taskset_cls.__table__.name = tablenames.get('group',
96 'celery_tasksetmeta')
97
98 if not self.url:
99 raise ImproperlyConfigured(
100 'Missing connection string! Do you have the'
101 ' database_url setting set to a real value?')
102
103 @property
104 def extended_result(self):
105 return self.app.conf.find_value_for_key('extended', 'result')
106
107 def ResultSession(self, session_manager=SessionManager()):
108 return session_manager.session_factory(
109 dburi=self.url,
110 short_lived_sessions=self.short_lived_sessions,
111 **self.engine_options)
112
113 @retry
114 def _store_result(self, task_id, result, state, traceback=None,
115 request=None, **kwargs):
116 """Store return value and state of an executed task."""
117 session = self.ResultSession()
118 with session_cleanup(session):
119 task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
120 task = task and task[0]
121 if not task:
122 task = self.task_cls(task_id)
123 session.add(task)
124 session.flush()
125
126 self._update_result(task, result, state, traceback=traceback, request=request)
127 session.commit()
128
129 def _update_result(self, task, result, state, traceback=None,
130 request=None):
131 task.result = result
132 task.status = state
133 task.traceback = traceback
134 if self.app.conf.find_value_for_key('extended', 'result'):
135 task.name = getattr(request, 'task_name', None)
136 task.args = ensure_bytes(
137 self.encode(getattr(request, 'args', None))
138 )
139 task.kwargs = ensure_bytes(
140 self.encode(getattr(request, 'kwargs', None))
141 )
142 task.worker = getattr(request, 'hostname', None)
143 task.retries = getattr(request, 'retries', None)
144 task.queue = (
145 request.delivery_info.get("routing_key")
146 if hasattr(request, "delivery_info") and request.delivery_info
147 else None
148 )
149
150 @retry
151 def _get_task_meta_for(self, task_id):
152 """Get task meta-data for a task by id."""
153 session = self.ResultSession()
154 with session_cleanup(session):
155 task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
156 task = task and task[0]
157 if not task:
158 task = self.task_cls(task_id)
159 task.status = states.PENDING
160 task.result = None
161 data = task.to_dict()
162 if 'args' in data:
163 data['args'] = self.decode(data['args'])
164 if 'kwargs' in data:
165 data['kwargs'] = self.decode(data['kwargs'])
166 return self.meta_from_decoded(data)
167
168 @retry
169 def _save_group(self, group_id, result):
170 """Store the result of an executed group."""
171 session = self.ResultSession()
172 with session_cleanup(session):
173 group = self.taskset_cls(group_id, result)
174 session.add(group)
175 session.flush()
176 session.commit()
177 return result
178
179 @retry
180 def _restore_group(self, group_id):
181 """Get meta-data for group by id."""
182 session = self.ResultSession()
183 with session_cleanup(session):
184 group = session.query(self.taskset_cls).filter(
185 self.taskset_cls.taskset_id == group_id).first()
186 if group:
187 return group.to_dict()
188
189 @retry
190 def _delete_group(self, group_id):
191 """Delete meta-data for group by id."""
192 session = self.ResultSession()
193 with session_cleanup(session):
194 session.query(self.taskset_cls).filter(
195 self.taskset_cls.taskset_id == group_id).delete()
196 session.flush()
197 session.commit()
198
199 @retry
200 def _forget(self, task_id):
201 """Forget about result."""
202 session = self.ResultSession()
203 with session_cleanup(session):
204 session.query(self.task_cls).filter(self.task_cls.task_id == task_id).delete()
205 session.commit()
206
207 def cleanup(self):
208 """Delete expired meta-data."""
209 session = self.ResultSession()
210 expires = self.expires
211 now = self.app.now()
212 with session_cleanup(session):
213 session.query(self.task_cls).filter(
214 self.task_cls.date_done < (now - expires)).delete()
215 session.query(self.taskset_cls).filter(
216 self.taskset_cls.date_done < (now - expires)).delete()
217 session.commit()
218
219 def __reduce__(self, args=(), kwargs=None):
220 kwargs = {} if not kwargs else kwargs
221 kwargs.update(
222 {'dburi': self.url,
223 'expires': self.expires,
224 'engine_options': self.engine_options})
225 return super(DatabaseBackend, self).__reduce__(args, kwargs)
226
[end of celery/backends/database/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/celery/backends/database/__init__.py b/celery/backends/database/__init__.py
--- a/celery/backends/database/__init__.py
+++ b/celery/backends/database/__init__.py
@@ -132,7 +132,7 @@
task.status = state
task.traceback = traceback
if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task_name', None)
+ task.name = getattr(request, 'task', None)
task.args = ensure_bytes(
self.encode(getattr(request, 'args', None))
)
|
| 3,197 | 142 |
gh_patches_debug_33090 | rasdani/github-patches | git_diff | psychopy__psychopy-947
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Polygon setEdges does not update the ShapeStim vertices
If I make a polygon object:
``` python
poly = visual.Polygon(win, edges=3, lineWidth=3, radius=3)
poly.draw()
win.flip()
```
and then want to change the shape on the fly in code, I would have thought I would do:
``` python
poly.setEdges(5)
poly.draw()
win.flip()
```
This doesn't actually change the shape that gets shown though, but the following code does:
``` python
poly.setEdges(5)
poly.setVertices(poly.vertices)
poly.draw()
win.flip()
```
I think this is because `poly.setEdges` calls `poly._calcVertices` which sets the `poly.vertices` attribute, but `poly.setEdges` doesn't pass the new array to the `poly.setVertices` method, which I gather is inherited from `ShapeStim`.
</issue>
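For reference, the workaround the reporter describes can be spelled out as a self-contained snippet; the window-creation line is added here for completeness and is not part of the original report:

```python
from psychopy import visual

win = visual.Window()
poly = visual.Polygon(win, edges=3, lineWidth=3, radius=3)

poly.setEdges(5)                 # recalculates poly.vertices internally ...
poly.setVertices(poly.vertices)  # ... but the ShapeStim only updates once they are pushed by hand
poly.draw()
win.flip()
```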
<code>
[start of psychopy/visual/polygon.py]
1
2 #!/usr/bin/env python2
3
4 '''Creates a regular polygon (triangles, pentagrams, ...)
5 as a special case of a :class:`~psychopy.visual.ShapeStim`'''
6
7 # Part of the PsychoPy library
8 # Copyright (C) 2015 Jonathan Peirce
9 # Distributed under the terms of the GNU General Public License (GPL).
10
11 import psychopy # so we can get the __path__
12
13 from psychopy.visual.shape import ShapeStim
14 from psychopy.tools.attributetools import attributeSetter, setAttribute
15
16 import numpy
17
18
19 class Polygon(ShapeStim):
20 """Creates a regular polygon (triangles, pentagrams, ...) as a special case of a :class:`~psychopy.visual.ShapeStim`
21
22 (New in version 1.72.00)
23 """
24 def __init__(self, win, edges=3, radius=.5, **kwargs):
25 """
26 Polygon accepts all input parameters that :class:`~psychopy.visual.ShapeStim` accepts, except for vertices and closeShape.
27 """
28 #what local vars are defined (these are the init params) for use by __repr__
29 self._initParams = dir()
30 self._initParams.remove('self')
31 #kwargs isn't a parameter, but a list of params
32 self._initParams.remove('kwargs')
33 self._initParams.extend(kwargs)
34 self.autoLog = False #but will be changed if needed at end of init
35 self.__dict__['edges'] = edges
36 self.radius = numpy.asarray(radius)
37 self._calcVertices()
38 kwargs['closeShape'] = True # Make sure nobody messes around here
39 kwargs['vertices'] = self.vertices
40 super(Polygon, self).__init__(win, **kwargs)
41
42 def _calcVertices(self):
43 d = numpy.pi*2/ self.edges
44 self.vertices = numpy.asarray([
45 numpy.asarray(
46 (numpy.sin(e*d), numpy.cos(e*d))
47 ) * self.radius
48 for e in xrange(int(round(self.edges)))
49 ])
50
51 @attributeSetter
52 def edges(self, edges):
53 """Int or float. Number of edges of the polygon. Floats are rounded to int.
54 :ref:`Operations <attrib-operations>` supported."""
55 self.__dict__['edges'] = edges
56 self._calcVertices()
57 def setEdges(self, edges, operation='', log=None):
58 """Usually you can use 'stim.attribute = value' syntax instead,
59 but use this method if you need to suppress the log message"""
60 setAttribute(self, 'edges', edges, log, operation)
61
62 @attributeSetter
63 def radius(self, radius):
64 """float, int, tuple, list or 2x1 array
65 Radius of the Polygon (distance from the center to the corners).
66 May be a -2tuple or list to stretch the polygon asymmetrically.
67
68 :ref:`Operations <attrib-operations>` supported.
69
70 Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging."""
71 self.__dict__['radius'] = numpy.array(radius)
72 self._calcVertices()
73 self.setVertices(self.vertices, log=False)
74 def setRadius(self, radius, operation='', log=None):
75 """Usually you can use 'stim.attribute = value' syntax instead,
76 but use this method if you need to suppress the log message"""
77 setAttribute(self, 'radius', radius, log, operation)
[end of psychopy/visual/polygon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/psychopy/visual/polygon.py b/psychopy/visual/polygon.py
--- a/psychopy/visual/polygon.py
+++ b/psychopy/visual/polygon.py
@@ -47,13 +47,14 @@
) * self.radius
for e in xrange(int(round(self.edges)))
])
-
+
@attributeSetter
def edges(self, edges):
"""Int or float. Number of edges of the polygon. Floats are rounded to int.
:ref:`Operations <attrib-operations>` supported."""
self.__dict__['edges'] = edges
self._calcVertices()
+ self.setVertices(self.vertices, log=False)
def setEdges(self, edges, operation='', log=None):
"""Usually you can use 'stim.attribute = value' syntax instead,
but use this method if you need to suppress the log message"""
@@ -66,7 +67,7 @@
May be a -2tuple or list to stretch the polygon asymmetrically.
:ref:`Operations <attrib-operations>` supported.
-
+
Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging."""
self.__dict__['radius'] = numpy.array(radius)
self._calcVertices()
@@ -74,4 +75,4 @@
def setRadius(self, radius, operation='', log=None):
"""Usually you can use 'stim.attribute = value' syntax instead,
but use this method if you need to suppress the log message"""
- setAttribute(self, 'radius', radius, log, operation)
\ No newline at end of file
+ setAttribute(self, 'radius', radius, log, operation)
|
| 1,611 | 374 |
gh_patches_debug_21543 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-859
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test webhook with 1.2.3 vs 2018.11.08 returns 500 error 'No such event: account.external_00000000000000'
Hello, I am using a Stripe account in TEST mode with API version 2018.11.08 and dj-stripe version 1.2.3. When sending a test request to the webhook, the view returns 500 instead of 200, but real events triggered in Stripe are handled correctly.
In the Django admin or in Stripe's logs I can see that test events have event ids such as "account.external_00000000000000" or "balance.available_00000000000000", etc., but the library only checks against TEST_EVENT_ID = "evt_00000000000000".
</issue>
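As an aside, the suffix pattern the report points out can be captured by a small standalone check. This is only an illustration of the idea (the helper name and the last example id are invented); the actual change lands in `WebhookEventTrigger.is_test_event` in the patch further down:

```python
# Illustration only: every dashboard-generated test event id quoted above shares
# the numeric suffix, even though the prefixes differ.
def is_dashboard_test_event(event_id):  # invented helper name
    return bool(event_id) and event_id.endswith("_00000000000000")

assert is_dashboard_test_event("evt_00000000000000")
assert is_dashboard_test_event("account.external_00000000000000")
assert is_dashboard_test_event("balance.available_00000000000000")
assert not is_dashboard_test_event("evt_realEventId123")  # made-up non-test id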
<code>
[start of djstripe/models/webhooks.py]
1 import json
2 import warnings
3 from traceback import format_exc
4
5 import stripe
6 from django.db import models
7 from django.utils.functional import cached_property
8
9 from .. import settings as djstripe_settings
10 from .. import webhooks
11 from ..context_managers import stripe_temporary_api_version
12 from ..fields import JSONField
13 from ..signals import webhook_processing_error
14 from ..utils import fix_django_headers
15 from .base import logger
16 from .core import Event
17
18
19 def _get_version():
20 from .. import __version__
21
22 return __version__
23
24
25 class WebhookEventTrigger(models.Model):
26 """
27 An instance of a request that reached the server endpoint for Stripe webhooks.
28
29 Webhook Events are initially **UNTRUSTED**, as it is possible for any web entity to
30 post any data to our webhook url. Data posted may be valid Stripe information, garbage, or even malicious.
31 The 'valid' flag in this model monitors this.
32 """
33
34 id = models.BigAutoField(primary_key=True)
35 remote_ip = models.GenericIPAddressField(help_text="IP address of the request client.")
36 headers = JSONField()
37 body = models.TextField(blank=True)
38 valid = models.BooleanField(
39 default=False, help_text="Whether or not the webhook event has passed validation"
40 )
41 processed = models.BooleanField(
42 default=False,
43 help_text="Whether or not the webhook event has been successfully processed",
44 )
45 exception = models.CharField(max_length=128, blank=True)
46 traceback = models.TextField(
47 blank=True, help_text="Traceback if an exception was thrown during processing"
48 )
49 event = models.ForeignKey(
50 "Event",
51 on_delete=models.SET_NULL,
52 null=True,
53 blank=True,
54 help_text="Event object contained in the (valid) Webhook",
55 )
56 djstripe_version = models.CharField(
57 max_length=32,
58 default=_get_version, # Needs to be a callable, otherwise it's a db default.
59 help_text="The version of dj-stripe when the webhook was received",
60 )
61 created = models.DateTimeField(auto_now_add=True)
62 updated = models.DateTimeField(auto_now=True)
63
64 @classmethod
65 def from_request(cls, request):
66 """
67 Create, validate and process a WebhookEventTrigger given a Django
68 request object.
69
70 The process is three-fold:
71 1. Create a WebhookEventTrigger object from a Django request.
72 2. Validate the WebhookEventTrigger as a Stripe event using the API.
73 3. If valid, process it into an Event object (and child resource).
74 """
75
76 headers = fix_django_headers(request.META)
77 assert headers
78 try:
79 body = request.body.decode(request.encoding or "utf-8")
80 except Exception:
81 body = "(error decoding body)"
82
83 ip = request.META.get("REMOTE_ADDR")
84 if ip is None:
85 warnings.warn(
86 "Could not determine remote IP (missing REMOTE_ADDR). "
87 "This is likely an issue with your wsgi/server setup."
88 )
89 ip = "0.0.0.0"
90 obj = cls.objects.create(headers=headers, body=body, remote_ip=ip)
91
92 try:
93 obj.valid = obj.validate()
94 if obj.valid:
95 if djstripe_settings.WEBHOOK_EVENT_CALLBACK:
96 # If WEBHOOK_EVENT_CALLBACK, pass it for processing
97 djstripe_settings.WEBHOOK_EVENT_CALLBACK(obj)
98 else:
99 # Process the item (do not save it, it'll get saved below)
100 obj.process(save=False)
101 except Exception as e:
102 max_length = WebhookEventTrigger._meta.get_field("exception").max_length
103 obj.exception = str(e)[:max_length]
104 obj.traceback = format_exc()
105
106 # Send the exception as the webhook_processing_error signal
107 webhook_processing_error.send(
108 sender=WebhookEventTrigger, exception=e, data=getattr(e, "http_body", "")
109 )
110
111 # re-raise the exception so Django sees it
112 raise e
113 finally:
114 obj.save()
115
116 return obj
117
118 @cached_property
119 def json_body(self):
120 try:
121 return json.loads(self.body)
122 except ValueError:
123 return {}
124
125 @property
126 def is_test_event(self):
127 return self.json_body.get("id") == webhooks.TEST_EVENT_ID
128
129 def validate(self, api_key=None):
130 """
131 The original contents of the Event message must be confirmed by
132 refetching it and comparing the fetched data with the original data.
133
134 This function makes an API call to Stripe to redownload the Event data
135 and returns whether or not it matches the WebhookEventTrigger data.
136 """
137
138 local_data = self.json_body
139 if "id" not in local_data or "livemode" not in local_data:
140 return False
141
142 if self.is_test_event:
143 logger.info("Test webhook received: {}".format(local_data))
144 return False
145
146 if djstripe_settings.WEBHOOK_VALIDATION is None:
147 # validation disabled
148 return True
149 elif (
150 djstripe_settings.WEBHOOK_VALIDATION == "verify_signature"
151 and djstripe_settings.WEBHOOK_SECRET
152 ):
153 try:
154 stripe.WebhookSignature.verify_header(
155 self.body,
156 self.headers.get("stripe-signature"),
157 djstripe_settings.WEBHOOK_SECRET,
158 djstripe_settings.WEBHOOK_TOLERANCE,
159 )
160 except stripe.error.SignatureVerificationError:
161 return False
162 else:
163 return True
164
165 livemode = local_data["livemode"]
166 api_key = api_key or djstripe_settings.get_default_api_key(livemode)
167
168 # Retrieve the event using the api_version specified in itself
169 with stripe_temporary_api_version(local_data["api_version"], validate=False):
170 remote_data = Event.stripe_class.retrieve(id=local_data["id"], api_key=api_key)
171
172 return local_data["data"] == remote_data["data"]
173
174 def process(self, save=True):
175 # Reset traceback and exception in case of reprocessing
176 self.exception = ""
177 self.traceback = ""
178
179 self.event = Event.process(self.json_body)
180 self.processed = True
181 if save:
182 self.save()
183
184 return self.event
185
[end of djstripe/models/webhooks.py]
[start of djstripe/webhooks.py]
1 """
2 Utils related to processing or registering for webhooks
3
4 A model registers itself here if it wants to be in the list of processing
5 functions for a particular webhook. Each processor will have the ability
6 to modify the event object, access event data, and do what it needs to do
7
8 registrations are keyed by top-level event type (e.g. "invoice", "customer", etc)
9 Each registration entry is a list of processors
10 Each processor in these lists is a function to be called
11 The function signature is:
12 <Event object>
13
14 There is also a "global registry" which is just a list of processors (as defined above)
15
16 NOTE: global processors are called before other processors.
17 """
18 import functools
19 import itertools
20 from collections import defaultdict
21
22 __all__ = ["handler", "handler_all", "call_handlers"]
23
24
25 registrations = defaultdict(list)
26 registrations_global = list()
27
28 TEST_EVENT_ID = "evt_00000000000000"
29
30
31 def handler(*event_types):
32 """
33 Decorator that registers a function as a webhook handler.
34
35 Functions can be registered for event types (e.g. 'customer') or
36 fully qualified event sub-types (e.g. 'customer.subscription.deleted').
37
38 If an event type is specified, the handler will receive callbacks for
39 ALL webhook events of that type. For example, if 'customer' is specified,
40 the handler will receive events for 'customer.subscription.created',
41 'customer.subscription.updated', etc.
42
43 :param event_types: The event type(s) that should be handled.
44 :type event_types: str.
45 """
46
47 def decorator(func):
48 for event_type in event_types:
49 registrations[event_type].append(func)
50 return func
51
52 return decorator
53
54
55 def handler_all(func=None):
56 """
57 Decorator that registers a function as a webhook handler for ALL webhook events.
58
59 Handles all webhooks regardless of event type or sub-type.
60 """
61 if not func:
62 return functools.partial(handler_all)
63
64 registrations_global.append(func)
65
66 return func
67
68
69 def call_handlers(event):
70 """
71 Invoke all handlers for the provided event type/sub-type.
72
73 The handlers are invoked in the following order:
74
75 1. Global handlers
76 2. Event type handlers
77 3. Event sub-type handlers
78
79 Handlers within each group are invoked in order of registration.
80
81 :param event: The event model object.
82 :type event: ``djstripe.models.Event``
83 """
84 chain = [registrations_global]
85
86 # Build up a list of handlers with each qualified part of the event
87 # category and verb. For example, "customer.subscription.created" creates:
88 # 1. "customer"
89 # 2. "customer.subscription"
90 # 3. "customer.subscription.created"
91 for index, _ in enumerate(event.parts):
92 qualified_event_type = ".".join(event.parts[: (index + 1)])
93 chain.append(registrations[qualified_event_type])
94
95 for handler_func in itertools.chain(*chain):
96 handler_func(event=event)
97
[end of djstripe/webhooks.py]
</code>
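For orientation, the registration API documented in the `djstripe/webhooks.py` module docstring above can be used roughly as follows; the handler names and printed text are invented, and only the decorators come from the file itself:

```python
from djstripe import webhooks

@webhooks.handler("customer.subscription")   # matches customer.subscription.* events
def on_subscription_event(event):
    print("subscription webhook:", event)

@webhooks.handler_all
def on_any_event(event):
    print("webhook received:", event)
```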
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/djstripe/models/webhooks.py b/djstripe/models/webhooks.py
--- a/djstripe/models/webhooks.py
+++ b/djstripe/models/webhooks.py
@@ -7,7 +7,6 @@
from django.utils.functional import cached_property
from .. import settings as djstripe_settings
-from .. import webhooks
from ..context_managers import stripe_temporary_api_version
from ..fields import JSONField
from ..signals import webhook_processing_error
@@ -124,7 +123,8 @@
@property
def is_test_event(self):
- return self.json_body.get("id") == webhooks.TEST_EVENT_ID
+ event_id = self.json_body.get("id")
+ return event_id and event_id.endswith("_00000000000000")
def validate(self, api_key=None):
"""
diff --git a/djstripe/webhooks.py b/djstripe/webhooks.py
--- a/djstripe/webhooks.py
+++ b/djstripe/webhooks.py
@@ -25,6 +25,8 @@
registrations = defaultdict(list)
registrations_global = list()
+# Legacy. In previous versions of Stripe API, all test events used this ID.
+# Check out issue #779 for more information.
TEST_EVENT_ID = "evt_00000000000000"
|
| 3,457 | 303 |
gh_patches_debug_50470 | rasdani/github-patches | git_diff | cython__cython-4942
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Annotated attributes of cclass are not supporting pointers
<!--
**PLEASE READ THIS FIRST:**
- Do not use the bug and feature tracker for support requests. Use the `cython-users` mailing list instead.
- Did you search for similar issues already? Please do, it helps to save us precious time that we otherwise could not invest into development.
- Did you try the latest master branch or pre-release? It might already have what you want to report. Also see the [Changelog](https://github.com/cython/cython/blob/master/CHANGES.rst) regarding recent changes.
-->
**Describe the bug**
Compilation fails when an attribute of a cclass is declared using an annotated type that contains a pointer.
**To Reproduce**
Following code:
```python
import cython
@cython.cclass
class Foo:
a: cython.pointer(cython.int)
def bar(self):
self.a = cython.NULL
```
fails during compilation with error:
```
$ cython -3 test.py
Error compiling Cython file:
------------------------------------------------------------
...
@cython.cclass
class Foo:
a: cython.pointer(cython.int)
def bar(self):
self.a = cython.NULL
^
------------------------------------------------------------
test.py:8:23: Cannot convert 'void *' to Python object
```
**Expected behavior**
Compilation should be successful.
**Environment (please complete the following information):**
- OS: Linux
- Python version: Python 3.9.2
- Cython version: master
**Additional context**
When the `declare()` statement or the `cython.p_int` type is used, compilation is successful:
```python
import cython
@cython.cclass
class Foo:
a = cython.declare(cython.pointer(cython.int))
def bar(self):
self.a = cython.NULL
```
```python
import cython
@cython.cclass
class Foo:
a: cython.p_int
def bar(self):
self.a = cython.NULL
```
</issue>
<code>
[start of docs/examples/tutorial/clibraries/queue.py]
1 from cython.cimports import cqueue
2
3 @cython.cclass
4 class Queue:
5 _c_queue = cython.declare(cython.pointer(cqueue.Queue))
6
7 def __cinit__(self):
8 self._c_queue = cqueue.queue_new()
9
[end of docs/examples/tutorial/clibraries/queue.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/examples/tutorial/clibraries/queue.py b/docs/examples/tutorial/clibraries/queue.py
--- a/docs/examples/tutorial/clibraries/queue.py
+++ b/docs/examples/tutorial/clibraries/queue.py
@@ -2,7 +2,7 @@
@cython.cclass
class Queue:
- _c_queue = cython.declare(cython.pointer(cqueue.Queue))
+ _c_queue: cython.pointer(cqueue.Queue)
def __cinit__(self):
self._c_queue = cqueue.queue_new()
|
| 1,040 | 114 |
gh_patches_debug_40379 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-2409
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Knowing the violation code it is quite hard to find the violation description
**Use Case.** I am testing wemake-python-styleguide on my project.
```
...
22:5 WPS337 Found multiline conditions
if (
^
Full list of violations and explanations:
https://wemake-python-stylegui.de/en/0.13.4/pages/usage/violations/
make: *** [infra/make/lint.mk:3: lint] Error 1
```
To get more information on what `WPS337` is for, I have to do several steps:
* Open https://wemake-python-stylegui.de/en/0.13.4/pages/usage/violations/
* Realize that WPS337 is probably behind the link titled `WPS300 - WPS399`
* And now I am stuck; I can `Ctrl + F` → `337`, but that won't help; so I try to find the word `multiline`
* and finally I reach the description I wanted: https://wemake-python-stylegui.de/en/0.13.4/pages/usage/violations/consistency.html#wemake_python_styleguide.violations.consistency.MultilineConditionsViolation
This takes time, especially when the title of the violation on this page does not exactly correspond to the name the tool prints on the command line.
**Ideally,** I would see a link to the violation description right in the console, like this:
```
...
22:5 WPS337 Found multiline conditions https://wps.it/0.13.4/WPS337/
if (
^
...
```
Regrettably, the wps.to domain is already registered, but wps.it is available.
**If that is not possible,** it would help if the docs pages at least included the violation codes, to make the page searchable via `Ctrl + F`.
</issue>
<code>
[start of wemake_python_styleguide/violations/base.py]
1 """
2 Contains detailed technical information about :term:`violation` internals.
3
4 .. _violations:
5
6 Violations API
7 --------------
8
9 .. currentmodule:: wemake_python_styleguide.violations.base
10
11 .. autoclasstree:: wemake_python_styleguide.violations.base
12
13 .. autosummary::
14 :nosignatures:
15
16 ASTViolation
17 MaybeASTViolation
18 TokenizeViolation
19 SimpleViolation
20
21 Violation cannot have more than one base class.
22 See :ref:`tutorial` for more information about choosing a correct base class.
23
24 Conventions
25 ~~~~~~~~~~~
26
27 - Each violation class name should end with "Violation"
28 - Each violation must have a long docstring with full description
29 - Each violation must have "Reasoning" and "Solution" sections
30 - Each violation must have "versionadded" policy
31 - Each violation should have an example with correct and wrong usages
32 - If violation error template should have a parameter
33 it should be the last part of the text: ``: {0}``
34
35 Deprecating a violation
36 ~~~~~~~~~~~~~~~~~~~~~~~
37
38 When you want to mark some violation as deprecated,
39 then assign ``deprecated`` boolean flag to it:
40
41 .. code:: python
42
43 @final
44 class SomeViolation(ASTViolation):
45 deprecated = True
46
47 Reference
48 ~~~~~~~~~
49
50 """
51
52 import abc
53 import ast
54 import enum
55 import tokenize
56 from typing import Callable, ClassVar, Optional, Set, Tuple, Union
57
58 from typing_extensions import final
59
60 #: General type for all possible nodes where error happens.
61 ErrorNode = Union[
62 ast.AST,
63 tokenize.TokenInfo,
64 None,
65 ]
66
67 #: We use this type to define helper classes with callbacks to add violations.
68 ErrorCallback = Callable[['BaseViolation'], None]
69
70
71 @enum.unique
72 class ViolationPostfixes(enum.Enum):
73 """String values of postfixes used for violation baselines."""
74
75 bigger_than = ' > {0}'
76 less_than = ' < {0}'
77
78
79 class BaseViolation(object, metaclass=abc.ABCMeta):
80 """
81 Abstract base class for all style violations.
82
83 It basically just defines how to create any error and how to format
84 this error later on.
85
86 Each subclass must define ``error_template`` and ``code`` fields.
87
88 Attributes:
89 error_template: message that will be shown to user after formatting.
90 code: unique violation number. Used to identify the violation.
91 previous_codes: just a documentation thing to track changes in time.
92 deprecated: indicates that this violation will be removed soon.
93 postfix_template: indicates message that we show at the very end.
94
95 """
96
97 error_template: ClassVar[str]
98 code: ClassVar[int]
99 previous_codes: ClassVar[Set[int]]
100 deprecated: ClassVar[bool] = False
101
102 # We use this code to show base metrics and thresholds mostly:
103 postfix_template: ClassVar[ViolationPostfixes] = (
104 ViolationPostfixes.bigger_than
105 )
106
107 def __init__(
108 self,
109 node: ErrorNode,
110 text: Optional[str] = None,
111 baseline: Optional[int] = None,
112 ) -> None:
113 """
114 Creates a new instance of an abstract violation.
115
116 Arguments:
117 node: violation was raised by this node. If applicable.
118 text: extra text to format the final message. If applicable.
119 baseline: some complexity violations show the logic threshold here.
120
121 """
122 self._node = node
123 self._text = text
124 self._baseline = baseline
125
126 @final
127 def message(self) -> str:
128 """
129 Returns error's formatted message with code and reason.
130
131 Conditionally formats the ``error_template`` if it is required.
132 """
133 return '{0} {1}{2}'.format(
134 self._full_code(),
135 self.error_template.format(self._text),
136 self._postfix_information(),
137 )
138
139 @final
140 def node_items(self) -> Tuple[int, int, str]:
141 """Returns tuple to match ``flake8`` API format."""
142 return (*self._location(), self.message())
143
144 @final
145 def _full_code(self) -> str:
146 """
147 Returns fully formatted code.
148
149 Adds violation letter to the numbers.
150 Also ensures that codes like ``3`` will be represented as ``WPS003``.
151 """
152 return 'WPS{0}'.format(str(self.code).zfill(3))
153
154 @final
155 def _postfix_information(self) -> str:
156 """
157 Adds useful information to the end of the violation message.
158
159 Useful for complexity baselines and other thresholds.
160 """
161 if self._baseline is None:
162 return ''
163 return self.postfix_template.value.format(self._baseline)
164
165 @abc.abstractmethod
166 def _location(self) -> Tuple[int, int]:
167 """Base method for showing error location."""
168
169
170 class _BaseASTViolation(BaseViolation, metaclass=abc.ABCMeta):
171 """Used as a based type for all ``ast`` violations."""
172
173 _node: Optional[ast.AST]
174
175 @final
176 def _location(self) -> Tuple[int, int]:
177 line_number = getattr(self._node, 'lineno', 0)
178 column_offset = getattr(self._node, 'col_offset', 0)
179 return line_number, column_offset
180
181
182 class ASTViolation(_BaseASTViolation, metaclass=abc.ABCMeta):
183 """Violation for ``ast`` based style visitors."""
184
185 _node: ast.AST
186
187
188 class MaybeASTViolation(_BaseASTViolation, metaclass=abc.ABCMeta):
189 """
190 Violation for ``ast`` and modules visitors.
191
192 Is used for violations that share the same rule for nodes and module names.
193 Is wildly used for naming rules.
194 """
195
196 def __init__(
197 self,
198 node: Optional[ast.AST] = None,
199 text: Optional[str] = None,
200 baseline: Optional[int] = None,
201 ) -> None:
202 """Creates new instance of module violation without explicit node."""
203 super().__init__(node, text=text, baseline=baseline)
204
205
206 class TokenizeViolation(BaseViolation, metaclass=abc.ABCMeta):
207 """Violation for ``tokenize`` based visitors."""
208
209 _node: tokenize.TokenInfo
210
211 @final
212 def _location(self) -> Tuple[int, int]:
213 return self._node.start
214
215
216 class SimpleViolation(BaseViolation, metaclass=abc.ABCMeta):
217 """Violation for cases where there's no associated nodes."""
218
219 _node: None
220
221 def __init__(
222 self,
223 node=None,
224 text: Optional[str] = None,
225 baseline: Optional[int] = None,
226 ) -> None:
227 """Creates new instance of simple style violation."""
228 super().__init__(node, text=text, baseline=baseline)
229
230 @final
231 def _location(self) -> Tuple[int, int]:
232 """
233 Return violation location inside the file.
234
235 Default location is in the so-called "file beginning".
236 Cannot be ignored by inline ``noqa`` comments.
237 """
238 return 0, 0
239
[end of wemake_python_styleguide/violations/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wemake_python_styleguide/violations/base.py b/wemake_python_styleguide/violations/base.py
--- a/wemake_python_styleguide/violations/base.py
+++ b/wemake_python_styleguide/violations/base.py
@@ -76,7 +76,8 @@
less_than = ' < {0}'
-class BaseViolation(object, metaclass=abc.ABCMeta):
+# TODO: remove `noqa` after a new release:
+class BaseViolation(object, metaclass=abc.ABCMeta): # noqa: WPS338
"""
Abstract base class for all style violations.
@@ -99,11 +100,34 @@
previous_codes: ClassVar[Set[int]]
deprecated: ClassVar[bool] = False
+ # assigned in __init_subclass__
+ full_code: ClassVar[str]
+ summary: ClassVar[str]
+
# We use this code to show base metrics and thresholds mostly:
postfix_template: ClassVar[ViolationPostfixes] = (
ViolationPostfixes.bigger_than
)
+ def __init_subclass__(cls, **kwargs) -> None:
+ """Sets additional values for subclasses."""
+ super().__init_subclass__(**kwargs)
+ violation_code = getattr(cls, 'code', None)
+ if violation_code is None:
+ return
+ if cls.__doc__ is None:
+ raise TypeError(
+ 'Please include a docstring documenting {0}'.format(cls),
+ )
+ # this is mostly done for docs to display the full code,
+ # allowing its indexing in search engines and better discoverability
+ cls.full_code = cls._full_code()
+ cls.summary = cls.__doc__.lstrip().split('\n', maxsplit=1)[0]
+ # this hack adds full code to summary table in the docs
+ cls.__doc__ = _prepend_skipping_whitespaces(
+ '{0} — '.format(cls.full_code), cls.__doc__,
+ )
+
def __init__(
self,
node: ErrorNode,
@@ -131,7 +155,7 @@
Conditionally formats the ``error_template`` if it is required.
"""
return '{0} {1}{2}'.format(
- self._full_code(),
+ self.full_code,
self.error_template.format(self._text),
self._postfix_information(),
)
@@ -142,14 +166,15 @@
return (*self._location(), self.message())
@final
- def _full_code(self) -> str:
+ @classmethod
+ def _full_code(cls) -> str:
"""
Returns fully formatted code.
Adds violation letter to the numbers.
Also ensures that codes like ``3`` will be represented as ``WPS003``.
"""
- return 'WPS{0}'.format(str(self.code).zfill(3))
+ return 'WPS{0}'.format(str(cls.code).zfill(3))
@final
def _postfix_information(self) -> str:
@@ -236,3 +261,9 @@
Cannot be ignored by inline ``noqa`` comments.
"""
return 0, 0
+
+
+def _prepend_skipping_whitespaces(prefix: str, text: str) -> str:
+ lstripped_text = text.lstrip()
+ leading_whitespaces = text[:len(text) - len(lstripped_text)]
+ return leading_whitespaces + prefix + lstripped_text
|
{"golden_diff": "diff --git a/wemake_python_styleguide/violations/base.py b/wemake_python_styleguide/violations/base.py\n--- a/wemake_python_styleguide/violations/base.py\n+++ b/wemake_python_styleguide/violations/base.py\n@@ -76,7 +76,8 @@\n less_than = ' < {0}'\n \n \n-class BaseViolation(object, metaclass=abc.ABCMeta):\n+# TODO: remove `noqa` after a new release:\n+class BaseViolation(object, metaclass=abc.ABCMeta): # noqa: WPS338\n \"\"\"\n Abstract base class for all style violations.\n \n@@ -99,11 +100,34 @@\n previous_codes: ClassVar[Set[int]]\n deprecated: ClassVar[bool] = False\n \n+ # assigned in __init_subclass__\n+ full_code: ClassVar[str]\n+ summary: ClassVar[str]\n+\n # We use this code to show base metrics and thresholds mostly:\n postfix_template: ClassVar[ViolationPostfixes] = (\n ViolationPostfixes.bigger_than\n )\n \n+ def __init_subclass__(cls, **kwargs) -> None:\n+ \"\"\"Sets additional values for subclasses.\"\"\"\n+ super().__init_subclass__(**kwargs)\n+ violation_code = getattr(cls, 'code', None)\n+ if violation_code is None:\n+ return\n+ if cls.__doc__ is None:\n+ raise TypeError(\n+ 'Please include a docstring documenting {0}'.format(cls),\n+ )\n+ # this is mostly done for docs to display the full code,\n+ # allowing its indexing in search engines and better discoverability\n+ cls.full_code = cls._full_code()\n+ cls.summary = cls.__doc__.lstrip().split('\\n', maxsplit=1)[0]\n+ # this hack adds full code to summary table in the docs\n+ cls.__doc__ = _prepend_skipping_whitespaces(\n+ '{0} \u2014 '.format(cls.full_code), cls.__doc__,\n+ )\n+\n def __init__(\n self,\n node: ErrorNode,\n@@ -131,7 +155,7 @@\n Conditionally formats the ``error_template`` if it is required.\n \"\"\"\n return '{0} {1}{2}'.format(\n- self._full_code(),\n+ self.full_code,\n self.error_template.format(self._text),\n self._postfix_information(),\n )\n@@ -142,14 +166,15 @@\n return (*self._location(), self.message())\n \n @final\n- def _full_code(self) -> str:\n+ @classmethod\n+ def _full_code(cls) -> str:\n \"\"\"\n Returns fully formatted code.\n \n Adds violation letter to the numbers.\n Also ensures that codes like ``3`` will be represented as ``WPS003``.\n \"\"\"\n- return 'WPS{0}'.format(str(self.code).zfill(3))\n+ return 'WPS{0}'.format(str(cls.code).zfill(3))\n \n @final\n def _postfix_information(self) -> str:\n@@ -236,3 +261,9 @@\n Cannot be ignored by inline ``noqa`` comments.\n \"\"\"\n return 0, 0\n+\n+\n+def _prepend_skipping_whitespaces(prefix: str, text: str) -> str:\n+ lstripped_text = text.lstrip()\n+ leading_whitespaces = text[:len(text) - len(lstripped_text)]\n+ return leading_whitespaces + prefix + lstripped_text\n", "issue": "Knowing the violation code it is quite hard to find the violation description\n**Use Case.** I am testing wemake-python-styleguide on my project.\r\n\r\n```\r\n...\r\n 22:5 WPS337 Found multiline conditions\r\n if (\r\n ^\r\n\r\nFull list of violations and explanations:\r\nhttps://wemake-python-stylegui.de/en/0.13.4/pages/usage/violations/\r\nmake: *** [infra/make/lint.mk:3: lint] Error 1\r\n```\r\n\r\nTo get more information on what `WPS337` is for, I have to do several steps:\r\n\r\n* Open https://wemake-python-stylegui.de/en/0.13.4/pages/usage/violations/\r\n* Realize that WPS337 is probably behind the link titled `WPS300 - WPS399`\r\n* And now I am stuck; I can `Ctrl + F` \u2192 `337`, but that won't help; so I try to find the word `multiline`\r\n* and finally I reach the description I wanted: 
https://wemake-python-stylegui.de/en/0.13.4/pages/usage/violations/consistency.html#wemake_python_styleguide.violations.consistency.MultilineConditionsViolation\r\n\r\nThis takes time, especially when the title of the violation on this page does not exactly correspond to the name the tool prints on command line.\r\n\r\n**Ideally,** I would see a link to the violation description right in the console, like this:\r\n\r\n```\r\n...\r\n 22:5 WPS337 Found multiline conditions https://wps.it/0.13.4/WPS337/\r\n if (\r\n ^\r\n...\r\n```\r\n\r\nRegrettably, wps.to domain is registered already, but wps.it is there.\r\n\r\n**If that is not possible,** it would help if the docs pages at least included the violation codes, to make the page searchable via `Ctrl + F`.\n", "before_files": [{"content": "\"\"\"\nContains detailed technical information about :term:`violation` internals.\n\n.. _violations:\n\nViolations API\n--------------\n\n.. currentmodule:: wemake_python_styleguide.violations.base\n\n.. autoclasstree:: wemake_python_styleguide.violations.base\n\n.. autosummary::\n :nosignatures:\n\n ASTViolation\n MaybeASTViolation\n TokenizeViolation\n SimpleViolation\n\nViolation cannot have more than one base class.\nSee :ref:`tutorial` for more information about choosing a correct base class.\n\nConventions\n~~~~~~~~~~~\n\n- Each violation class name should end with \"Violation\"\n- Each violation must have a long docstring with full description\n- Each violation must have \"Reasoning\" and \"Solution\" sections\n- Each violation must have \"versionadded\" policy\n- Each violation should have an example with correct and wrong usages\n- If violation error template should have a parameter\n it should be the last part of the text: ``: {0}``\n\nDeprecating a violation\n~~~~~~~~~~~~~~~~~~~~~~~\n\nWhen you want to mark some violation as deprecated,\nthen assign ``deprecated`` boolean flag to it:\n\n.. code:: python\n\n @final\n class SomeViolation(ASTViolation):\n deprecated = True\n\nReference\n~~~~~~~~~\n\n\"\"\"\n\nimport abc\nimport ast\nimport enum\nimport tokenize\nfrom typing import Callable, ClassVar, Optional, Set, Tuple, Union\n\nfrom typing_extensions import final\n\n#: General type for all possible nodes where error happens.\nErrorNode = Union[\n ast.AST,\n tokenize.TokenInfo,\n None,\n]\n\n#: We use this type to define helper classes with callbacks to add violations.\nErrorCallback = Callable[['BaseViolation'], None]\n\n\[email protected]\nclass ViolationPostfixes(enum.Enum):\n \"\"\"String values of postfixes used for violation baselines.\"\"\"\n\n bigger_than = ' > {0}'\n less_than = ' < {0}'\n\n\nclass BaseViolation(object, metaclass=abc.ABCMeta):\n \"\"\"\n Abstract base class for all style violations.\n\n It basically just defines how to create any error and how to format\n this error later on.\n\n Each subclass must define ``error_template`` and ``code`` fields.\n\n Attributes:\n error_template: message that will be shown to user after formatting.\n code: unique violation number. 
Used to identify the violation.\n previous_codes: just a documentation thing to track changes in time.\n deprecated: indicates that this violation will be removed soon.\n postfix_template: indicates message that we show at the very end.\n\n \"\"\"\n\n error_template: ClassVar[str]\n code: ClassVar[int]\n previous_codes: ClassVar[Set[int]]\n deprecated: ClassVar[bool] = False\n\n # We use this code to show base metrics and thresholds mostly:\n postfix_template: ClassVar[ViolationPostfixes] = (\n ViolationPostfixes.bigger_than\n )\n\n def __init__(\n self,\n node: ErrorNode,\n text: Optional[str] = None,\n baseline: Optional[int] = None,\n ) -> None:\n \"\"\"\n Creates a new instance of an abstract violation.\n\n Arguments:\n node: violation was raised by this node. If applicable.\n text: extra text to format the final message. If applicable.\n baseline: some complexity violations show the logic threshold here.\n\n \"\"\"\n self._node = node\n self._text = text\n self._baseline = baseline\n\n @final\n def message(self) -> str:\n \"\"\"\n Returns error's formatted message with code and reason.\n\n Conditionally formats the ``error_template`` if it is required.\n \"\"\"\n return '{0} {1}{2}'.format(\n self._full_code(),\n self.error_template.format(self._text),\n self._postfix_information(),\n )\n\n @final\n def node_items(self) -> Tuple[int, int, str]:\n \"\"\"Returns tuple to match ``flake8`` API format.\"\"\"\n return (*self._location(), self.message())\n\n @final\n def _full_code(self) -> str:\n \"\"\"\n Returns fully formatted code.\n\n Adds violation letter to the numbers.\n Also ensures that codes like ``3`` will be represented as ``WPS003``.\n \"\"\"\n return 'WPS{0}'.format(str(self.code).zfill(3))\n\n @final\n def _postfix_information(self) -> str:\n \"\"\"\n Adds useful information to the end of the violation message.\n\n Useful for complexity baselines and other thresholds.\n \"\"\"\n if self._baseline is None:\n return ''\n return self.postfix_template.value.format(self._baseline)\n\n @abc.abstractmethod\n def _location(self) -> Tuple[int, int]:\n \"\"\"Base method for showing error location.\"\"\"\n\n\nclass _BaseASTViolation(BaseViolation, metaclass=abc.ABCMeta):\n \"\"\"Used as a based type for all ``ast`` violations.\"\"\"\n\n _node: Optional[ast.AST]\n\n @final\n def _location(self) -> Tuple[int, int]:\n line_number = getattr(self._node, 'lineno', 0)\n column_offset = getattr(self._node, 'col_offset', 0)\n return line_number, column_offset\n\n\nclass ASTViolation(_BaseASTViolation, metaclass=abc.ABCMeta):\n \"\"\"Violation for ``ast`` based style visitors.\"\"\"\n\n _node: ast.AST\n\n\nclass MaybeASTViolation(_BaseASTViolation, metaclass=abc.ABCMeta):\n \"\"\"\n Violation for ``ast`` and modules visitors.\n\n Is used for violations that share the same rule for nodes and module names.\n Is wildly used for naming rules.\n \"\"\"\n\n def __init__(\n self,\n node: Optional[ast.AST] = None,\n text: Optional[str] = None,\n baseline: Optional[int] = None,\n ) -> None:\n \"\"\"Creates new instance of module violation without explicit node.\"\"\"\n super().__init__(node, text=text, baseline=baseline)\n\n\nclass TokenizeViolation(BaseViolation, metaclass=abc.ABCMeta):\n \"\"\"Violation for ``tokenize`` based visitors.\"\"\"\n\n _node: tokenize.TokenInfo\n\n @final\n def _location(self) -> Tuple[int, int]:\n return self._node.start\n\n\nclass SimpleViolation(BaseViolation, metaclass=abc.ABCMeta):\n \"\"\"Violation for cases where there's no associated nodes.\"\"\"\n\n _node: None\n\n 
def __init__(\n self,\n node=None,\n text: Optional[str] = None,\n baseline: Optional[int] = None,\n ) -> None:\n \"\"\"Creates new instance of simple style violation.\"\"\"\n super().__init__(node, text=text, baseline=baseline)\n\n @final\n def _location(self) -> Tuple[int, int]:\n \"\"\"\n Return violation location inside the file.\n\n Default location is in the so-called \"file beginning\".\n Cannot be ignored by inline ``noqa`` comments.\n \"\"\"\n return 0, 0\n", "path": "wemake_python_styleguide/violations/base.py"}]}
| 3,104 | 794 |
gh_patches_debug_6338
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-1164
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dbt trying to parse tags in analyses even when wrapped in `raw` tags
## Issue
### Steps to Reproduce
1. Put an unknown tag within `raw` tags in a sql file, in your _analysis_ directory (note that the bug doesn't occur in a model).
```
{% raw %}
{% form %}
date_part:
type: select
default: day
options: [hour, day, week, month]
{% endform %}
{% endraw %}
```
2. run `dbt compile`
### Expected Result
The following should be compiled:
```
{% form %}
date_part:
type: select
default: day
options: [hour, day, week, month]
{% endform %}
```
### Actual Result
Error on compilation:
```bash
Found 0 models, 0 tests, 0 archives, 1 analyses, 122 macros, 2 operations, 0 seed files
10:56:47 | Concurrency: 1 threads (target='dev')
10:56:47 |
Encountered an error:
Runtime Error
Compilation Error in analysis form_tag (analysis/analysis/form_tag.sql)
Encountered unknown tag 'form'.
line 2
{% form %}
```
### System information
OS: MacOS Mojave
Python 3.7.1
dbt 0.12.1
</issue>
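A quick way to observe the expected behaviour in isolation is to render the snippet with plain Jinja2. The only assumption is that dbt's templating follows standard Jinja semantics, where a `raw` block suppresses parsing of everything inside it:

```python
# Standalone check with jinja2 (assumed to mirror dbt's Jinja semantics).
from jinja2 import Template

sql = (
    "{% raw %}\n"
    "{% form %}\n"
    "date_part:\n"
    "  type: select\n"
    "  default: day\n"
    "{% endform %}\n"
    "{% endraw %}\n"
)

# Renders without error: the unknown `form` tag is never parsed inside
# raw/endraw, so no "Encountered unknown tag" exception is raised.
print(Template(sql).render())
```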
<code>
[start of dbt/compilation.py]
1 import itertools
2 import os
3 import json
4 from collections import OrderedDict, defaultdict
5 import sqlparse
6
7 import dbt.utils
8 import dbt.include
9 import dbt.tracking
10
11 from dbt.utils import get_materialization, NodeType, is_type
12
13 from dbt.linker import Linker
14
15 import dbt.compat
16 import dbt.context.runtime
17 import dbt.contracts.project
18 import dbt.exceptions
19 import dbt.flags
20 import dbt.loader
21 import dbt.config
22 from dbt.contracts.graph.compiled import CompiledNode, CompiledGraph
23
24 from dbt.clients.system import write_json
25 from dbt.logger import GLOBAL_LOGGER as logger
26
27 graph_file_name = 'graph.gpickle'
28 manifest_file_name = 'manifest.json'
29
30
31 def print_compile_stats(stats):
32 names = {
33 NodeType.Model: 'models',
34 NodeType.Test: 'tests',
35 NodeType.Archive: 'archives',
36 NodeType.Analysis: 'analyses',
37 NodeType.Macro: 'macros',
38 NodeType.Operation: 'operations',
39 NodeType.Seed: 'seed files',
40 }
41
42 results = {k: 0 for k in names.keys()}
43 results.update(stats)
44
45 stat_line = ", ".join(
46 ["{} {}".format(ct, names.get(t)) for t, ct in results.items()])
47
48 logger.info("Found {}".format(stat_line))
49
50
51 def _add_prepended_cte(prepended_ctes, new_cte):
52 for dct in prepended_ctes:
53 if dct['id'] == new_cte['id']:
54 dct['sql'] = new_cte['sql']
55 return
56 prepended_ctes.append(new_cte)
57
58
59 def _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):
60 for new_cte in new_prepended_ctes:
61 _add_prepended_cte(prepended_ctes, new_cte)
62
63
64 def prepend_ctes(model, manifest):
65 model, _, manifest = recursively_prepend_ctes(model, manifest)
66
67 return (model, manifest)
68
69
70 def recursively_prepend_ctes(model, manifest):
71 if model.extra_ctes_injected:
72 return (model, model.extra_ctes, manifest)
73
74 if dbt.flags.STRICT_MODE:
75 # ensure that all the nodes in this manifest are compiled
76 CompiledGraph(**manifest.to_flat_graph())
77
78 prepended_ctes = []
79
80 for cte in model.extra_ctes:
81 cte_id = cte['id']
82 cte_to_add = manifest.nodes.get(cte_id)
83 cte_to_add, new_prepended_ctes, manifest = recursively_prepend_ctes(
84 cte_to_add, manifest)
85
86 _extend_prepended_ctes(prepended_ctes, new_prepended_ctes)
87 new_cte_name = '__dbt__CTE__{}'.format(cte_to_add.get('name'))
88 sql = ' {} as (\n{}\n)'.format(new_cte_name, cte_to_add.compiled_sql)
89 _add_prepended_cte(prepended_ctes, {'id': cte_id, 'sql': sql})
90
91 model.prepend_ctes(prepended_ctes)
92
93 manifest.nodes[model.unique_id] = model
94
95 return (model, prepended_ctes, manifest)
96
97
98 class Compiler(object):
99 def __init__(self, config):
100 self.config = config
101
102 def initialize(self):
103 dbt.clients.system.make_directory(self.config.target_path)
104 dbt.clients.system.make_directory(self.config.modules_path)
105
106 def compile_node(self, node, manifest, extra_context=None):
107 if extra_context is None:
108 extra_context = {}
109
110 logger.debug("Compiling {}".format(node.get('unique_id')))
111
112 data = node.to_dict()
113 data.update({
114 'compiled': False,
115 'compiled_sql': None,
116 'extra_ctes_injected': False,
117 'extra_ctes': [],
118 'injected_sql': None,
119 })
120 compiled_node = CompiledNode(**data)
121
122 context = dbt.context.runtime.generate(
123 compiled_node, self.config, manifest)
124 context.update(extra_context)
125
126 compiled_node.compiled_sql = dbt.clients.jinja.get_rendered(
127 node.get('raw_sql'),
128 context,
129 node)
130
131 compiled_node.compiled = True
132
133 injected_node, _ = prepend_ctes(compiled_node, manifest)
134
135 should_wrap = {NodeType.Test, NodeType.Analysis, NodeType.Operation}
136 if injected_node.resource_type in should_wrap:
137 # data tests get wrapped in count(*)
138 # TODO : move this somewhere more reasonable
139 if 'data' in injected_node.tags and \
140 is_type(injected_node, NodeType.Test):
141 injected_node.wrapped_sql = (
142 "select count(*) from (\n{test_sql}\n) sbq").format(
143 test_sql=injected_node.injected_sql)
144 else:
145 # don't wrap schema tests or analyses.
146 injected_node.wrapped_sql = injected_node.injected_sql
147
148 elif is_type(injected_node, NodeType.Archive):
149 # unfortunately we do everything automagically for
150 # archives. in the future it'd be nice to generate
151 # the SQL at the parser level.
152 pass
153
154 elif(is_type(injected_node, NodeType.Model) and
155 get_materialization(injected_node) == 'ephemeral'):
156 pass
157
158 else:
159 injected_node.wrapped_sql = None
160
161 return injected_node
162
163 def write_manifest_file(self, manifest):
164 """Write the manifest file to disk.
165
166 manifest should be a Manifest.
167 """
168 filename = manifest_file_name
169 manifest_path = os.path.join(self.config.target_path, filename)
170 write_json(manifest_path, manifest.serialize())
171
172 def write_graph_file(self, linker):
173 filename = graph_file_name
174 graph_path = os.path.join(self.config.target_path, filename)
175 linker.write_graph(graph_path)
176
177 def link_node(self, linker, node, manifest):
178 linker.add_node(node.unique_id)
179
180 linker.update_node_data(
181 node.unique_id,
182 node.to_dict())
183
184 for dependency in node.depends_on_nodes:
185 if manifest.nodes.get(dependency):
186 linker.dependency(
187 node.unique_id,
188 (manifest.nodes.get(dependency).unique_id))
189
190 else:
191 dbt.exceptions.dependency_not_found(node, dependency)
192
193 def link_graph(self, linker, manifest):
194 for node in manifest.nodes.values():
195 self.link_node(linker, node, manifest)
196
197 cycle = linker.find_cycles()
198
199 if cycle:
200 raise RuntimeError("Found a cycle: {}".format(cycle))
201
202 def get_all_projects(self):
203 all_projects = {self.config.project_name: self.config}
204 dependency_projects = dbt.utils.dependency_projects(self.config)
205
206 for project_cfg in dependency_projects:
207 name = project_cfg.project_name
208 all_projects[name] = project_cfg
209
210 if dbt.flags.STRICT_MODE:
211 dbt.contracts.project.ProjectList(**all_projects)
212
213 return all_projects
214
215 def _check_resource_uniqueness(cls, manifest):
216 names_resources = {}
217 alias_resources = {}
218
219 for resource, node in manifest.nodes.items():
220 if node.resource_type not in NodeType.refable():
221 continue
222
223 name = node.name
224 alias = "{}.{}".format(node.schema, node.alias)
225
226 existing_node = names_resources.get(name)
227 if existing_node is not None:
228 dbt.exceptions.raise_duplicate_resource_name(
229 existing_node, node)
230
231 existing_alias = alias_resources.get(alias)
232 if existing_alias is not None:
233 dbt.exceptions.raise_ambiguous_alias(
234 existing_alias, node)
235
236 names_resources[name] = node
237 alias_resources[alias] = node
238
239 def compile(self):
240 linker = Linker()
241
242 all_projects = self.get_all_projects()
243
244 manifest = dbt.loader.GraphLoader.load_all(self.config, all_projects)
245
246 self.write_manifest_file(manifest)
247
248 self._check_resource_uniqueness(manifest)
249
250 resource_fqns = manifest.get_resource_fqns()
251 self.config.warn_for_unused_resource_config_paths(resource_fqns,
252 manifest.disabled)
253
254 self.link_graph(linker, manifest)
255
256 stats = defaultdict(int)
257
258 for node_name, node in itertools.chain(
259 manifest.nodes.items(),
260 manifest.macros.items()):
261 stats[node.resource_type] += 1
262
263 self.write_graph_file(linker)
264 print_compile_stats(stats)
265
266 return manifest, linker
267
[end of dbt/compilation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dbt/compilation.py b/dbt/compilation.py
--- a/dbt/compilation.py
+++ b/dbt/compilation.py
@@ -132,7 +132,7 @@
injected_node, _ = prepend_ctes(compiled_node, manifest)
- should_wrap = {NodeType.Test, NodeType.Analysis, NodeType.Operation}
+ should_wrap = {NodeType.Test, NodeType.Operation}
if injected_node.resource_type in should_wrap:
# data tests get wrapped in count(*)
# TODO : move this somewhere more reasonable
|
{"golden_diff": "diff --git a/dbt/compilation.py b/dbt/compilation.py\n--- a/dbt/compilation.py\n+++ b/dbt/compilation.py\n@@ -132,7 +132,7 @@\n \n injected_node, _ = prepend_ctes(compiled_node, manifest)\n \n- should_wrap = {NodeType.Test, NodeType.Analysis, NodeType.Operation}\n+ should_wrap = {NodeType.Test, NodeType.Operation}\n if injected_node.resource_type in should_wrap:\n # data tests get wrapped in count(*)\n # TODO : move this somewhere more reasonable\n", "issue": "dbt trying to parse tags in analyses even when wrapped in `raw` tags\n## Issue\r\n\r\n### Steps to Reproduce\r\n1. Put an unknown tag within `raw` tags in a sql file, in your _analysis_ directory (note that the bug doesn't occur in a model).\r\n```\r\n{% raw %}\r\n{% form %}\r\ndate_part:\r\n type: select\r\n default: day\r\n options: [hour, day, week, month]\r\n{% endform %}\r\n{% endraw %}\r\n```\r\n2. run `dbt compile`\r\n\r\n### Expected Result\r\nThe following should be compiled:\r\n```\r\n{% form %}\r\ndate_part:\r\n type: select\r\n default: day\r\n options: [hour, day, week, month]\r\n{% endform %}\r\n```\r\n\r\n\r\n### Actual Result\r\nError on compilation:\r\n```bash\r\nFound 0 models, 0 tests, 0 archives, 1 analyses, 122 macros, 2 operations, 0 seed files\r\n\r\n10:56:47 | Concurrency: 1 threads (target='dev')\r\n10:56:47 |\r\nEncountered an error:\r\nRuntime Error\r\n Compilation Error in analysis form_tag (analysis/analysis/form_tag.sql)\r\n Encountered unknown tag 'form'.\r\n line 2\r\n {% form %}\r\n```\r\n\r\n### System information\r\nOS: MacOS Mojave\r\nPython 3.7.1\r\ndbt 0.12.1\r\n\r\n\r\n\n", "before_files": [{"content": "import itertools\nimport os\nimport json\nfrom collections import OrderedDict, defaultdict\nimport sqlparse\n\nimport dbt.utils\nimport dbt.include\nimport dbt.tracking\n\nfrom dbt.utils import get_materialization, NodeType, is_type\n\nfrom dbt.linker import Linker\n\nimport dbt.compat\nimport dbt.context.runtime\nimport dbt.contracts.project\nimport dbt.exceptions\nimport dbt.flags\nimport dbt.loader\nimport dbt.config\nfrom dbt.contracts.graph.compiled import CompiledNode, CompiledGraph\n\nfrom dbt.clients.system import write_json\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\ngraph_file_name = 'graph.gpickle'\nmanifest_file_name = 'manifest.json'\n\n\ndef print_compile_stats(stats):\n names = {\n NodeType.Model: 'models',\n NodeType.Test: 'tests',\n NodeType.Archive: 'archives',\n NodeType.Analysis: 'analyses',\n NodeType.Macro: 'macros',\n NodeType.Operation: 'operations',\n NodeType.Seed: 'seed files',\n }\n\n results = {k: 0 for k in names.keys()}\n results.update(stats)\n\n stat_line = \", \".join(\n [\"{} {}\".format(ct, names.get(t)) for t, ct in results.items()])\n\n logger.info(\"Found {}\".format(stat_line))\n\n\ndef _add_prepended_cte(prepended_ctes, new_cte):\n for dct in prepended_ctes:\n if dct['id'] == new_cte['id']:\n dct['sql'] = new_cte['sql']\n return\n prepended_ctes.append(new_cte)\n\n\ndef _extend_prepended_ctes(prepended_ctes, new_prepended_ctes):\n for new_cte in new_prepended_ctes:\n _add_prepended_cte(prepended_ctes, new_cte)\n\n\ndef prepend_ctes(model, manifest):\n model, _, manifest = recursively_prepend_ctes(model, manifest)\n\n return (model, manifest)\n\n\ndef recursively_prepend_ctes(model, manifest):\n if model.extra_ctes_injected:\n return (model, model.extra_ctes, manifest)\n\n if dbt.flags.STRICT_MODE:\n # ensure that all the nodes in this manifest are compiled\n CompiledGraph(**manifest.to_flat_graph())\n\n prepended_ctes = []\n\n for 
cte in model.extra_ctes:\n cte_id = cte['id']\n cte_to_add = manifest.nodes.get(cte_id)\n cte_to_add, new_prepended_ctes, manifest = recursively_prepend_ctes(\n cte_to_add, manifest)\n\n _extend_prepended_ctes(prepended_ctes, new_prepended_ctes)\n new_cte_name = '__dbt__CTE__{}'.format(cte_to_add.get('name'))\n sql = ' {} as (\\n{}\\n)'.format(new_cte_name, cte_to_add.compiled_sql)\n _add_prepended_cte(prepended_ctes, {'id': cte_id, 'sql': sql})\n\n model.prepend_ctes(prepended_ctes)\n\n manifest.nodes[model.unique_id] = model\n\n return (model, prepended_ctes, manifest)\n\n\nclass Compiler(object):\n def __init__(self, config):\n self.config = config\n\n def initialize(self):\n dbt.clients.system.make_directory(self.config.target_path)\n dbt.clients.system.make_directory(self.config.modules_path)\n\n def compile_node(self, node, manifest, extra_context=None):\n if extra_context is None:\n extra_context = {}\n\n logger.debug(\"Compiling {}\".format(node.get('unique_id')))\n\n data = node.to_dict()\n data.update({\n 'compiled': False,\n 'compiled_sql': None,\n 'extra_ctes_injected': False,\n 'extra_ctes': [],\n 'injected_sql': None,\n })\n compiled_node = CompiledNode(**data)\n\n context = dbt.context.runtime.generate(\n compiled_node, self.config, manifest)\n context.update(extra_context)\n\n compiled_node.compiled_sql = dbt.clients.jinja.get_rendered(\n node.get('raw_sql'),\n context,\n node)\n\n compiled_node.compiled = True\n\n injected_node, _ = prepend_ctes(compiled_node, manifest)\n\n should_wrap = {NodeType.Test, NodeType.Analysis, NodeType.Operation}\n if injected_node.resource_type in should_wrap:\n # data tests get wrapped in count(*)\n # TODO : move this somewhere more reasonable\n if 'data' in injected_node.tags and \\\n is_type(injected_node, NodeType.Test):\n injected_node.wrapped_sql = (\n \"select count(*) from (\\n{test_sql}\\n) sbq\").format(\n test_sql=injected_node.injected_sql)\n else:\n # don't wrap schema tests or analyses.\n injected_node.wrapped_sql = injected_node.injected_sql\n\n elif is_type(injected_node, NodeType.Archive):\n # unfortunately we do everything automagically for\n # archives. 
in the future it'd be nice to generate\n # the SQL at the parser level.\n pass\n\n elif(is_type(injected_node, NodeType.Model) and\n get_materialization(injected_node) == 'ephemeral'):\n pass\n\n else:\n injected_node.wrapped_sql = None\n\n return injected_node\n\n def write_manifest_file(self, manifest):\n \"\"\"Write the manifest file to disk.\n\n manifest should be a Manifest.\n \"\"\"\n filename = manifest_file_name\n manifest_path = os.path.join(self.config.target_path, filename)\n write_json(manifest_path, manifest.serialize())\n\n def write_graph_file(self, linker):\n filename = graph_file_name\n graph_path = os.path.join(self.config.target_path, filename)\n linker.write_graph(graph_path)\n\n def link_node(self, linker, node, manifest):\n linker.add_node(node.unique_id)\n\n linker.update_node_data(\n node.unique_id,\n node.to_dict())\n\n for dependency in node.depends_on_nodes:\n if manifest.nodes.get(dependency):\n linker.dependency(\n node.unique_id,\n (manifest.nodes.get(dependency).unique_id))\n\n else:\n dbt.exceptions.dependency_not_found(node, dependency)\n\n def link_graph(self, linker, manifest):\n for node in manifest.nodes.values():\n self.link_node(linker, node, manifest)\n\n cycle = linker.find_cycles()\n\n if cycle:\n raise RuntimeError(\"Found a cycle: {}\".format(cycle))\n\n def get_all_projects(self):\n all_projects = {self.config.project_name: self.config}\n dependency_projects = dbt.utils.dependency_projects(self.config)\n\n for project_cfg in dependency_projects:\n name = project_cfg.project_name\n all_projects[name] = project_cfg\n\n if dbt.flags.STRICT_MODE:\n dbt.contracts.project.ProjectList(**all_projects)\n\n return all_projects\n\n def _check_resource_uniqueness(cls, manifest):\n names_resources = {}\n alias_resources = {}\n\n for resource, node in manifest.nodes.items():\n if node.resource_type not in NodeType.refable():\n continue\n\n name = node.name\n alias = \"{}.{}\".format(node.schema, node.alias)\n\n existing_node = names_resources.get(name)\n if existing_node is not None:\n dbt.exceptions.raise_duplicate_resource_name(\n existing_node, node)\n\n existing_alias = alias_resources.get(alias)\n if existing_alias is not None:\n dbt.exceptions.raise_ambiguous_alias(\n existing_alias, node)\n\n names_resources[name] = node\n alias_resources[alias] = node\n\n def compile(self):\n linker = Linker()\n\n all_projects = self.get_all_projects()\n\n manifest = dbt.loader.GraphLoader.load_all(self.config, all_projects)\n\n self.write_manifest_file(manifest)\n\n self._check_resource_uniqueness(manifest)\n\n resource_fqns = manifest.get_resource_fqns()\n self.config.warn_for_unused_resource_config_paths(resource_fqns,\n manifest.disabled)\n\n self.link_graph(linker, manifest)\n\n stats = defaultdict(int)\n\n for node_name, node in itertools.chain(\n manifest.nodes.items(),\n manifest.macros.items()):\n stats[node.resource_type] += 1\n\n self.write_graph_file(linker)\n print_compile_stats(stats)\n\n return manifest, linker\n", "path": "dbt/compilation.py"}]}
| 3,353 | 125 |
gh_patches_debug_41296
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-768
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nikola needs a -q (quiet) switch for testing and other automated tasks
Just needs to hide NOTICEs
</issue>
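The golden diff further down reaches for logbook's `NullHandler`, so the requested quiet mode can be sketched as pushing such a handler before the build starts. The flag plumbing here is illustrative only; the `NullHandler` usage reflects the real logbook API:

```python
# Minimal sketch of silencing notices with logbook (the library the golden
# diff imports NullHandler from); run_build and its flag are hypothetical.
from logbook import Logger, NullHandler

LOGGER = Logger('Nikola')

def run_build(quiet=False):
    if quiet:
        # NullHandler does not bubble, so once pushed as the application
        # handler it swallows records and NOTICE lines stop reaching stderr.
        NullHandler().push_application()
    LOGGER.notice('Running in strict mode')  # hidden when quiet=True
```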
<code>
[start of nikola/main.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2013 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import print_function, unicode_literals
28 from operator import attrgetter
29 import os
30 import shutil
31 import sys
32 import traceback
33
34 from doit.loader import generate_tasks
35 from doit.cmd_base import TaskLoader
36 from doit.reporter import ExecutedOnlyReporter
37 from doit.doit_cmd import DoitMain
38 from doit.cmd_help import Help as DoitHelp
39 from doit.cmd_run import Run as DoitRun
40 from doit.cmd_clean import Clean as DoitClean
41
42 from . import __version__
43 from .nikola import Nikola
44 from .utils import _reload, sys_decode, LOGGER, STRICT_HANDLER
45
46
47 config = {}
48
49
50 def main(args):
51 if len(args) > 0 and args[0] == 'build' and '--strict' in args:
52 LOGGER.notice('Running in strict mode')
53 STRICT_HANDLER.push_application()
54 global config
55 sys.path.append('')
56 try:
57 import conf
58 _reload(conf)
59 config = conf.__dict__
60 except Exception:
61 if os.path.exists('conf.py'):
62 msg = traceback.format_exc(0).splitlines()[1]
63 LOGGER.error('In conf.py line {0}: {1}'.format(sys.exc_info()[2].tb_lineno, msg))
64 sys.exit(1)
65 config = {}
66
67 site = Nikola(**config)
68 return DoitNikola(site).run(args)
69
70
71 class Help(DoitHelp):
72 """show Nikola usage instead of doit """
73
74 @staticmethod
75 def print_usage(cmds):
76 """print nikola "usage" (basic help) instructions"""
77 print("Nikola is a tool to create static websites and blogs. For full documentation and more information, please visit http://getnikola.com\n\n")
78 print("Available commands:")
79 for cmd in sorted(cmds.values(), key=attrgetter('name')):
80 print(" nikola %-*s %s" % (20, cmd.name, cmd.doc_purpose))
81 print("")
82 print(" nikola help show help / reference")
83 print(" nikola help <command> show command usage")
84 print(" nikola help <task-name> show task usage")
85
86
87 class Build(DoitRun):
88 """expose "run" command as "build" for backward compatibility"""
89 def __init__(self, *args, **kw):
90 opts = list(self.cmd_options)
91 opts.append(
92 {
93 'name': 'strict',
94 'long': 'strict',
95 'default': False,
96 'type': bool,
97 'help': "Fail on things that would normally be warnings.",
98 }
99 )
100 self.cmd_options = tuple(opts)
101 super(Build, self).__init__(*args, **kw)
102
103
104 class Clean(DoitClean):
105 """A clean that removes cache/"""
106
107 def clean_tasks(self, tasks, dryrun):
108 if not dryrun and config:
109 cache_folder = config.get('CACHE_FOLDER', 'cache')
110 if os.path.exists(cache_folder):
111 shutil.rmtree(cache_folder)
112 return super(Clean, self).clean_tasks(tasks, dryrun)
113
114
115 class NikolaTaskLoader(TaskLoader):
116 """custom task loader to get tasks from Nikola instead of dodo.py file"""
117 def __init__(self, nikola):
118 self.nikola = nikola
119
120 def load_tasks(self, cmd, opt_values, pos_args):
121 DOIT_CONFIG = {
122 'reporter': ExecutedOnlyReporter,
123 'default_tasks': ['render_site', 'post_render'],
124 }
125 tasks = generate_tasks('render_site', self.nikola.gen_tasks('render_site', "Task"))
126 latetasks = generate_tasks('post_render', self.nikola.gen_tasks('post_render', "LateTask"))
127 return tasks + latetasks, DOIT_CONFIG
128
129
130 class DoitNikola(DoitMain):
131 # overwite help command
132 DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]
133 TASK_LOADER = NikolaTaskLoader
134
135 def __init__(self, nikola):
136 self.nikola = nikola
137 self.task_loader = self.TASK_LOADER(nikola)
138
139 def get_commands(self):
140 # core doit commands
141 cmds = DoitMain.get_commands(self)
142 # load nikola commands
143 for name, cmd in self.nikola.commands.items():
144 cmds[name] = cmd
145 return cmds
146
147 def run(self, cmd_args):
148 sub_cmds = self.get_commands()
149 args = self.process_args(cmd_args)
150 args = [sys_decode(arg) for arg in args]
151
152 if len(args) == 0 or any(arg in ["--help", '-h'] for arg in args):
153 cmd_args = ['help']
154 args = ['help']
155 # Hide run because Nikola uses build
156 sub_cmds.pop('run')
157
158 if len(args) == 0 or args[0] not in sub_cmds.keys() or \
159 args[0] == 'build':
160 # Check for conf.py before launching run
161 if not self.nikola.configured:
162 LOGGER.error("This command needs to run inside an "
163 "existing Nikola site.")
164 return False
165 return super(DoitNikola, self).run(cmd_args)
166
167 @staticmethod
168 def print_version():
169 print("Nikola version " + __version__)
170
[end of nikola/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/main.py b/nikola/main.py
--- a/nikola/main.py
+++ b/nikola/main.py
@@ -38,6 +38,7 @@
from doit.cmd_help import Help as DoitHelp
from doit.cmd_run import Run as DoitRun
from doit.cmd_clean import Clean as DoitClean
+from logbook import NullHandler
from . import __version__
from .nikola import Nikola
@@ -48,9 +49,14 @@
def main(args):
+ quiet = False
if len(args) > 0 and args[0] == 'build' and '--strict' in args:
LOGGER.notice('Running in strict mode')
STRICT_HANDLER.push_application()
+ if len(args) > 0 and args[0] == 'build' and '-q' in args or '--quiet' in args:
+ nullhandler = NullHandler()
+ nullhandler.push_application()
+ quiet = True
global config
sys.path.append('')
try:
@@ -65,7 +71,7 @@
config = {}
site = Nikola(**config)
- return DoitNikola(site).run(args)
+ return DoitNikola(site, quiet).run(args)
class Help(DoitHelp):
@@ -97,6 +103,16 @@
'help': "Fail on things that would normally be warnings.",
}
)
+ opts.append(
+ {
+ 'name': 'quiet',
+ 'long': 'quiet',
+ 'short': 'q',
+ 'default': False,
+ 'type': bool,
+ 'help': "Run quietly.",
+ }
+ )
self.cmd_options = tuple(opts)
super(Build, self).__init__(*args, **kw)
@@ -114,14 +130,21 @@
class NikolaTaskLoader(TaskLoader):
"""custom task loader to get tasks from Nikola instead of dodo.py file"""
- def __init__(self, nikola):
+ def __init__(self, nikola, quiet=False):
self.nikola = nikola
+ self.quiet = quiet
def load_tasks(self, cmd, opt_values, pos_args):
- DOIT_CONFIG = {
- 'reporter': ExecutedOnlyReporter,
- 'default_tasks': ['render_site', 'post_render'],
- }
+ if self.quiet:
+ DOIT_CONFIG = {
+ 'verbosity': 0,
+ 'reporter': 'zero',
+ }
+ else:
+ DOIT_CONFIG = {
+ 'reporter': ExecutedOnlyReporter,
+ }
+ DOIT_CONFIG['default_tasks'] = ['render_site', 'post_render']
tasks = generate_tasks('render_site', self.nikola.gen_tasks('render_site', "Task"))
latetasks = generate_tasks('post_render', self.nikola.gen_tasks('post_render', "LateTask"))
return tasks + latetasks, DOIT_CONFIG
@@ -132,9 +155,9 @@
DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]
TASK_LOADER = NikolaTaskLoader
- def __init__(self, nikola):
+ def __init__(self, nikola, quiet=False):
self.nikola = nikola
- self.task_loader = self.TASK_LOADER(nikola)
+ self.task_loader = self.TASK_LOADER(nikola, quiet)
def get_commands(self):
# core doit commands
|
{"golden_diff": "diff --git a/nikola/main.py b/nikola/main.py\n--- a/nikola/main.py\n+++ b/nikola/main.py\n@@ -38,6 +38,7 @@\n from doit.cmd_help import Help as DoitHelp\n from doit.cmd_run import Run as DoitRun\n from doit.cmd_clean import Clean as DoitClean\n+from logbook import NullHandler\n \n from . import __version__\n from .nikola import Nikola\n@@ -48,9 +49,14 @@\n \n \n def main(args):\n+ quiet = False\n if len(args) > 0 and args[0] == 'build' and '--strict' in args:\n LOGGER.notice('Running in strict mode')\n STRICT_HANDLER.push_application()\n+ if len(args) > 0 and args[0] == 'build' and '-q' in args or '--quiet' in args:\n+ nullhandler = NullHandler()\n+ nullhandler.push_application()\n+ quiet = True\n global config\n sys.path.append('')\n try:\n@@ -65,7 +71,7 @@\n config = {}\n \n site = Nikola(**config)\n- return DoitNikola(site).run(args)\n+ return DoitNikola(site, quiet).run(args)\n \n \n class Help(DoitHelp):\n@@ -97,6 +103,16 @@\n 'help': \"Fail on things that would normally be warnings.\",\n }\n )\n+ opts.append(\n+ {\n+ 'name': 'quiet',\n+ 'long': 'quiet',\n+ 'short': 'q',\n+ 'default': False,\n+ 'type': bool,\n+ 'help': \"Run quietly.\",\n+ }\n+ )\n self.cmd_options = tuple(opts)\n super(Build, self).__init__(*args, **kw)\n \n@@ -114,14 +130,21 @@\n \n class NikolaTaskLoader(TaskLoader):\n \"\"\"custom task loader to get tasks from Nikola instead of dodo.py file\"\"\"\n- def __init__(self, nikola):\n+ def __init__(self, nikola, quiet=False):\n self.nikola = nikola\n+ self.quiet = quiet\n \n def load_tasks(self, cmd, opt_values, pos_args):\n- DOIT_CONFIG = {\n- 'reporter': ExecutedOnlyReporter,\n- 'default_tasks': ['render_site', 'post_render'],\n- }\n+ if self.quiet:\n+ DOIT_CONFIG = {\n+ 'verbosity': 0,\n+ 'reporter': 'zero',\n+ }\n+ else:\n+ DOIT_CONFIG = {\n+ 'reporter': ExecutedOnlyReporter,\n+ }\n+ DOIT_CONFIG['default_tasks'] = ['render_site', 'post_render']\n tasks = generate_tasks('render_site', self.nikola.gen_tasks('render_site', \"Task\"))\n latetasks = generate_tasks('post_render', self.nikola.gen_tasks('post_render', \"LateTask\"))\n return tasks + latetasks, DOIT_CONFIG\n@@ -132,9 +155,9 @@\n DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]\n TASK_LOADER = NikolaTaskLoader\n \n- def __init__(self, nikola):\n+ def __init__(self, nikola, quiet=False):\n self.nikola = nikola\n- self.task_loader = self.TASK_LOADER(nikola)\n+ self.task_loader = self.TASK_LOADER(nikola, quiet)\n \n def get_commands(self):\n # core doit commands\n", "issue": "Nikola needs a -q (quiet) switch for testing and other automated tasks\nJust needs to hide NOTICEs\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2013 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND 
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function, unicode_literals\nfrom operator import attrgetter\nimport os\nimport shutil\nimport sys\nimport traceback\n\nfrom doit.loader import generate_tasks\nfrom doit.cmd_base import TaskLoader\nfrom doit.reporter import ExecutedOnlyReporter\nfrom doit.doit_cmd import DoitMain\nfrom doit.cmd_help import Help as DoitHelp\nfrom doit.cmd_run import Run as DoitRun\nfrom doit.cmd_clean import Clean as DoitClean\n\nfrom . import __version__\nfrom .nikola import Nikola\nfrom .utils import _reload, sys_decode, LOGGER, STRICT_HANDLER\n\n\nconfig = {}\n\n\ndef main(args):\n if len(args) > 0 and args[0] == 'build' and '--strict' in args:\n LOGGER.notice('Running in strict mode')\n STRICT_HANDLER.push_application()\n global config\n sys.path.append('')\n try:\n import conf\n _reload(conf)\n config = conf.__dict__\n except Exception:\n if os.path.exists('conf.py'):\n msg = traceback.format_exc(0).splitlines()[1]\n LOGGER.error('In conf.py line {0}: {1}'.format(sys.exc_info()[2].tb_lineno, msg))\n sys.exit(1)\n config = {}\n\n site = Nikola(**config)\n return DoitNikola(site).run(args)\n\n\nclass Help(DoitHelp):\n \"\"\"show Nikola usage instead of doit \"\"\"\n\n @staticmethod\n def print_usage(cmds):\n \"\"\"print nikola \"usage\" (basic help) instructions\"\"\"\n print(\"Nikola is a tool to create static websites and blogs. For full documentation and more information, please visit http://getnikola.com\\n\\n\")\n print(\"Available commands:\")\n for cmd in sorted(cmds.values(), key=attrgetter('name')):\n print(\" nikola %-*s %s\" % (20, cmd.name, cmd.doc_purpose))\n print(\"\")\n print(\" nikola help show help / reference\")\n print(\" nikola help <command> show command usage\")\n print(\" nikola help <task-name> show task usage\")\n\n\nclass Build(DoitRun):\n \"\"\"expose \"run\" command as \"build\" for backward compatibility\"\"\"\n def __init__(self, *args, **kw):\n opts = list(self.cmd_options)\n opts.append(\n {\n 'name': 'strict',\n 'long': 'strict',\n 'default': False,\n 'type': bool,\n 'help': \"Fail on things that would normally be warnings.\",\n }\n )\n self.cmd_options = tuple(opts)\n super(Build, self).__init__(*args, **kw)\n\n\nclass Clean(DoitClean):\n \"\"\"A clean that removes cache/\"\"\"\n\n def clean_tasks(self, tasks, dryrun):\n if not dryrun and config:\n cache_folder = config.get('CACHE_FOLDER', 'cache')\n if os.path.exists(cache_folder):\n shutil.rmtree(cache_folder)\n return super(Clean, self).clean_tasks(tasks, dryrun)\n\n\nclass NikolaTaskLoader(TaskLoader):\n \"\"\"custom task loader to get tasks from Nikola instead of dodo.py file\"\"\"\n def __init__(self, nikola):\n self.nikola = nikola\n\n def load_tasks(self, cmd, opt_values, pos_args):\n DOIT_CONFIG = {\n 'reporter': ExecutedOnlyReporter,\n 'default_tasks': ['render_site', 'post_render'],\n }\n tasks = generate_tasks('render_site', self.nikola.gen_tasks('render_site', \"Task\"))\n latetasks = generate_tasks('post_render', self.nikola.gen_tasks('post_render', \"LateTask\"))\n return tasks + latetasks, DOIT_CONFIG\n\n\nclass DoitNikola(DoitMain):\n # overwite help command\n DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]\n TASK_LOADER = NikolaTaskLoader\n\n def __init__(self, nikola):\n 
self.nikola = nikola\n self.task_loader = self.TASK_LOADER(nikola)\n\n def get_commands(self):\n # core doit commands\n cmds = DoitMain.get_commands(self)\n # load nikola commands\n for name, cmd in self.nikola.commands.items():\n cmds[name] = cmd\n return cmds\n\n def run(self, cmd_args):\n sub_cmds = self.get_commands()\n args = self.process_args(cmd_args)\n args = [sys_decode(arg) for arg in args]\n\n if len(args) == 0 or any(arg in [\"--help\", '-h'] for arg in args):\n cmd_args = ['help']\n args = ['help']\n # Hide run because Nikola uses build\n sub_cmds.pop('run')\n\n if len(args) == 0 or args[0] not in sub_cmds.keys() or \\\n args[0] == 'build':\n # Check for conf.py before launching run\n if not self.nikola.configured:\n LOGGER.error(\"This command needs to run inside an \"\n \"existing Nikola site.\")\n return False\n return super(DoitNikola, self).run(cmd_args)\n\n @staticmethod\n def print_version():\n print(\"Nikola version \" + __version__)\n", "path": "nikola/main.py"}]}
| 2,371 | 794 |
gh_patches_debug_13268
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-987
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scheduler: Cannot drag and drop items
I tried it out on this event: https://open-event.herokuapp.com/events/59/scheduler/
</issue>
<code>
[start of open_event/api/sessions.py]
1 from flask.ext.restplus import Resource, Namespace
2 from sqlalchemy.orm.collections import InstrumentedList
3
4 from open_event.models.session import Session as SessionModel
5 from open_event.models.track import Track as TrackModel
6 from open_event.models.microlocation import Microlocation as MicrolocationModel
7 from open_event.models.speaker import Speaker as SpeakerModel
8
9 from .helpers.helpers import get_paginated_list, requires_auth, \
10 save_db_model, get_object_in_event
11 from .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \
12 PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES
13 from .helpers import custom_fields as fields
14 from .helpers.special_fields import SessionLanguageField, SessionStateField
15
16 api = Namespace('sessions', description='Sessions', path='/')
17
18 # Create models
19 SESSION_TRACK = api.model('SessionTrack', {
20 'id': fields.Integer(required=True),
21 'name': fields.String(),
22 })
23
24 SESSION_SPEAKER = api.model('SessionSpeaker', {
25 'id': fields.Integer(required=True),
26 'name': fields.String(),
27 'organisation': fields.String()
28 })
29
30 SESSION_MICROLOCATION = api.model('SessionMicrolocation', {
31 'id': fields.Integer(required=True),
32 'name': fields.String(),
33 })
34
35 SESSION = api.model('Session', {
36 'id': fields.Integer(required=True),
37 'title': fields.String(required=True),
38 'subtitle': fields.String(),
39 'short_abstract': fields.String(),
40 'long_abstract': fields.String(required=True),
41 'comments': fields.String(),
42 'start_time': fields.DateTime(required=True),
43 'end_time': fields.DateTime(required=True),
44 'track': fields.Nested(SESSION_TRACK, allow_null=True),
45 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),
46 'language': SessionLanguageField(),
47 'microlocation': fields.Nested(SESSION_MICROLOCATION, allow_null=True),
48 'slides': fields.String(),
49 'video': fields.String(),
50 'audio': fields.String(),
51 'signup_url': fields.Uri(),
52 'state': SessionStateField()
53 })
54
55 SESSION_PAGINATED = api.clone('SessionPaginated', PAGINATED_MODEL, {
56 'results': fields.List(fields.Nested(SESSION))
57 })
58
59 SESSION_POST = api.clone('SessionPost', SESSION, {
60 'track_id': fields.Integer(),
61 'speaker_ids': fields.List(fields.Integer()),
62 'microlocation_id': fields.Integer()
63 })
64 del SESSION_POST['id']
65 del SESSION_POST['track']
66 del SESSION_POST['speakers']
67 del SESSION_POST['microlocation']
68
69
70 # Create DAO
71 class SessionDAO(ServiceDAO):
72 def _delete_fields(self, data):
73 del data['speaker_ids']
74 del data['track_id']
75 del data['microlocation_id']
76 data['start_time'] = SESSION_POST['start_time'].from_str(
77 data['start_time'])
78 data['end_time'] = SESSION_POST['end_time'].from_str(data['end_time'])
79 return data
80
81 def get_object(self, model, sid, event_id):
82 """
83 returns object (model). Checks if object is in same event
84 """
85 if sid is None:
86 return None
87 return get_object_in_event(model, sid, event_id)
88
89 def fix_payload_post(self, event_id, data):
90 """
91 Fixes payload of POST request
92 """
93 data['track'] = self.get_object(TrackModel, data['track_id'], event_id)
94 data['microlocation'] = self.get_object(MicrolocationModel, data['microlocation_id'], event_id)
95 data['event_id'] = event_id
96 data['speakers'] = InstrumentedList(
97 SpeakerModel.query.get(_) for _ in data['speaker_ids']
98 if self.get_object(SpeakerModel, _, event_id) is not None
99 )
100 data = self._delete_fields(data)
101 return data
102
103 def update(self, event_id, service_id, data):
104 data = self.validate(data)
105 data_copy = data.copy()
106 data_copy = self.fix_payload_post(event_id, data_copy)
107 data = self._delete_fields(data)
108 obj = ServiceDAO.update(self, event_id, service_id, data)
109 obj.track = data_copy['track']
110 obj.microlocation = data_copy['microlocation']
111 obj.speakers = data_copy['speakers']
112 obj = save_db_model(obj, SessionModel.__name__, event_id)
113 return obj
114
115 def create(self, event_id, data, url):
116 data = self.validate(data)
117 payload = self.fix_payload_post(event_id, data)
118 return ServiceDAO.create(self, event_id, payload, url, validate=False)
119
120
121 DAO = SessionDAO(SessionModel, SESSION_POST)
122
123
124 @api.route('/events/<int:event_id>/sessions/<int:session_id>')
125 @api.response(404, 'Session not found')
126 @api.response(400, 'Session does not belong to event')
127 class Session(Resource):
128 @api.doc('get_session')
129 @api.marshal_with(SESSION)
130 def get(self, event_id, session_id):
131 """Fetch a session given its id"""
132 return DAO.get(event_id, session_id)
133
134 @requires_auth
135 @api.doc('delete_session')
136 @api.marshal_with(SESSION)
137 def delete(self, event_id, session_id):
138 """Delete a session given its id"""
139 return DAO.delete(event_id, session_id)
140
141 @requires_auth
142 @api.doc('update_session', responses=PUT_RESPONSES)
143 @api.marshal_with(SESSION)
144 @api.expect(SESSION_POST)
145 def put(self, event_id, session_id):
146 """Update a session given its id"""
147 return DAO.update(event_id, session_id, self.api.payload)
148
149
150 @api.route('/events/<int:event_id>/sessions')
151 class SessionList(Resource):
152 @api.doc('list_sessions')
153 @api.marshal_list_with(SESSION)
154 def get(self, event_id):
155 """List all sessions"""
156 return DAO.list(event_id)
157
158 @requires_auth
159 @api.doc('create_session', responses=POST_RESPONSES)
160 @api.marshal_with(SESSION)
161 @api.expect(SESSION_POST)
162 def post(self, event_id):
163 """Create a session"""
164 return DAO.create(
165 event_id,
166 self.api.payload,
167 self.api.url_for(self, event_id=event_id)
168 )
169
170
171 @api.route('/events/<int:event_id>/sessions/page')
172 class SessionListPaginated(Resource, PaginatedResourceBase):
173 @api.doc('list_sessions_paginated', params=PAGE_PARAMS)
174 @api.marshal_with(SESSION_PAGINATED)
175 def get(self, event_id):
176 """List sessions in a paginated manner"""
177 return get_paginated_list(
178 SessionModel,
179 self.api.url_for(self, event_id=event_id),
180 args=self.parser.parse_args(),
181 event_id=event_id
182 )
183
[end of open_event/api/sessions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/open_event/api/sessions.py b/open_event/api/sessions.py
--- a/open_event/api/sessions.py
+++ b/open_event/api/sessions.py
@@ -37,10 +37,10 @@
'title': fields.String(required=True),
'subtitle': fields.String(),
'short_abstract': fields.String(),
- 'long_abstract': fields.String(required=True),
+ 'long_abstract': fields.String(),
'comments': fields.String(),
- 'start_time': fields.DateTime(required=True),
- 'end_time': fields.DateTime(required=True),
+ 'start_time': fields.DateTime(),
+ 'end_time': fields.DateTime(),
'track': fields.Nested(SESSION_TRACK, allow_null=True),
'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),
'language': SessionLanguageField(),
|
{"golden_diff": "diff --git a/open_event/api/sessions.py b/open_event/api/sessions.py\n--- a/open_event/api/sessions.py\n+++ b/open_event/api/sessions.py\n@@ -37,10 +37,10 @@\n 'title': fields.String(required=True),\n 'subtitle': fields.String(),\n 'short_abstract': fields.String(),\n- 'long_abstract': fields.String(required=True),\n+ 'long_abstract': fields.String(),\n 'comments': fields.String(),\n- 'start_time': fields.DateTime(required=True),\n- 'end_time': fields.DateTime(required=True),\n+ 'start_time': fields.DateTime(),\n+ 'end_time': fields.DateTime(),\n 'track': fields.Nested(SESSION_TRACK, allow_null=True),\n 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),\n 'language': SessionLanguageField(),\n", "issue": "Scheduler: Cannot drag and drop items\nI tried out event: https://open-event.herokuapp.com/events/59/scheduler/\n\n", "before_files": [{"content": "from flask.ext.restplus import Resource, Namespace\nfrom sqlalchemy.orm.collections import InstrumentedList\n\nfrom open_event.models.session import Session as SessionModel\nfrom open_event.models.track import Track as TrackModel\nfrom open_event.models.microlocation import Microlocation as MicrolocationModel\nfrom open_event.models.speaker import Speaker as SpeakerModel\n\nfrom .helpers.helpers import get_paginated_list, requires_auth, \\\n save_db_model, get_object_in_event\nfrom .helpers.utils import PAGINATED_MODEL, PaginatedResourceBase, ServiceDAO, \\\n PAGE_PARAMS, POST_RESPONSES, PUT_RESPONSES\nfrom .helpers import custom_fields as fields\nfrom .helpers.special_fields import SessionLanguageField, SessionStateField\n\napi = Namespace('sessions', description='Sessions', path='/')\n\n# Create models\nSESSION_TRACK = api.model('SessionTrack', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n})\n\nSESSION_SPEAKER = api.model('SessionSpeaker', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n 'organisation': fields.String()\n})\n\nSESSION_MICROLOCATION = api.model('SessionMicrolocation', {\n 'id': fields.Integer(required=True),\n 'name': fields.String(),\n})\n\nSESSION = api.model('Session', {\n 'id': fields.Integer(required=True),\n 'title': fields.String(required=True),\n 'subtitle': fields.String(),\n 'short_abstract': fields.String(),\n 'long_abstract': fields.String(required=True),\n 'comments': fields.String(),\n 'start_time': fields.DateTime(required=True),\n 'end_time': fields.DateTime(required=True),\n 'track': fields.Nested(SESSION_TRACK, allow_null=True),\n 'speakers': fields.List(fields.Nested(SESSION_SPEAKER)),\n 'language': SessionLanguageField(),\n 'microlocation': fields.Nested(SESSION_MICROLOCATION, allow_null=True),\n 'slides': fields.String(),\n 'video': fields.String(),\n 'audio': fields.String(),\n 'signup_url': fields.Uri(),\n 'state': SessionStateField()\n})\n\nSESSION_PAGINATED = api.clone('SessionPaginated', PAGINATED_MODEL, {\n 'results': fields.List(fields.Nested(SESSION))\n})\n\nSESSION_POST = api.clone('SessionPost', SESSION, {\n 'track_id': fields.Integer(),\n 'speaker_ids': fields.List(fields.Integer()),\n 'microlocation_id': fields.Integer()\n})\ndel SESSION_POST['id']\ndel SESSION_POST['track']\ndel SESSION_POST['speakers']\ndel SESSION_POST['microlocation']\n\n\n# Create DAO\nclass SessionDAO(ServiceDAO):\n def _delete_fields(self, data):\n del data['speaker_ids']\n del data['track_id']\n del data['microlocation_id']\n data['start_time'] = SESSION_POST['start_time'].from_str(\n data['start_time'])\n data['end_time'] = 
SESSION_POST['end_time'].from_str(data['end_time'])\n return data\n\n def get_object(self, model, sid, event_id):\n \"\"\"\n returns object (model). Checks if object is in same event\n \"\"\"\n if sid is None:\n return None\n return get_object_in_event(model, sid, event_id)\n\n def fix_payload_post(self, event_id, data):\n \"\"\"\n Fixes payload of POST request\n \"\"\"\n data['track'] = self.get_object(TrackModel, data['track_id'], event_id)\n data['microlocation'] = self.get_object(MicrolocationModel, data['microlocation_id'], event_id)\n data['event_id'] = event_id\n data['speakers'] = InstrumentedList(\n SpeakerModel.query.get(_) for _ in data['speaker_ids']\n if self.get_object(SpeakerModel, _, event_id) is not None\n )\n data = self._delete_fields(data)\n return data\n\n def update(self, event_id, service_id, data):\n data = self.validate(data)\n data_copy = data.copy()\n data_copy = self.fix_payload_post(event_id, data_copy)\n data = self._delete_fields(data)\n obj = ServiceDAO.update(self, event_id, service_id, data)\n obj.track = data_copy['track']\n obj.microlocation = data_copy['microlocation']\n obj.speakers = data_copy['speakers']\n obj = save_db_model(obj, SessionModel.__name__, event_id)\n return obj\n\n def create(self, event_id, data, url):\n data = self.validate(data)\n payload = self.fix_payload_post(event_id, data)\n return ServiceDAO.create(self, event_id, payload, url, validate=False)\n\n\nDAO = SessionDAO(SessionModel, SESSION_POST)\n\n\[email protected]('/events/<int:event_id>/sessions/<int:session_id>')\[email protected](404, 'Session not found')\[email protected](400, 'Session does not belong to event')\nclass Session(Resource):\n @api.doc('get_session')\n @api.marshal_with(SESSION)\n def get(self, event_id, session_id):\n \"\"\"Fetch a session given its id\"\"\"\n return DAO.get(event_id, session_id)\n\n @requires_auth\n @api.doc('delete_session')\n @api.marshal_with(SESSION)\n def delete(self, event_id, session_id):\n \"\"\"Delete a session given its id\"\"\"\n return DAO.delete(event_id, session_id)\n\n @requires_auth\n @api.doc('update_session', responses=PUT_RESPONSES)\n @api.marshal_with(SESSION)\n @api.expect(SESSION_POST)\n def put(self, event_id, session_id):\n \"\"\"Update a session given its id\"\"\"\n return DAO.update(event_id, session_id, self.api.payload)\n\n\[email protected]('/events/<int:event_id>/sessions')\nclass SessionList(Resource):\n @api.doc('list_sessions')\n @api.marshal_list_with(SESSION)\n def get(self, event_id):\n \"\"\"List all sessions\"\"\"\n return DAO.list(event_id)\n\n @requires_auth\n @api.doc('create_session', responses=POST_RESPONSES)\n @api.marshal_with(SESSION)\n @api.expect(SESSION_POST)\n def post(self, event_id):\n \"\"\"Create a session\"\"\"\n return DAO.create(\n event_id,\n self.api.payload,\n self.api.url_for(self, event_id=event_id)\n )\n\n\[email protected]('/events/<int:event_id>/sessions/page')\nclass SessionListPaginated(Resource, PaginatedResourceBase):\n @api.doc('list_sessions_paginated', params=PAGE_PARAMS)\n @api.marshal_with(SESSION_PAGINATED)\n def get(self, event_id):\n \"\"\"List sessions in a paginated manner\"\"\"\n return get_paginated_list(\n SessionModel,\n self.api.url_for(self, event_id=event_id),\n args=self.parser.parse_args(),\n event_id=event_id\n )\n", "path": "open_event/api/sessions.py"}]}
| 2,468 | 179 |
gh_patches_debug_9937
|
rasdani/github-patches
|
git_diff
|
liberapay__liberapay.com-530
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SMTP exceptions aren't caught
- https://sentry.io/share/issue/3132343535362e313939323633313931/
- https://sentry.io/share/issue/3132343535362e313939323533393935/
- https://sentry.io/share/issue/3132343535362e313939323533383839/
```
SMTPServerDisconnected: please run connect() first
File "site-packages/algorithm.py", line 321, in loop
new_state = function(**deps.as_kwargs)
File "liberapay/security/authentication.py", line 136, in authenticate_user_if_possible
p = sign_in_with_form_data(body, state)
File "liberapay/security/authentication.py", line 88, in sign_in_with_form_data
p.add_email(email, cursor=c)
File "liberapay/models/participant.py", line 618, in add_email
r = self.send_email('verification', email=email, link=link.format(**locals()))
File "liberapay/models/participant.py", line 733, in send_email
n = website.mailer.send(**message)
File "mailshake/mailers/base.py", line 41, in send
return self.send_messages(EmailMessage(*args, **kwargs))
File "mailshake/mailers/smtp.py", line 105, in send_messages
sent = self._send(message)
File "mailshake/mailers/smtp.py", line 126, in _send
from_email, recipients, email_message.as_bytes()
File "python2.7/smtplib.py", line 733, in sendmail
(code, resp) = self.mail(from_addr, esmtp_opts)
File "python2.7/smtplib.py", line 480, in mail
self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist))
File "python2.7/smtplib.py", line 341, in putcmd
self.send(str)
File "python2.7/smtplib.py", line 333, in send
raise SMTPServerDisconnected('please run connect() first')
```
```
SMTPRecipientsRefused: {u"~3921 <[email protected]|ping-n21127.0.0.1||`ping-c21127.0.0.1`#'|ping-n21127.0.0.1||`ping-c21127.0.0.1`#\\>": (501, 'Syntax error')}
File "site-packages/algorithm.py", line 321, in loop
new_state = function(**deps.as_kwargs)
File "liberapay/security/authentication.py", line 136, in authenticate_user_if_possible
p = sign_in_with_form_data(body, state)
File "liberapay/security/authentication.py", line 88, in sign_in_with_form_data
p.add_email(email, cursor=c)
File "liberapay/models/participant.py", line 618, in add_email
r = self.send_email('verification', email=email, link=link.format(**locals()))
File "liberapay/models/participant.py", line 733, in send_email
n = website.mailer.send(**message)
File "mailshake/mailers/base.py", line 41, in send
return self.send_messages(EmailMessage(*args, **kwargs))
File "mailshake/mailers/smtp.py", line 105, in send_messages
sent = self._send(message)
File "mailshake/mailers/smtp.py", line 126, in _send
from_email, recipients, email_message.as_bytes()
File "python2.7/smtplib.py", line 747, in sendmail
raise SMTPRecipientsRefused(senderrs)
```
```
SMTPDataError: (554, 'Recipient format is invalid: \'["[email protected]\'and3405=3406--"]\'')
File "site-packages/algorithm.py", line 321, in loop
new_state = function(**deps.as_kwargs)
File "liberapay/security/authentication.py", line 136, in authenticate_user_if_possible
p = sign_in_with_form_data(body, state)
File "liberapay/security/authentication.py", line 88, in sign_in_with_form_data
p.add_email(email, cursor=c)
File "liberapay/models/participant.py", line 618, in add_email
r = self.send_email('verification', email=email, link=link.format(**locals()))
File "liberapay/models/participant.py", line 733, in send_email
n = website.mailer.send(**message)
File "mailshake/mailers/base.py", line 41, in send
return self.send_messages(EmailMessage(*args, **kwargs))
File "mailshake/mailers/smtp.py", line 105, in send_messages
sent = self._send(message)
File "mailshake/mailers/smtp.py", line 126, in _send
from_email, recipients, email_message.as_bytes()
File "python2.7/smtplib.py", line 751, in sendmail
raise SMTPDataError(code, resp)
```
</issue>
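All three tracebacks terminate inside `website.mailer.send`, that is, after `Participant.add_email` has already accepted the submitted address, and at least two of them involve obviously malformed, injection-style addresses. The `EMAIL_RE` defined in `liberapay/constants.py` below (line 66 of the listing) is permissive enough to let such strings through. A quick check, as a sketch assuming Python 3 and using the address from the `SMTPDataError` traceback:

```python
import re

# The pattern currently in liberapay/constants.py (line 66 of the listing below).
OLD_EMAIL_RE = re.compile(r'^[^@]+@[^@]+\.[^@]+$')

# Accepted, even though the domain part is clearly not a valid hostname.
print(bool(OLD_EMAIL_RE.match("[email protected]'and3405=3406--")))  # True
```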
<code>
[start of liberapay/constants.py]
1 # coding: utf8
2 from __future__ import print_function, unicode_literals
3
4 from collections import namedtuple, OrderedDict
5 from datetime import date, datetime, timedelta
6 from decimal import Decimal, ROUND_UP
7 import re
8
9 from jinja2 import StrictUndefined
10 from pando.utils import utc
11
12
13 class CustomUndefined(StrictUndefined):
14 __bool__ = __nonzero__ = lambda self: False
15
16 def __str__(self):
17 try:
18 self._fail_with_undefined_error()
19 except Exception as e:
20 self._tell_sentry(e, {})
21 return ''
22
23 __unicode__ = __str__
24
25
26 def check_bits(bits):
27 assert len(set(bits)) == len(bits) # no duplicates
28 assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit
29
30
31 Event = namedtuple('Event', 'name bit title')
32
33 Fees = namedtuple('Fees', ('var', 'fix'))
34
35
36 _ = lambda a: a
37
38 ASCII_ALLOWED_IN_USERNAME = set("0123456789"
39 "abcdefghijklmnopqrstuvwxyz"
40 "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
41 "-_")
42
43 AVATAR_QUERY = '?s=160&default=retro'
44 AVATAR_SOURCES = 'libravatar bitbucket facebook github google twitter'.split()
45
46 BIRTHDAY = date(2015, 5, 22)
47
48 D_CENT = Decimal('0.01')
49 D_INF = Decimal('inf')
50 D_UNIT = Decimal('1.00')
51 D_ZERO = Decimal('0.00')
52
53 DONATION_LIMITS_WEEKLY = (Decimal('0.01'), Decimal('100.00'))
54 DONATION_LIMITS = {
55 'weekly': DONATION_LIMITS_WEEKLY,
56 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)
57 for x in DONATION_LIMITS_WEEKLY),
58 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)
59 for x in DONATION_LIMITS_WEEKLY),
60 }
61 DONATION_WEEKLY_MIN, DONATION_WEEKLY_MAX = DONATION_LIMITS_WEEKLY
62
63 ELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}
64
65 EMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)
66 EMAIL_RE = re.compile(r'^[^@]+@[^@]+\.[^@]+$')
67
68 EPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)
69
70 EVENTS = [
71 Event('income', 1, _("When I receive money")),
72 Event('low_balance', 2, _("When there isn't enough money in my wallet to cover my donations")),
73 Event('withdrawal_created', 4, _("When a transfer to my bank account is initiated")),
74 Event('withdrawal_failed', 8, _("When a transfer to my bank account fails")),
75 Event('pledgee_joined', 16, _("When someone I pledge to joins Liberapay")),
76 Event('team_invite', 32, _("When someone invites me to join a team")),
77 Event('payin_bankwire_failed', 64, _("When a bank wire transfer to my Liberapay wallet fails")),
78 Event('payin_bankwire_succeeded', 128, _("When a bank wire transfer to my Liberapay wallet succeeds")),
79 ]
80 check_bits([e.bit for e in EVENTS])
81 EVENTS = OrderedDict((e.name, e) for e in EVENTS)
82 EVENTS_S = ' '.join(EVENTS.keys())
83
84 # https://www.mangopay.com/pricing/
85 FEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), Decimal(0)) # 0.5%
86 FEE_PAYIN_CARD = Fees(Decimal('0.018'), Decimal('0.18')) # 1.8% + €0.18
87 FEE_PAYOUT = Fees(Decimal(0), Decimal(0))
88 FEE_PAYOUT_OUTSIDE_SEPA = Fees(Decimal(0), Decimal('2.5'))
89 FEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%
90 FEE_VAT = Decimal('0.17') # 17% (Luxembourg rate)
91
92 JINJA_ENV_COMMON = dict(
93 trim_blocks=True, lstrip_blocks=True,
94 line_statement_prefix='%',
95 # undefined=CustomUndefined,
96 )
97
98 # https://docs.mangopay.com/api-references/kyc-rules/
99 KYC_PAYIN_YEARLY_THRESHOLD = Decimal('2500')
100 KYC_PAYOUT_YEARLY_THRESHOLD = Decimal('1000')
101
102 LAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)
103
104 PASSWORD_MIN_SIZE = 8
105 PASSWORD_MAX_SIZE = 150
106
107 PAYIN_BANK_WIRE_MIN = Decimal('2.00')
108 PAYIN_CARD_MIN = Decimal("15.00") # fee ≈ 3.5%
109 PAYIN_CARD_TARGET = Decimal("92.00") # fee ≈ 2.33%
110
111 PERIOD_CONVERSION_RATES = {
112 'weekly': Decimal(1),
113 'monthly': Decimal(12) / Decimal(52),
114 'yearly': Decimal(1) / Decimal(52),
115 }
116
117 POSTAL_ADDRESS_KEYS = (
118 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'
119 )
120
121 PRIVACY_FIELDS = OrderedDict([
122 ('hide_giving', _("Hide total giving from others.")),
123 ('hide_receiving', _("Hide total receiving from others.")),
124 ('hide_from_search', _("Hide myself from search results on Liberapay.")),
125 ('profile_noindex', _("Tell web search engines not to index my profile.")),
126 ('hide_from_lists', _("Prevent my profile from being listed on Liberapay.")),
127 ])
128 PRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())
129
130 PRIVILEGES = dict(admin=1, run_payday=2)
131 check_bits(list(PRIVILEGES.values()))
132
133 QUARANTINE = timedelta(weeks=4)
134
135 SEPA_ZONE = set("""
136 AT BE BG CH CY CZ DE DK EE ES ES FI FR GB GI GR HR HU IE IS IT LI LT LU LV
137 MC MT NL NO PL PT RO SE SI SK
138 """.split())
139
140 SESSION = str('session') # bytes in python2, unicode in python3
141 SESSION_REFRESH = timedelta(hours=1)
142 SESSION_TIMEOUT = timedelta(hours=6)
143
144 STANDARD_TIPS = (
145 (_("Symbolic ({0} per week)"), Decimal('0.01')),
146 (_("Small ({0} per week)"), Decimal('0.25')),
147 (_("Medium ({0} per week)"), Decimal('1.00')),
148 (_("Large ({0} per week)"), Decimal('5.00')),
149 (_("Maximum ({0} per week)"), DONATION_WEEKLY_MAX),
150 )
151
152 USERNAME_MAX_SIZE = 32
153
154 del _
155
[end of liberapay/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/liberapay/constants.py b/liberapay/constants.py
--- a/liberapay/constants.py
+++ b/liberapay/constants.py
@@ -63,7 +63,16 @@
ELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}
EMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)
-EMAIL_RE = re.compile(r'^[^@]+@[^@]+\.[^@]+$')
+EMAIL_RE = re.compile(r'''
+ # This is the regexp used by MangoPay (as of February 2017).
+ # It rejects some valid but exotic addresses.
+ # https://en.wikipedia.org/wiki/Email_address
+ ^
+ [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*
+ @
+ ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?
+ $
+''', re.VERBOSE)
EPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)
|
{"golden_diff": "diff --git a/liberapay/constants.py b/liberapay/constants.py\n--- a/liberapay/constants.py\n+++ b/liberapay/constants.py\n@@ -63,7 +63,16 @@\n ELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}\n \n EMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)\n-EMAIL_RE = re.compile(r'^[^@]+@[^@]+\\.[^@]+$')\n+EMAIL_RE = re.compile(r'''\n+ # This is the regexp used by MangoPay (as of February 2017).\n+ # It rejects some valid but exotic addresses.\n+ # https://en.wikipedia.org/wiki/Email_address\n+ ^\n+ [a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*\n+ @\n+ ([a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\\.)+[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?\n+ $\n+''', re.VERBOSE)\n \n EPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)\n", "issue": "SMTP exceptions aren't caught\n- https://sentry.io/share/issue/3132343535362e313939323633313931/\r\n- https://sentry.io/share/issue/3132343535362e313939323533393935/\r\n- https://sentry.io/share/issue/3132343535362e313939323533383839/\r\n\r\n```\r\nSMTPServerDisconnected: please run connect() first\r\n File \"site-packages/algorithm.py\", line 321, in loop\r\n new_state = function(**deps.as_kwargs)\r\n File \"liberapay/security/authentication.py\", line 136, in authenticate_user_if_possible\r\n p = sign_in_with_form_data(body, state)\r\n File \"liberapay/security/authentication.py\", line 88, in sign_in_with_form_data\r\n p.add_email(email, cursor=c)\r\n File \"liberapay/models/participant.py\", line 618, in add_email\r\n r = self.send_email('verification', email=email, link=link.format(**locals()))\r\n File \"liberapay/models/participant.py\", line 733, in send_email\r\n n = website.mailer.send(**message)\r\n File \"mailshake/mailers/base.py\", line 41, in send\r\n return self.send_messages(EmailMessage(*args, **kwargs))\r\n File \"mailshake/mailers/smtp.py\", line 105, in send_messages\r\n sent = self._send(message)\r\n File \"mailshake/mailers/smtp.py\", line 126, in _send\r\n from_email, recipients, email_message.as_bytes()\r\n File \"python2.7/smtplib.py\", line 733, in sendmail\r\n (code, resp) = self.mail(from_addr, esmtp_opts)\r\n File \"python2.7/smtplib.py\", line 480, in mail\r\n self.putcmd(\"mail\", \"FROM:%s%s\" % (quoteaddr(sender), optionlist))\r\n File \"python2.7/smtplib.py\", line 341, in putcmd\r\n self.send(str)\r\n File \"python2.7/smtplib.py\", line 333, in send\r\n raise SMTPServerDisconnected('please run connect() first')\r\n```\r\n\r\n```\r\nSMTPRecipientsRefused: {u\"~3921 <[email protected]|ping-n21127.0.0.1||`ping-c21127.0.0.1`#'|ping-n21127.0.0.1||`ping-c21127.0.0.1`#\\\\>\": (501, 'Syntax error')}\r\n File \"site-packages/algorithm.py\", line 321, in loop\r\n new_state = function(**deps.as_kwargs)\r\n File \"liberapay/security/authentication.py\", line 136, in authenticate_user_if_possible\r\n p = sign_in_with_form_data(body, state)\r\n File \"liberapay/security/authentication.py\", line 88, in sign_in_with_form_data\r\n p.add_email(email, cursor=c)\r\n File \"liberapay/models/participant.py\", line 618, in add_email\r\n r = self.send_email('verification', email=email, link=link.format(**locals()))\r\n File \"liberapay/models/participant.py\", line 733, in send_email\r\n n = website.mailer.send(**message)\r\n File \"mailshake/mailers/base.py\", line 41, in send\r\n return self.send_messages(EmailMessage(*args, **kwargs))\r\n File \"mailshake/mailers/smtp.py\", line 105, in send_messages\r\n sent = self._send(message)\r\n File \"mailshake/mailers/smtp.py\", line 126, in _send\r\n from_email, recipients, 
email_message.as_bytes()\r\n File \"python2.7/smtplib.py\", line 747, in sendmail\r\n raise SMTPRecipientsRefused(senderrs)\r\n```\r\n\r\n```\r\nSMTPDataError: (554, 'Recipient format is invalid: \\'[\"[email protected]\\'and3405=3406--\"]\\'')\r\n File \"site-packages/algorithm.py\", line 321, in loop\r\n new_state = function(**deps.as_kwargs)\r\n File \"liberapay/security/authentication.py\", line 136, in authenticate_user_if_possible\r\n p = sign_in_with_form_data(body, state)\r\n File \"liberapay/security/authentication.py\", line 88, in sign_in_with_form_data\r\n p.add_email(email, cursor=c)\r\n File \"liberapay/models/participant.py\", line 618, in add_email\r\n r = self.send_email('verification', email=email, link=link.format(**locals()))\r\n File \"liberapay/models/participant.py\", line 733, in send_email\r\n n = website.mailer.send(**message)\r\n File \"mailshake/mailers/base.py\", line 41, in send\r\n return self.send_messages(EmailMessage(*args, **kwargs))\r\n File \"mailshake/mailers/smtp.py\", line 105, in send_messages\r\n sent = self._send(message)\r\n File \"mailshake/mailers/smtp.py\", line 126, in _send\r\n from_email, recipients, email_message.as_bytes()\r\n File \"python2.7/smtplib.py\", line 751, in sendmail\r\n raise SMTPDataError(code, resp)\r\n```\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import namedtuple, OrderedDict\nfrom datetime import date, datetime, timedelta\nfrom decimal import Decimal, ROUND_UP\nimport re\n\nfrom jinja2 import StrictUndefined\nfrom pando.utils import utc\n\n\nclass CustomUndefined(StrictUndefined):\n __bool__ = __nonzero__ = lambda self: False\n\n def __str__(self):\n try:\n self._fail_with_undefined_error()\n except Exception as e:\n self._tell_sentry(e, {})\n return ''\n\n __unicode__ = __str__\n\n\ndef check_bits(bits):\n assert len(set(bits)) == len(bits) # no duplicates\n assert not [b for b in bits if '{0:b}'.format(b).count('1') != 1] # single bit\n\n\nEvent = namedtuple('Event', 'name bit title')\n\nFees = namedtuple('Fees', ('var', 'fix'))\n\n\n_ = lambda a: a\n\nASCII_ALLOWED_IN_USERNAME = set(\"0123456789\"\n \"abcdefghijklmnopqrstuvwxyz\"\n \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n \"-_\")\n\nAVATAR_QUERY = '?s=160&default=retro'\nAVATAR_SOURCES = 'libravatar bitbucket facebook github google twitter'.split()\n\nBIRTHDAY = date(2015, 5, 22)\n\nD_CENT = Decimal('0.01')\nD_INF = Decimal('inf')\nD_UNIT = Decimal('1.00')\nD_ZERO = Decimal('0.00')\n\nDONATION_LIMITS_WEEKLY = (Decimal('0.01'), Decimal('100.00'))\nDONATION_LIMITS = {\n 'weekly': DONATION_LIMITS_WEEKLY,\n 'monthly': tuple((x * Decimal(52) / Decimal(12)).quantize(D_CENT, rounding=ROUND_UP)\n for x in DONATION_LIMITS_WEEKLY),\n 'yearly': tuple((x * Decimal(52)).quantize(D_CENT)\n for x in DONATION_LIMITS_WEEKLY),\n}\nDONATION_WEEKLY_MIN, DONATION_WEEKLY_MAX = DONATION_LIMITS_WEEKLY\n\nELSEWHERE_ACTIONS = {'connect', 'lock', 'unlock'}\n\nEMAIL_VERIFICATION_TIMEOUT = timedelta(hours=24)\nEMAIL_RE = re.compile(r'^[^@]+@[^@]+\\.[^@]+$')\n\nEPOCH = datetime(1970, 1, 1, 0, 0, 0, 0, utc)\n\nEVENTS = [\n Event('income', 1, _(\"When I receive money\")),\n Event('low_balance', 2, _(\"When there isn't enough money in my wallet to cover my donations\")),\n Event('withdrawal_created', 4, _(\"When a transfer to my bank account is initiated\")),\n Event('withdrawal_failed', 8, _(\"When a transfer to my bank account fails\")),\n Event('pledgee_joined', 16, _(\"When someone I pledge to joins Liberapay\")),\n 
Event('team_invite', 32, _(\"When someone invites me to join a team\")),\n Event('payin_bankwire_failed', 64, _(\"When a bank wire transfer to my Liberapay wallet fails\")),\n Event('payin_bankwire_succeeded', 128, _(\"When a bank wire transfer to my Liberapay wallet succeeds\")),\n]\ncheck_bits([e.bit for e in EVENTS])\nEVENTS = OrderedDict((e.name, e) for e in EVENTS)\nEVENTS_S = ' '.join(EVENTS.keys())\n\n# https://www.mangopay.com/pricing/\nFEE_PAYIN_BANK_WIRE = Fees(Decimal('0.005'), Decimal(0)) # 0.5%\nFEE_PAYIN_CARD = Fees(Decimal('0.018'), Decimal('0.18')) # 1.8% + \u20ac0.18\nFEE_PAYOUT = Fees(Decimal(0), Decimal(0))\nFEE_PAYOUT_OUTSIDE_SEPA = Fees(Decimal(0), Decimal('2.5'))\nFEE_PAYOUT_WARN = Decimal('0.03') # warn user when fee exceeds 3%\nFEE_VAT = Decimal('0.17') # 17% (Luxembourg rate)\n\nJINJA_ENV_COMMON = dict(\n trim_blocks=True, lstrip_blocks=True,\n line_statement_prefix='%',\n # undefined=CustomUndefined,\n)\n\n# https://docs.mangopay.com/api-references/kyc-rules/\nKYC_PAYIN_YEARLY_THRESHOLD = Decimal('2500')\nKYC_PAYOUT_YEARLY_THRESHOLD = Decimal('1000')\n\nLAUNCH_TIME = datetime(2016, 2, 3, 12, 50, 0, 0, utc)\n\nPASSWORD_MIN_SIZE = 8\nPASSWORD_MAX_SIZE = 150\n\nPAYIN_BANK_WIRE_MIN = Decimal('2.00')\nPAYIN_CARD_MIN = Decimal(\"15.00\") # fee \u2248 3.5%\nPAYIN_CARD_TARGET = Decimal(\"92.00\") # fee \u2248 2.33%\n\nPERIOD_CONVERSION_RATES = {\n 'weekly': Decimal(1),\n 'monthly': Decimal(12) / Decimal(52),\n 'yearly': Decimal(1) / Decimal(52),\n}\n\nPOSTAL_ADDRESS_KEYS = (\n 'AddressLine1', 'AddressLine2', 'City', 'Region', 'PostalCode', 'Country'\n)\n\nPRIVACY_FIELDS = OrderedDict([\n ('hide_giving', _(\"Hide total giving from others.\")),\n ('hide_receiving', _(\"Hide total receiving from others.\")),\n ('hide_from_search', _(\"Hide myself from search results on Liberapay.\")),\n ('profile_noindex', _(\"Tell web search engines not to index my profile.\")),\n ('hide_from_lists', _(\"Prevent my profile from being listed on Liberapay.\")),\n])\nPRIVACY_FIELDS_S = ' '.join(PRIVACY_FIELDS.keys())\n\nPRIVILEGES = dict(admin=1, run_payday=2)\ncheck_bits(list(PRIVILEGES.values()))\n\nQUARANTINE = timedelta(weeks=4)\n\nSEPA_ZONE = set(\"\"\"\n AT BE BG CH CY CZ DE DK EE ES ES FI FR GB GI GR HR HU IE IS IT LI LT LU LV\n MC MT NL NO PL PT RO SE SI SK\n\"\"\".split())\n\nSESSION = str('session') # bytes in python2, unicode in python3\nSESSION_REFRESH = timedelta(hours=1)\nSESSION_TIMEOUT = timedelta(hours=6)\n\nSTANDARD_TIPS = (\n (_(\"Symbolic ({0} per week)\"), Decimal('0.01')),\n (_(\"Small ({0} per week)\"), Decimal('0.25')),\n (_(\"Medium ({0} per week)\"), Decimal('1.00')),\n (_(\"Large ({0} per week)\"), Decimal('5.00')),\n (_(\"Maximum ({0} per week)\"), DONATION_WEEKLY_MAX),\n)\n\nUSERNAME_MAX_SIZE = 32\n\ndel _\n", "path": "liberapay/constants.py"}]}
| 3,708 | 297 |
gh_patches_debug_36141
|
rasdani/github-patches
|
git_diff
|
PennyLaneAI__pennylane-4831
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Unexpected behavior with merge_amplitude_embedding and batched input
### Expected behavior
When using merge_amplitude_embedding to combine two separate embeddings, the output should be the same as two separate MottonenStatePreparations.
### Actual behavior
Instead, what I observe is that the output not only doesn't match the batch size, but it's also incorrect.
### Additional information
Removing `@qml.transforms.merge_amplitude_embedding` solves the issue, but I'm not sure it's intended.
### Source code
```shell
import pennylane as qml
import torch
dev = qml.device("default.qubit", wires=2)
@qml.qnode(dev, interface="torch")
def circuit1(a, b):
qml.MottonenStatePreparation(a, wires=0)
qml.MottonenStatePreparation(b, wires=1)
return qml.probs(wires=range(2))
a = torch.randn((3,2))
b = torch.randn((3,2))
a = a/a.norm(dim=1, keepdim=True)
b = b/b.norm(dim=1, keepdim=True)
print('a:')
print(a)
print('b:')
print(b)
print('Batched call to circuit1')
print(circuit1(a, b).detach().numpy())
# Output:
# a:
# tensor([[ 0.4929, 0.8701],
# [ 0.7628, 0.6466],
# [-0.6488, -0.7610]])
# b:
# tensor([[-0.6827, -0.7307],
# [ 0.6346, -0.7728],
# [ 0.2947, -0.9556]])
# Batched call to circuit1
# [[0.11325859 0.12973125 0.35284562 0.40416454]
# [0.23433313 0.34756744 0.16836992 0.24972952]
# [0.03655629 0.38435813 0.0502934 0.52879218]]
@qml.transforms.merge_amplitude_embedding
@qml.qnode(dev, interface="torch")
def circuit2(a, b):
qml.AmplitudeEmbedding(a, wires=0)
qml.AmplitudeEmbedding(b, wires=1)
return qml.probs(wires=range(2))
print('Batched call to circuit2')
print(circuit2(a, b).detach().numpy())
print('Repeated call to circuit2')
print(circuit2(a[0], b[0]).detach().numpy())
print(circuit2(a[1], b[1]).detach().numpy())
print(circuit2(a[2], b[2]).detach().numpy())
# Output:
# Batched call to circuit2
# [1. 0. 0. 0.]
# Repeated call to circuit2
# [1. 0. 0. 0.]
# [1. 0. 0. 0.]
# [1. 0. 0. 0.]
```
### Tracebacks
_No response_
### System information
```shell
Name: PennyLane
Version: 0.33.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /home/tiblias/miniconda3/envs/qc-gpu/lib/python3.10/site-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Lightning, PennyLane-Lightning-GPU, PennyLane-qiskit, pennylane-qulacs
Platform info: Linux-6.2.0-36-generic-x86_64-with-glibc2.35
Python version: 3.10.12
Numpy version: 1.23.5
Scipy version: 1.11.2
Installed devices:
- default.gaussian (PennyLane-0.33.0)
- default.mixed (PennyLane-0.33.0)
- default.qubit (PennyLane-0.33.0)
- default.qubit.autograd (PennyLane-0.33.0)
- default.qubit.jax (PennyLane-0.33.0)
- default.qubit.legacy (PennyLane-0.33.0)
- default.qubit.tf (PennyLane-0.33.0)
- default.qubit.torch (PennyLane-0.33.0)
- default.qutrit (PennyLane-0.33.0)
- null.qubit (PennyLane-0.33.0)
- lightning.qubit (PennyLane-Lightning-0.33.1)
- lightning.gpu (PennyLane-Lightning-GPU-0.31.0)
- qiskit.aer (PennyLane-qiskit-0.31.0)
- qiskit.basicaer (PennyLane-qiskit-0.31.0)
- qiskit.ibmq (PennyLane-qiskit-0.31.0)
- qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.31.0)
- qiskit.ibmq.sampler (PennyLane-qiskit-0.31.0)
- qiskit.remote (PennyLane-qiskit-0.31.0)
- qulacs.simulator (pennylane-qulacs-0.29.0)
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
</issue>
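For reference, because the two embeddings act on disjoint wires, the expected batched output factors per sample into a Kronecker product of the two single-qubit distributions (wire 0 being the most significant bit in `qml.probs`), which is exactly what `circuit1` returns. A small sketch that reuses the `a` and `b` tensors from the snippet above:

```python
import numpy as np

# Reference result per sample: probs(|a> (x) |b>) = kron(|a|**2, |b|**2).
expected = np.stack(
    [np.kron(np.abs(ai) ** 2, np.abs(bi) ** 2) for ai, bi in zip(a.numpy(), b.numpy())]
)
# expected reproduces the "Batched call to circuit1" numbers in the issue.
```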
<code>
[start of pennylane/transforms/optimization/merge_amplitude_embedding.py]
1 # Copyright 2018-2021 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Transform for merging AmplitudeEmbedding gates in a quantum circuit."""
15 from typing import Sequence, Callable
16
17 from pennylane.transforms import transform
18 from pennylane.tape import QuantumTape
19 from pennylane import AmplitudeEmbedding
20 from pennylane._device import DeviceError
21 from pennylane.math import flatten, reshape
22
23
24 @transform
25 def merge_amplitude_embedding(tape: QuantumTape) -> (Sequence[QuantumTape], Callable):
26 r"""Quantum function transform to combine amplitude embedding templates that act on different qubits.
27
28 Args:
29 tape (QNode or QuantumTape or Callable): A quantum circuit.
30
31 Returns:
32 qnode (QNode) or quantum function (Callable) or tuple[List[.QuantumTape], function]: The transformed circuit as described in :func:`qml.transform <pennylane.transform>`.
33
34
35 **Example**
36
37 >>> dev = qml.device('default.qubit', wires=4)
38
39 You can apply the transform directly on :class:`QNode`:
40
41 .. code-block:: python
42
43 @qml.transforms.merge_amplitude_embedding
44 @qml.qnode(device=dev)
45 def circuit():
46 qml.CNOT(wires = [0,1])
47 qml.AmplitudeEmbedding([0,1], wires = 2)
48 qml.AmplitudeEmbedding([0,1], wires = 3)
49 return qml.state()
50
51 >>> circuit()
52 [1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]
53
54 .. details::
55 :title: Usage Details
56
57 You can also apply it on quantum function.
58
59 .. code-block:: python
60
61 def qfunc():
62 qml.CNOT(wires = [0,1])
63 qml.AmplitudeEmbedding([0,1], wires = 2)
64 qml.AmplitudeEmbedding([0,1], wires = 3)
65 return qml.state()
66
67 The circuit before compilation will not work because of using two amplitude embedding.
68
69 Using the transformation we can join the different amplitude embedding into a single one:
70
71 >>> optimized_qfunc = qml.transforms.merge_amplitude_embedding(qfunc)
72 >>> optimized_qnode = qml.QNode(optimized_qfunc, dev)
73 >>> print(qml.draw(optimized_qnode)())
74 0: ─╭●──────────────────────┤ State
75 1: ─╰X──────────────────────┤ State
76 2: ─╭AmplitudeEmbedding(M0)─┤ State
77 3: ─╰AmplitudeEmbedding(M0)─┤ State
78 M0 =
79 [0.+0.j 0.+0.j 0.+0.j 1.+0.j]
80
81 """
82 # Make a working copy of the list to traverse
83 list_copy = tape.operations.copy()
84 not_amplitude_embedding = []
85 visited_wires = set()
86 input_wires, input_vectors, input_batch_size = [], [], []
87 while len(list_copy) > 0:
88 current_gate = list_copy[0]
89 wires_set = set(current_gate.wires)
90
91 # Check if the current gate is an AmplitudeEmbedding.
92 if not isinstance(current_gate, AmplitudeEmbedding):
93 not_amplitude_embedding.append(current_gate)
94 list_copy.pop(0)
95 visited_wires = visited_wires.union(wires_set)
96 continue
97
98 # Check the qubits have not been used.
99 if len(visited_wires.intersection(wires_set)) > 0:
100 raise DeviceError(
101 f"Operation {current_gate.name} cannot be used after other Operation applied in the same qubit "
102 )
103 input_wires.append(current_gate.wires)
104 input_vectors.append(current_gate.parameters[0])
105 input_batch_size.append(current_gate.batch_size)
106 list_copy.pop(0)
107 visited_wires = visited_wires.union(wires_set)
108
109 if len(input_wires) > 0:
110 final_wires = input_wires[0]
111 final_vector = input_vectors[0]
112 final_batch_size = input_batch_size[0]
113
114 # Merge all parameters and qubits into a single one.
115 for w, v, b in zip(input_wires[1:], input_vectors[1:], input_batch_size[1:]):
116 final_vector = final_vector[..., :, None] * v[..., None, :]
117 final_batch_size = final_batch_size or b
118 final_wires = final_wires + w
119
120 if final_batch_size:
121 final_vector = reshape(final_vector, (final_batch_size, -1))
122 else:
123 final_vector = flatten(final_vector)
124
125 AmplitudeEmbedding(final_vector, wires=final_wires)
126
127 new_operations = []
128 for gate in not_amplitude_embedding:
129 new_operations.append(gate)
130
131 new_tape = type(tape)(new_operations, tape.measurements, shots=tape.shots)
132
133 def null_postprocessing(results):
134 """A postprocesing function returned by a transform that only converts the batch of results
135 into a result for a single ``QuantumTape``.
136 """
137 return results[0]
138
139 return [new_tape], null_postprocessing
140
[end of pennylane/transforms/optimization/merge_amplitude_embedding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pennylane/transforms/optimization/merge_amplitude_embedding.py b/pennylane/transforms/optimization/merge_amplitude_embedding.py
--- a/pennylane/transforms/optimization/merge_amplitude_embedding.py
+++ b/pennylane/transforms/optimization/merge_amplitude_embedding.py
@@ -19,6 +19,7 @@
from pennylane import AmplitudeEmbedding
from pennylane._device import DeviceError
from pennylane.math import flatten, reshape
+from pennylane.queuing import QueuingManager
@transform
@@ -79,19 +80,15 @@
[0.+0.j 0.+0.j 0.+0.j 1.+0.j]
"""
- # Make a working copy of the list to traverse
- list_copy = tape.operations.copy()
- not_amplitude_embedding = []
+ new_operations = []
visited_wires = set()
input_wires, input_vectors, input_batch_size = [], [], []
- while len(list_copy) > 0:
- current_gate = list_copy[0]
+ for current_gate in tape.operations:
wires_set = set(current_gate.wires)
# Check if the current gate is an AmplitudeEmbedding.
if not isinstance(current_gate, AmplitudeEmbedding):
- not_amplitude_embedding.append(current_gate)
- list_copy.pop(0)
+ new_operations.append(current_gate)
visited_wires = visited_wires.union(wires_set)
continue
@@ -103,7 +100,6 @@
input_wires.append(current_gate.wires)
input_vectors.append(current_gate.parameters[0])
input_batch_size.append(current_gate.batch_size)
- list_copy.pop(0)
visited_wires = visited_wires.union(wires_set)
if len(input_wires) > 0:
@@ -122,11 +118,8 @@
else:
final_vector = flatten(final_vector)
- AmplitudeEmbedding(final_vector, wires=final_wires)
-
- new_operations = []
- for gate in not_amplitude_embedding:
- new_operations.append(gate)
+ with QueuingManager.stop_recording():
+ new_operations.insert(0, AmplitudeEmbedding(final_vector, wires=final_wires))
new_tape = type(tape)(new_operations, tape.measurements, shots=tape.shots)
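The failure mode is visible in the original transform: the merged `AmplitudeEmbedding(final_vector, wires=final_wires)` was instantiated but never added to `new_operations` (only the non-embedding gates were), so the rebuilt tape contained no state preparation at all and always produced `[1, 0, 0, 0]`, batched or not. The patch builds the merged operation outside any active queuing context and inserts it explicitly at the front of the operation list. A self-contained sketch of that pattern with placeholder values, not a verbatim excerpt from the patch:

```python
import pennylane as qml
from pennylane.queuing import QueuingManager

final_vector = [0.0, 0.0, 0.0, 1.0]  # placeholder merged amplitudes (already normalized)
final_wires = [2, 3]                 # placeholder merged wire order
new_operations = []                  # gates kept from the original tape

# Build the merged embedding without queuing it, then place it explicitly
# at the start of the rebuilt operation list.
with QueuingManager.stop_recording():
    merged = qml.AmplitudeEmbedding(final_vector, wires=final_wires)
new_operations.insert(0, merged)
```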
|
{"golden_diff": "diff --git a/pennylane/transforms/optimization/merge_amplitude_embedding.py b/pennylane/transforms/optimization/merge_amplitude_embedding.py\n--- a/pennylane/transforms/optimization/merge_amplitude_embedding.py\n+++ b/pennylane/transforms/optimization/merge_amplitude_embedding.py\n@@ -19,6 +19,7 @@\n from pennylane import AmplitudeEmbedding\n from pennylane._device import DeviceError\n from pennylane.math import flatten, reshape\n+from pennylane.queuing import QueuingManager\n \n \n @transform\n@@ -79,19 +80,15 @@\n [0.+0.j 0.+0.j 0.+0.j 1.+0.j]\n \n \"\"\"\n- # Make a working copy of the list to traverse\n- list_copy = tape.operations.copy()\n- not_amplitude_embedding = []\n+ new_operations = []\n visited_wires = set()\n input_wires, input_vectors, input_batch_size = [], [], []\n- while len(list_copy) > 0:\n- current_gate = list_copy[0]\n+ for current_gate in tape.operations:\n wires_set = set(current_gate.wires)\n \n # Check if the current gate is an AmplitudeEmbedding.\n if not isinstance(current_gate, AmplitudeEmbedding):\n- not_amplitude_embedding.append(current_gate)\n- list_copy.pop(0)\n+ new_operations.append(current_gate)\n visited_wires = visited_wires.union(wires_set)\n continue\n \n@@ -103,7 +100,6 @@\n input_wires.append(current_gate.wires)\n input_vectors.append(current_gate.parameters[0])\n input_batch_size.append(current_gate.batch_size)\n- list_copy.pop(0)\n visited_wires = visited_wires.union(wires_set)\n \n if len(input_wires) > 0:\n@@ -122,11 +118,8 @@\n else:\n final_vector = flatten(final_vector)\n \n- AmplitudeEmbedding(final_vector, wires=final_wires)\n-\n- new_operations = []\n- for gate in not_amplitude_embedding:\n- new_operations.append(gate)\n+ with QueuingManager.stop_recording():\n+ new_operations.insert(0, AmplitudeEmbedding(final_vector, wires=final_wires))\n \n new_tape = type(tape)(new_operations, tape.measurements, shots=tape.shots)\n", "issue": "[BUG] Unexpected behavior with merge_amplitude_embedding and batched input\n### Expected behavior\n\nWhen using merge_amplitude_embedding to combine two separate embeddings, the output should be the same as two separate MottonenStatePreparations.\n\n### Actual behavior\n\nInstead, what I observe is that the output not only doesn't match the batch size, but it's also incorrect.\r\n\n\n### Additional information\n\nRemoving `@qml.transforms.merge_amplitude_embedding` solves the issue, but I'm not sure it's intended.\n\n### Source code\n\n```shell\nimport pennylane as qml\r\nimport torch\r\n\r\ndev = qml.device(\"default.qubit\", wires=2)\r\n\r\[email protected](dev, interface=\"torch\")\r\ndef circuit1(a, b): \r\n qml.MottonenStatePreparation(a, wires=0)\r\n qml.MottonenStatePreparation(b, wires=1)\r\n return qml.probs(wires=range(2))\r\n\r\na = torch.randn((3,2))\r\nb = torch.randn((3,2))\r\na = a/a.norm(dim=1, keepdim=True)\r\nb = b/b.norm(dim=1, keepdim=True)\r\n\r\nprint('a:')\r\nprint(a)\r\nprint('b:')\r\nprint(b)\r\n\r\nprint('Batched call to circuit1')\r\nprint(circuit1(a, b).detach().numpy())\r\n\r\n# Output:\r\n# a:\r\n# tensor([[ 0.4929, 0.8701],\r\n# [ 0.7628, 0.6466],\r\n# [-0.6488, -0.7610]])\r\n# b:\r\n# tensor([[-0.6827, -0.7307],\r\n# [ 0.6346, -0.7728],\r\n# [ 0.2947, -0.9556]])\r\n# Batched call to circuit1\r\n# [[0.11325859 0.12973125 0.35284562 0.40416454]\r\n# [0.23433313 0.34756744 0.16836992 0.24972952]\r\n# [0.03655629 0.38435813 0.0502934 0.52879218]]\r\n\r\n\r\[email protected]_amplitude_embedding\r\[email protected](dev, interface=\"torch\")\r\ndef circuit2(a, b): \r\n 
qml.AmplitudeEmbedding(a, wires=0)\r\n qml.AmplitudeEmbedding(b, wires=1)\r\n return qml.probs(wires=range(2))\r\n\r\nprint('Batched call to circuit2')\r\nprint(circuit2(a, b).detach().numpy())\r\n\r\nprint('Repeated call to circuit2')\r\nprint(circuit2(a[0], b[0]).detach().numpy())\r\nprint(circuit2(a[1], b[1]).detach().numpy())\r\nprint(circuit2(a[2], b[2]).detach().numpy())\r\n\r\n\r\n# Output:\r\n# Batched call to circuit2\r\n# [1. 0. 0. 0.]\r\n# Repeated call to circuit2\r\n# [1. 0. 0. 0.]\r\n# [1. 0. 0. 0.]\r\n# [1. 0. 0. 0.]\n```\n\n\n### Tracebacks\n\n_No response_\n\n### System information\n\n```shell\nName: PennyLane\r\nVersion: 0.33.0\r\nSummary: PennyLane is a Python quantum machine learning library by Xanadu Inc.\r\nHome-page: https://github.com/PennyLaneAI/pennylane\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License 2.0\r\nLocation: /home/tiblias/miniconda3/envs/qc-gpu/lib/python3.10/site-packages\r\nRequires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions\r\nRequired-by: PennyLane-Lightning, PennyLane-Lightning-GPU, PennyLane-qiskit, pennylane-qulacs\r\n\r\nPlatform info: Linux-6.2.0-36-generic-x86_64-with-glibc2.35\r\nPython version: 3.10.12\r\nNumpy version: 1.23.5\r\nScipy version: 1.11.2\r\nInstalled devices:\r\n- default.gaussian (PennyLane-0.33.0)\r\n- default.mixed (PennyLane-0.33.0)\r\n- default.qubit (PennyLane-0.33.0)\r\n- default.qubit.autograd (PennyLane-0.33.0)\r\n- default.qubit.jax (PennyLane-0.33.0)\r\n- default.qubit.legacy (PennyLane-0.33.0)\r\n- default.qubit.tf (PennyLane-0.33.0)\r\n- default.qubit.torch (PennyLane-0.33.0)\r\n- default.qutrit (PennyLane-0.33.0)\r\n- null.qubit (PennyLane-0.33.0)\r\n- lightning.qubit (PennyLane-Lightning-0.33.1)\r\n- lightning.gpu (PennyLane-Lightning-GPU-0.31.0)\r\n- qiskit.aer (PennyLane-qiskit-0.31.0)\r\n- qiskit.basicaer (PennyLane-qiskit-0.31.0)\r\n- qiskit.ibmq (PennyLane-qiskit-0.31.0)\r\n- qiskit.ibmq.circuit_runner (PennyLane-qiskit-0.31.0)\r\n- qiskit.ibmq.sampler (PennyLane-qiskit-0.31.0)\r\n- qiskit.remote (PennyLane-qiskit-0.31.0)\r\n- qulacs.simulator (pennylane-qulacs-0.29.0)\n```\n\n\n### Existing GitHub issues\n\n- [X] I have searched existing GitHub issues to make sure the issue does not already exist.\n", "before_files": [{"content": "# Copyright 2018-2021 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Transform for merging AmplitudeEmbedding gates in a quantum circuit.\"\"\"\nfrom typing import Sequence, Callable\n\nfrom pennylane.transforms import transform\nfrom pennylane.tape import QuantumTape\nfrom pennylane import AmplitudeEmbedding\nfrom pennylane._device import DeviceError\nfrom pennylane.math import flatten, reshape\n\n\n@transform\ndef merge_amplitude_embedding(tape: QuantumTape) -> (Sequence[QuantumTape], Callable):\n r\"\"\"Quantum function transform to combine amplitude embedding templates that act on different qubits.\n\n Args:\n tape (QNode or 
QuantumTape or Callable): A quantum circuit.\n\n Returns:\n qnode (QNode) or quantum function (Callable) or tuple[List[.QuantumTape], function]: The transformed circuit as described in :func:`qml.transform <pennylane.transform>`.\n\n\n **Example**\n\n >>> dev = qml.device('default.qubit', wires=4)\n\n You can apply the transform directly on :class:`QNode`:\n\n .. code-block:: python\n\n @qml.transforms.merge_amplitude_embedding\n @qml.qnode(device=dev)\n def circuit():\n qml.CNOT(wires = [0,1])\n qml.AmplitudeEmbedding([0,1], wires = 2)\n qml.AmplitudeEmbedding([0,1], wires = 3)\n return qml.state()\n\n >>> circuit()\n [1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j]\n\n .. details::\n :title: Usage Details\n\n You can also apply it on quantum function.\n\n .. code-block:: python\n\n def qfunc():\n qml.CNOT(wires = [0,1])\n qml.AmplitudeEmbedding([0,1], wires = 2)\n qml.AmplitudeEmbedding([0,1], wires = 3)\n return qml.state()\n\n The circuit before compilation will not work because of using two amplitude embedding.\n\n Using the transformation we can join the different amplitude embedding into a single one:\n\n >>> optimized_qfunc = qml.transforms.merge_amplitude_embedding(qfunc)\n >>> optimized_qnode = qml.QNode(optimized_qfunc, dev)\n >>> print(qml.draw(optimized_qnode)())\n 0: \u2500\u256d\u25cf\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 State\n 1: \u2500\u2570X\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 State\n 2: \u2500\u256dAmplitudeEmbedding(M0)\u2500\u2524 State\n 3: \u2500\u2570AmplitudeEmbedding(M0)\u2500\u2524 State\n M0 =\n [0.+0.j 0.+0.j 0.+0.j 1.+0.j]\n\n \"\"\"\n # Make a working copy of the list to traverse\n list_copy = tape.operations.copy()\n not_amplitude_embedding = []\n visited_wires = set()\n input_wires, input_vectors, input_batch_size = [], [], []\n while len(list_copy) > 0:\n current_gate = list_copy[0]\n wires_set = set(current_gate.wires)\n\n # Check if the current gate is an AmplitudeEmbedding.\n if not isinstance(current_gate, AmplitudeEmbedding):\n not_amplitude_embedding.append(current_gate)\n list_copy.pop(0)\n visited_wires = visited_wires.union(wires_set)\n continue\n\n # Check the qubits have not been used.\n if len(visited_wires.intersection(wires_set)) > 0:\n raise DeviceError(\n f\"Operation {current_gate.name} cannot be used after other Operation applied in the same qubit \"\n )\n input_wires.append(current_gate.wires)\n input_vectors.append(current_gate.parameters[0])\n input_batch_size.append(current_gate.batch_size)\n list_copy.pop(0)\n visited_wires = visited_wires.union(wires_set)\n\n if len(input_wires) > 0:\n final_wires = input_wires[0]\n final_vector = input_vectors[0]\n final_batch_size = input_batch_size[0]\n\n # Merge all parameters and qubits into a single one.\n for w, v, b in zip(input_wires[1:], input_vectors[1:], input_batch_size[1:]):\n final_vector = final_vector[..., :, None] * v[..., None, :]\n final_batch_size = final_batch_size or b\n final_wires = final_wires + w\n\n if final_batch_size:\n final_vector = reshape(final_vector, (final_batch_size, -1))\n else:\n final_vector = flatten(final_vector)\n\n AmplitudeEmbedding(final_vector, wires=final_wires)\n\n new_operations = []\n for gate in not_amplitude_embedding:\n new_operations.append(gate)\n\n new_tape = 
type(tape)(new_operations, tape.measurements, shots=tape.shots)\n\n def null_postprocessing(results):\n \"\"\"A postprocesing function returned by a transform that only converts the batch of results\n into a result for a single ``QuantumTape``.\n \"\"\"\n return results[0]\n\n return [new_tape], null_postprocessing\n", "path": "pennylane/transforms/optimization/merge_amplitude_embedding.py"}]}
| 3,611 | 535 |
gh_patches_debug_6285
|
rasdani/github-patches
|
git_diff
|
encode__httpx-1503
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CertTypes `keyfile` and `password` should be Optional types.
`SSLContext.load_cert_chain` can take `None` as argument values ([docs](https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_cert_chain)), so I guess this:
https://github.com/encode/httpx/blob/c09e61d50c8f169187cada6dbf14b89c7763c63f/httpx/_types.py#L54
should be rewritten as follows:
```python
CertTypes = Union[str, Tuple[str, Optional[str]], Tuple[str, Optional[str], Optional[str]]]
```
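As an illustration of why the `Optional` members matter, here is a minimal, hypothetical client snippet; the `client.pem` path and the concrete values are illustrative assumptions, not part of the report. `ssl.SSLContext.load_cert_chain()` accepts `None` for `keyfile` and `password`, so a `(certfile, None)` tuple is valid at runtime even though the current `Tuple[str, str]` annotation rejects it:
```python
import ssl

import httpx  # the library under discussion

# load_cert_chain() explicitly allows keyfile=None and password=None, so a
# (certfile, None) pair is a legitimate client-certificate value at runtime.
ctx = ssl.create_default_context()
# ctx.load_cert_chain("client.pem", keyfile=None, password=None)  # "client.pem" is a placeholder path

# Under the current Tuple[str, str] annotation a type checker flags the None below,
# even though httpx forwards the tuple on to load_cert_chain().
client = httpx.Client(cert=("client.pem", None))
```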
</issue>
<code>
[start of httpx/_types.py]
1 """
2 Type definitions for type checking purposes.
3 """
4
5 import ssl
6 from http.cookiejar import CookieJar
7 from typing import (
8 IO,
9 TYPE_CHECKING,
10 AsyncIterable,
11 Callable,
12 Dict,
13 Iterable,
14 List,
15 Mapping,
16 Optional,
17 Sequence,
18 Tuple,
19 Union,
20 )
21
22 if TYPE_CHECKING: # pragma: no cover
23 from ._auth import Auth # noqa: F401
24 from ._config import Proxy, Timeout # noqa: F401
25 from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401
26
27
28 PrimitiveData = Optional[Union[str, int, float, bool]]
29
30 RawURL = Tuple[bytes, bytes, Optional[int], bytes]
31
32 URLTypes = Union["URL", str]
33
34 QueryParamTypes = Union[
35 "QueryParams",
36 Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],
37 List[Tuple[str, PrimitiveData]],
38 Tuple[Tuple[str, PrimitiveData], ...],
39 str,
40 bytes,
41 None,
42 ]
43
44 HeaderTypes = Union[
45 "Headers",
46 Dict[str, str],
47 Dict[bytes, bytes],
48 Sequence[Tuple[str, str]],
49 Sequence[Tuple[bytes, bytes]],
50 ]
51
52 CookieTypes = Union["Cookies", CookieJar, Dict[str, str], List[Tuple[str, str]]]
53
54 CertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]
55 VerifyTypes = Union[str, bool, ssl.SSLContext]
56 TimeoutTypes = Union[
57 Optional[float],
58 Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],
59 "Timeout",
60 ]
61 ProxiesTypes = Union[URLTypes, "Proxy", Dict[URLTypes, Union[None, URLTypes, "Proxy"]]]
62
63 AuthTypes = Union[
64 Tuple[Union[str, bytes], Union[str, bytes]],
65 Callable[["Request"], "Request"],
66 "Auth",
67 None,
68 ]
69
70 ByteStream = Union[Iterable[bytes], AsyncIterable[bytes]]
71 RequestContent = Union[str, bytes, ByteStream]
72 ResponseContent = Union[str, bytes, ByteStream]
73
74 RequestData = dict
75
76 FileContent = Union[IO[str], IO[bytes], str, bytes]
77 FileTypes = Union[
78 # file (or text)
79 FileContent,
80 # (filename, file (or text))
81 Tuple[Optional[str], FileContent],
82 # (filename, file (or text), content_type)
83 Tuple[Optional[str], FileContent, Optional[str]],
84 ]
85 RequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]
86
[end of httpx/_types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/httpx/_types.py b/httpx/_types.py
--- a/httpx/_types.py
+++ b/httpx/_types.py
@@ -51,7 +51,14 @@
CookieTypes = Union["Cookies", CookieJar, Dict[str, str], List[Tuple[str, str]]]
-CertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]
+CertTypes = Union[
+ # certfile
+ str,
+ # (certfile, keyfile)
+ Tuple[str, Optional[str]],
+ # (certfile, keyfile, password)
+ Tuple[str, Optional[str], Optional[str]],
+]
VerifyTypes = Union[str, bool, ssl.SSLContext]
TimeoutTypes = Union[
Optional[float],
|
{"golden_diff": "diff --git a/httpx/_types.py b/httpx/_types.py\n--- a/httpx/_types.py\n+++ b/httpx/_types.py\n@@ -51,7 +51,14 @@\n \n CookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str], List[Tuple[str, str]]]\n \n-CertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]\n+CertTypes = Union[\n+ # certfile\n+ str,\n+ # (certfile, keyfile)\n+ Tuple[str, Optional[str]],\n+ # (certfile, keyfile, password)\n+ Tuple[str, Optional[str], Optional[str]],\n+]\n VerifyTypes = Union[str, bool, ssl.SSLContext]\n TimeoutTypes = Union[\n Optional[float],\n", "issue": "CertTypes `keyfile` and `password` should be Optional types.\n`SSLContext.load_cert_chain` can take `None` as arguments values ([docs](https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_cert_chain)) so I guess this:\r\nhttps://github.com/encode/httpx/blob/c09e61d50c8f169187cada6dbf14b89c7763c63f/httpx/_types.py#L54\r\nshould be rewritten as follows:\r\n```python\r\nCertTypes = Union[str, Tuple[str, Optional[str]], Tuple[str, Optional[str], Optional[str]]] \r\n```\n", "before_files": [{"content": "\"\"\"\nType definitions for type checking purposes.\n\"\"\"\n\nimport ssl\nfrom http.cookiejar import CookieJar\nfrom typing import (\n IO,\n TYPE_CHECKING,\n AsyncIterable,\n Callable,\n Dict,\n Iterable,\n List,\n Mapping,\n Optional,\n Sequence,\n Tuple,\n Union,\n)\n\nif TYPE_CHECKING: # pragma: no cover\n from ._auth import Auth # noqa: F401\n from ._config import Proxy, Timeout # noqa: F401\n from ._models import URL, Cookies, Headers, QueryParams, Request # noqa: F401\n\n\nPrimitiveData = Optional[Union[str, int, float, bool]]\n\nRawURL = Tuple[bytes, bytes, Optional[int], bytes]\n\nURLTypes = Union[\"URL\", str]\n\nQueryParamTypes = Union[\n \"QueryParams\",\n Mapping[str, Union[PrimitiveData, Sequence[PrimitiveData]]],\n List[Tuple[str, PrimitiveData]],\n Tuple[Tuple[str, PrimitiveData], ...],\n str,\n bytes,\n None,\n]\n\nHeaderTypes = Union[\n \"Headers\",\n Dict[str, str],\n Dict[bytes, bytes],\n Sequence[Tuple[str, str]],\n Sequence[Tuple[bytes, bytes]],\n]\n\nCookieTypes = Union[\"Cookies\", CookieJar, Dict[str, str], List[Tuple[str, str]]]\n\nCertTypes = Union[str, Tuple[str, str], Tuple[str, str, str]]\nVerifyTypes = Union[str, bool, ssl.SSLContext]\nTimeoutTypes = Union[\n Optional[float],\n Tuple[Optional[float], Optional[float], Optional[float], Optional[float]],\n \"Timeout\",\n]\nProxiesTypes = Union[URLTypes, \"Proxy\", Dict[URLTypes, Union[None, URLTypes, \"Proxy\"]]]\n\nAuthTypes = Union[\n Tuple[Union[str, bytes], Union[str, bytes]],\n Callable[[\"Request\"], \"Request\"],\n \"Auth\",\n None,\n]\n\nByteStream = Union[Iterable[bytes], AsyncIterable[bytes]]\nRequestContent = Union[str, bytes, ByteStream]\nResponseContent = Union[str, bytes, ByteStream]\n\nRequestData = dict\n\nFileContent = Union[IO[str], IO[bytes], str, bytes]\nFileTypes = Union[\n # file (or text)\n FileContent,\n # (filename, file (or text))\n Tuple[Optional[str], FileContent],\n # (filename, file (or text), content_type)\n Tuple[Optional[str], FileContent, Optional[str]],\n]\nRequestFiles = Union[Mapping[str, FileTypes], Sequence[Tuple[str, FileTypes]]]\n", "path": "httpx/_types.py"}]}
| 1,409 | 170 |
gh_patches_debug_20197
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-5569
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Authentik Outpost Proxy Bad Gateway error when sign_out
**Describe the bug**
I get a 502 "Bad Gateway" error (and a Go stack trace) when I go to the outpost sign_out URL
**To Reproduce**
1. Set up a authentik outpost proxy
2. Configure your app to log in with a Traefik forwardAuth middleware
3. Log in successfully
4. Go to the sign-out URL `https://svc.my_domain.tld/outpost.goauthentik.io/sign_out`
5. See error 502 "Bad Gateway"
6. In the log `http: panic serving 172.18.0.2:36148: interface conversion: interface {} is nil, not application.Claims`
**Expected behavior**
A successful Authentik/App logout
**Screenshots**
N/A
**Logs**
```
authentik-proxy-1 | 2023/05/04 10:58:37 http: panic serving 172.18.0.2:36148: interface conversion: interface {} is nil, not application.Claims
authentik-proxy-1 | goroutine 38672 [running]:
authentik-proxy-1 | net/http.(*conn).serve.func1()
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:1854 +0xbf
authentik-proxy-1 | panic({0x1029280, 0xc0007e0930})
authentik-proxy-1 | /usr/local/go/src/runtime/panic.go:890 +0x263
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.(*Application).Logout(0xc000208700, {0xc0006ff6c0, 0x40})
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/session.go:97 +0x169d
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.(*Application).handleSignOut(0xc000208700, {0x12cea40, 0xc00039b460}, 0xc00043b480?)
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:274 +0x297
authentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0x0?, {0x12cea40?, 0xc00039b460?}, 0x12?)
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.NewApplication.func3.1({0x12cea40, 0xc00039b460}, 0xc0007b6f00)
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:187 +0x1f9
authentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0xc0006ff6c0?, {0x12cea40?, 0xc00039b460?}, 0x0?)
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.NewApplication.func2.1({0x12cea40, 0xc00039b460}, 0xc0007b6f00)
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:165 +0x222
authentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0xc0008c6d20?, {0x12cea40?, 0xc00039b460?}, 0x0?)
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f
authentik-proxy-1 | goauthentik.io/internal/utils/web.loggingHandler.ServeHTTP({{0x12cb9e0?, 0xc00039b420?}, 0xc00016f2d0?, 0xc0001256d0?}, {0x12cee30?, 0xc000482fc0}, 0xc0007b6f00)
authentik-proxy-1 | /go/src/goauthentik.io/internal/utils/web/middleware.go:98 +0x12c
authentik-proxy-1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc000000300, {0x12cee30, 0xc000482fc0}, 0xc0007b6d00)
authentik-proxy-1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0x1cf
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.(*Application).ServeHTTP(...)
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:250
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2.(*ProxyServer).Handle(0xc00021c370, {0x12cee30, 0xc000482fc0}, 0xc0007b6d00)
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/handlers.go:129 +0xa25
authentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0x1136b00?, {0x12cee30?, 0xc000482fc0?}, 0x7f59d0?)
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f
authentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2.NewProxyServer.func1.1({0x12cee30, 0xc000482fc0}, 0xc0003b9260?)
authentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/proxyv2.go:46 +0x3c
authentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0xc0007b6c00?, {0x12cee30?, 0xc000482fc0?}, 0xc0008c79e8?)
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f
authentik-proxy-1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc000000000, {0x12cee30, 0xc000482fc0}, 0xc0007b6b00)
authentik-proxy-1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0x1cf
authentik-proxy-1 | net/http.serverHandler.ServeHTTP({0xc0008cae40?}, {0x12cee30, 0xc000482fc0}, 0xc0007b6b00)
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:2936 +0x316
authentik-proxy-1 | net/http.(*conn).serve(0xc00012efc0, {0x12cf600, 0xc0001998c0})
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:1995 +0x612
authentik-proxy-1 | created by net/http.(*Server).Serve
authentik-proxy-1 | /usr/local/go/src/net/http/server.go:3089 +0x5ed
```
**Version and Deployment (please complete the following information):**
- authentik version: 2023.4.1
- Deployment: docker-compose
**Additional context**
The login works fine.
</issue>
<code>
[start of authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py]
1 # Generated by Django 4.1.7 on 2023-05-06 16:18
2
3 from django.db import migrations, models
4
5 import authentik.providers.oauth2.models
6
7
8 class Migration(migrations.Migration):
9 dependencies = [
10 (
11 "authentik_providers_oauth2",
12 "0015_accesstoken_auth_time_authorizationcode_auth_time_and_more",
13 ),
14 ]
15
16 operations = [
17 migrations.AlterField(
18 model_name="refreshtoken",
19 name="token",
20 field=models.TextField(
21 default=authentik.providers.oauth2.models.generate_client_secret
22 ),
23 ),
24 ]
25
[end of authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py b/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py
--- a/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py
+++ b/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py
@@ -21,4 +21,26 @@
default=authentik.providers.oauth2.models.generate_client_secret
),
),
+ migrations.AlterField(
+ model_name="oauth2provider",
+ name="sub_mode",
+ field=models.TextField(
+ choices=[
+ ("hashed_user_id", "Based on the Hashed User ID"),
+ ("user_id", "Based on user ID"),
+ ("user_uuid", "Based on user UUID"),
+ ("user_username", "Based on the username"),
+ (
+ "user_email",
+ "Based on the User's Email. This is recommended over the UPN method.",
+ ),
+ (
+ "user_upn",
+ "Based on the User's UPN, only works if user has a 'upn' attribute set. Use this method only if you have different UPN and Mail domains.",
+ ),
+ ],
+ default="hashed_user_id",
+ help_text="Configure what data should be used as unique User Identifier. For most cases, the default should be fine.",
+ ),
+ ),
]
|
{"golden_diff": "diff --git a/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py b/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py\n--- a/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py\n+++ b/authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py\n@@ -21,4 +21,26 @@\n default=authentik.providers.oauth2.models.generate_client_secret\n ),\n ),\n+ migrations.AlterField(\n+ model_name=\"oauth2provider\",\n+ name=\"sub_mode\",\n+ field=models.TextField(\n+ choices=[\n+ (\"hashed_user_id\", \"Based on the Hashed User ID\"),\n+ (\"user_id\", \"Based on user ID\"),\n+ (\"user_uuid\", \"Based on user UUID\"),\n+ (\"user_username\", \"Based on the username\"),\n+ (\n+ \"user_email\",\n+ \"Based on the User's Email. This is recommended over the UPN method.\",\n+ ),\n+ (\n+ \"user_upn\",\n+ \"Based on the User's UPN, only works if user has a 'upn' attribute set. Use this method only if you have different UPN and Mail domains.\",\n+ ),\n+ ],\n+ default=\"hashed_user_id\",\n+ help_text=\"Configure what data should be used as unique User Identifier. For most cases, the default should be fine.\",\n+ ),\n+ ),\n ]\n", "issue": "Authentik Outpost Proxy Bad Gateway error when sign_out\n**Describe the bug**\r\nI get a 502 \"Bad gateway\" error (and a go stacktrace) when I go on outpost sign_out url\r\n\r\n**To Reproduce**\r\n1. Set up a authentik outpost proxy\r\n2. Configure your app to login with a traefik forwardAuth middleware\r\n3. Login with success\r\n4. Got to the signout url `https://svc.my_domain.tld/outpost.goauthentik.io/sign_out`\r\n5. See error 502 \"Bad Gateway\"\r\n6. In the log `http: panic serving 172.18.0.2:36148: interface conversion: interface {} is nil, not application.Claims`\r\n\r\n**Expected behavior**\r\nA successful Authentik/App logout \r\n\r\n**Screenshots**\r\nN/A\r\n\r\n**Logs**\r\n```\r\nauthentik-proxy-1 | 2023/05/04 10:58:37 http: panic serving 172.18.0.2:36148: interface conversion: interface {} is nil, not application.Claims \r\nauthentik-proxy-1 | goroutine 38672 [running]: \r\nauthentik-proxy-1 | net/http.(*conn).serve.func1() \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:1854 +0xbf \r\nauthentik-proxy-1 | panic({0x1029280, 0xc0007e0930}) \r\nauthentik-proxy-1 | /usr/local/go/src/runtime/panic.go:890 +0x263 \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.(*Application).Logout(0xc000208700, {0xc0006ff6c0, 0x40}) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/session.go:97 +0x169d \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.(*Application).handleSignOut(0xc000208700, {0x12cea40, 0xc00039b460}, 0xc00043b480?) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:274 +0x297 \r\nauthentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0x0?, {0x12cea40?, 0xc00039b460?}, 0x12?) \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.NewApplication.func3.1({0x12cea40, 0xc00039b460}, 0xc0007b6f00) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:187 +0x1f9 \r\nauthentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0xc0006ff6c0?, {0x12cea40?, 0xc00039b460?}, 0x0?) 
\r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.NewApplication.func2.1({0x12cea40, 0xc00039b460}, 0xc0007b6f00) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:165 +0x222 \r\nauthentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0xc0008c6d20?, {0x12cea40?, 0xc00039b460?}, 0x0?) \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f \r\nauthentik-proxy-1 | goauthentik.io/internal/utils/web.loggingHandler.ServeHTTP({{0x12cb9e0?, 0xc00039b420?}, 0xc00016f2d0?, 0xc0001256d0?}, {0x12cee30?, 0xc000482fc0}, 0xc0007b6f00) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/utils/web/middleware.go:98 +0x12c \r\nauthentik-proxy-1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc000000300, {0x12cee30, 0xc000482fc0}, 0xc0007b6d00) \r\nauthentik-proxy-1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0x1cf \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2/application.(*Application).ServeHTTP(...) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/application/application.go:250 \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2.(*ProxyServer).Handle(0xc00021c370, {0x12cee30, 0xc000482fc0}, 0xc0007b6d00) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/handlers.go:129 +0xa25 \r\nauthentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0x1136b00?, {0x12cee30?, 0xc000482fc0?}, 0x7f59d0?) \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f \r\nauthentik-proxy-1 | goauthentik.io/internal/outpost/proxyv2.NewProxyServer.func1.1({0x12cee30, 0xc000482fc0}, 0xc0003b9260?) \r\nauthentik-proxy-1 | /go/src/goauthentik.io/internal/outpost/proxyv2/proxyv2.go:46 +0x3c \r\nauthentik-proxy-1 | net/http.HandlerFunc.ServeHTTP(0xc0007b6c00?, {0x12cee30?, 0xc000482fc0?}, 0xc0008c79e8?) 
\r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:2122 +0x2f \r\nauthentik-proxy-1 | github.com/gorilla/mux.(*Router).ServeHTTP(0xc000000000, {0x12cee30, 0xc000482fc0}, 0xc0007b6b00) \r\nauthentik-proxy-1 | /go/pkg/mod/github.com/gorilla/[email protected]/mux.go:210 +0x1cf \r\nauthentik-proxy-1 | net/http.serverHandler.ServeHTTP({0xc0008cae40?}, {0x12cee30, 0xc000482fc0}, 0xc0007b6b00) \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:2936 +0x316 \r\nauthentik-proxy-1 | net/http.(*conn).serve(0xc00012efc0, {0x12cf600, 0xc0001998c0}) \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:1995 +0x612 \r\nauthentik-proxy-1 | created by net/http.(*Server).Serve \r\nauthentik-proxy-1 | /usr/local/go/src/net/http/server.go:3089 +0x5ed\r\n``` \r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.4.1\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nThe login works fine.\r\n\n", "before_files": [{"content": "# Generated by Django 4.1.7 on 2023-05-06 16:18\n\nfrom django.db import migrations, models\n\nimport authentik.providers.oauth2.models\n\n\nclass Migration(migrations.Migration):\n dependencies = [\n (\n \"authentik_providers_oauth2\",\n \"0015_accesstoken_auth_time_authorizationcode_auth_time_and_more\",\n ),\n ]\n\n operations = [\n migrations.AlterField(\n model_name=\"refreshtoken\",\n name=\"token\",\n field=models.TextField(\n default=authentik.providers.oauth2.models.generate_client_secret\n ),\n ),\n ]\n", "path": "authentik/providers/oauth2/migrations/0016_alter_refreshtoken_token.py"}]}
| 2,927 | 338 |
gh_patches_debug_14526
|
rasdani/github-patches
|
git_diff
|
pfnet__pytorch-pfn-extras-372
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typing: ManualScheduleTrigger `points` should accept `int`
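
For context, a minimal sketch of the call patterns the annotation is meant to cover; the import path follows the module shown below and the point values are arbitrary:
```python
from pytorch_pfn_extras.training.triggers.manual_schedule_trigger import (
    ManualScheduleTrigger,
)

# Both call styles work at runtime (a scalar point is wrapped into a list in
# __init__), so the `points` hint needs to describe ints and int sequences too.
every_1000_iters = ManualScheduleTrigger(1000, 'iteration')
chosen_epochs = ManualScheduleTrigger([1, 2, 5], 'epoch')
```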
</issue>
<code>
[start of pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py]
1 # mypy: ignore-errors
2
3 from typing import List, Union, TYPE_CHECKING
4
5 from pytorch_pfn_extras.training import trigger
6
7
8 if TYPE_CHECKING:
9 from pytorch_pfn_extras.training.manager import _BaseExtensionsManager
10 from pytorch_pfn_extras.training._trigger_util import UnitLiteral
11
12
13 class ManualScheduleTrigger(trigger.Trigger):
14
15 """Trigger invoked at specified point(s) of iterations or epochs.
16
17 This trigger accepts iterations or epochs indicated by given point(s).
18 There are two ways to specify the point(s): iteration and epoch.
19 ``iteration`` means the number of updates, while ``epoch`` means the number
20 of sweeps over the training dataset. Fractional values are allowed
21 if the point is a number of epochs; the trigger uses the ``iteration``
22 and ``epoch_detail`` attributes defined by the manager.
23
24 Args:
25 points (int, float, or list of int or float): time of the trigger.
26 Must be an integer or list of integer if unit is ``'iteration'``.
27 unit (str): Unit of the time specified by ``points``. It must be
28 either ``'iteration'`` or ``'epoch'``.
29
30 """
31
32 def __init__(self, points: Union[float, List[float]], unit: 'UnitLiteral'):
33 if unit not in ('epoch', 'iteration'):
34 raise ValueError(
35 'Trigger unit must be either \'epoch\' or \'iteration\'.')
36
37 self.points = (points if isinstance(points, list) else [points])
38 self.unit = unit
39
40 def __call__(self, manager: '_BaseExtensionsManager') -> bool:
41 """Decides whether the extension should be called on this iteration.
42
43 Args:
44 manager (~pytorch_pfn_extras.training.ExtensionsManager):
45 Manager object that this trigger is associated with.
46 The iteration information in this manager is used to
47 determine if the trigger should fire.
48
49 Returns:
50 bool: True if the corresponding extension should be invoked in this
51 iteration.
52
53 """
54 fire = self.may_fire(manager.iteration, manager._iters_per_epoch)
55 return fire
56
57 def may_fire(self, iteration: int, epoch_length: int) -> bool:
58 if self.unit == 'epoch':
59 fire = any(
60 int(p * epoch_length) == iteration for p in self.points)
61 else:
62 fire = any(p == iteration for p in self.points)
63 return fire
64
[end of pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py b/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py
--- a/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py
+++ b/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py
@@ -1,6 +1,6 @@
# mypy: ignore-errors
-from typing import List, Union, TYPE_CHECKING
+from typing import Sequence, Union, TYPE_CHECKING
from pytorch_pfn_extras.training import trigger
@@ -29,7 +29,7 @@
"""
- def __init__(self, points: Union[float, List[float]], unit: 'UnitLiteral'):
+ def __init__(self, points: Union[float, Sequence[float]], unit: 'UnitLiteral'):
if unit not in ('epoch', 'iteration'):
raise ValueError(
'Trigger unit must be either \'epoch\' or \'iteration\'.')
|
{"golden_diff": "diff --git a/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py b/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py\n--- a/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py\n+++ b/pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py\n@@ -1,6 +1,6 @@\n # mypy: ignore-errors\n \n-from typing import List, Union, TYPE_CHECKING\n+from typing import Sequence, Union, TYPE_CHECKING\n \n from pytorch_pfn_extras.training import trigger\n \n@@ -29,7 +29,7 @@\n \n \"\"\"\n \n- def __init__(self, points: Union[float, List[float]], unit: 'UnitLiteral'):\n+ def __init__(self, points: Union[float, Sequence[float]], unit: 'UnitLiteral'):\n if unit not in ('epoch', 'iteration'):\n raise ValueError(\n 'Trigger unit must be either \\'epoch\\' or \\'iteration\\'.')\n", "issue": "Typing: ManualScheduleTrigger `points` should accept `int`\n\n", "before_files": [{"content": "# mypy: ignore-errors\n\nfrom typing import List, Union, TYPE_CHECKING\n\nfrom pytorch_pfn_extras.training import trigger\n\n\nif TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n from pytorch_pfn_extras.training._trigger_util import UnitLiteral\n\n\nclass ManualScheduleTrigger(trigger.Trigger):\n\n \"\"\"Trigger invoked at specified point(s) of iterations or epochs.\n\n This trigger accepts iterations or epochs indicated by given point(s).\n There are two ways to specify the point(s): iteration and epoch.\n ``iteration`` means the number of updates, while ``epoch`` means the number\n of sweeps over the training dataset. Fractional values are allowed\n if the point is a number of epochs; the trigger uses the ``iteration``\n and ``epoch_detail`` attributes defined by the manager.\n\n Args:\n points (int, float, or list of int or float): time of the trigger.\n Must be an integer or list of integer if unit is ``'iteration'``.\n unit (str): Unit of the time specified by ``points``. It must be\n either ``'iteration'`` or ``'epoch'``.\n\n \"\"\"\n\n def __init__(self, points: Union[float, List[float]], unit: 'UnitLiteral'):\n if unit not in ('epoch', 'iteration'):\n raise ValueError(\n 'Trigger unit must be either \\'epoch\\' or \\'iteration\\'.')\n\n self.points = (points if isinstance(points, list) else [points])\n self.unit = unit\n\n def __call__(self, manager: '_BaseExtensionsManager') -> bool:\n \"\"\"Decides whether the extension should be called on this iteration.\n\n Args:\n manager (~pytorch_pfn_extras.training.ExtensionsManager):\n Manager object that this trigger is associated with.\n The iteration information in this manager is used to\n determine if the trigger should fire.\n\n Returns:\n bool: True if the corresponding extension should be invoked in this\n iteration.\n\n \"\"\"\n fire = self.may_fire(manager.iteration, manager._iters_per_epoch)\n return fire\n\n def may_fire(self, iteration: int, epoch_length: int) -> bool:\n if self.unit == 'epoch':\n fire = any(\n int(p * epoch_length) == iteration for p in self.points)\n else:\n fire = any(p == iteration for p in self.points)\n return fire\n", "path": "pytorch_pfn_extras/training/triggers/manual_schedule_trigger.py"}]}
| 1,216 | 210 |
gh_patches_debug_21898
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-2008
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unclear which `HTTPError` was instantiated from `deprecated_args()` warnings
In Falcon 3.0.x, instantiating `HTTPError` or its subclasses with positional arguments (beyond the allowed ones, of course) generates a `DeprecatedWarning` via the `deprecated_args()` decorator.
However, it is unclear from the warning which class/function was invoked; it just says "calls [with more than N] positional args are deprecated". Brought up by @laurent-chriqui (see the linked PR).
Ideally, as a developer, I would like the warning to read along the lines of
```
DeprecatedWarning: Calls to HTTPNotFound.__init__(...) with positional args are deprecated. Please specify them as keyword arguments instead.
```
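One possible shape for such a warning, sketched as a simplified standalone decorator. This is an illustration only, not the project's actual patch, and it drops the "more than N" wording for brevity:
```python
import functools
import warnings


def deprecated_args_sketch(*, allowed_positional, is_method=True):
    """Sketch only: embed the decorated callable's qualified name in the warning."""
    if is_method:
        allowed_positional += 1

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if len(args) > allowed_positional:
                warnings.warn(
                    f'Calls to {fn.__qualname__}(...) with positional args are deprecated.'
                    ' Please specify them as keyword arguments instead.',
                    DeprecationWarning,  # Falcon itself uses its own DeprecatedWarning type
                    stacklevel=2,
                )
            return fn(*args, **kwargs)

        return wrapper

    return decorator
```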
</issue>
<code>
[start of falcon/util/deprecation.py]
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Miscellaneous deprecation utilities.
16
17 This module provides decorators to mark functions and classes as deprecated.
18 """
19
20 import functools
21 import warnings
22
23
24 __all__ = (
25 'DeprecatedWarning',
26 'deprecated',
27 'deprecated_args',
28 )
29
30
31 # NOTE(kgriffs): We don't want our deprecations to be ignored by default,
32 # so create our own type.
33 #
34 # TODO(kgriffs): Revisit this decision if users complain.
35 class DeprecatedWarning(UserWarning):
36 pass
37
38
39 def deprecated(instructions, is_property=False, method_name=None):
40 """Flag a method as deprecated.
41
42 This function returns a decorator which can be used to mark deprecated
43 functions. Applying this decorator will result in a warning being
44 emitted when the function is used.
45
46 Args:
47 instructions (str): Specific guidance for the developer, e.g.:
48 'Please migrate to add_proxy(...)'.
49 is_property (bool): If the deprecated object is a property. It
50 will omit the ``(...)`` from the generated documentation.
51 method_name (str, optional): Set to override the name of the
52 deprecated function or property in the generated
53 documentation (default ``None``). This is useful when
54 decorating an alias that carries the target's ``__name__``.
55
56 """
57
58 def decorator(func):
59
60 object_name = 'property' if is_property else 'function'
61 post_name = '' if is_property else '(...)'
62 message = 'Call to deprecated {} {}{}. {}'.format(
63 object_name, method_name or func.__name__, post_name, instructions
64 )
65
66 @functools.wraps(func)
67 def wrapper(*args, **kwargs):
68 warnings.warn(message, category=DeprecatedWarning, stacklevel=2)
69
70 return func(*args, **kwargs)
71
72 return wrapper
73
74 return decorator
75
76
77 def deprecated_args(*, allowed_positional, is_method=True):
78 """Flag a method call with positional args as deprecated.
79
80 Keyword Args:
81 allowed_positional (int): Number of allowed positional arguments
82 is_method (bool, optional): The decorated function is a method. Will
83 add one to the number of allowed positional args to account for
84 ``self``. Defaults to True.
85 """
86
87 template = (
88 'Calls with{} positional args are deprecated.'
89 ' Please specify them as keyword arguments instead.'
90 )
91 text = ' more than {}'.format(allowed_positional) if allowed_positional else ''
92 warn_text = template.format(text)
93 if is_method:
94 allowed_positional += 1
95
96 def deprecated_args(fn):
97 @functools.wraps(fn)
98 def wraps(*args, **kwargs):
99 if len(args) > allowed_positional:
100 warnings.warn(warn_text, DeprecatedWarning, stacklevel=2)
101 return fn(*args, **kwargs)
102
103 return wraps
104
105 return deprecated_args
106
[end of falcon/util/deprecation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/falcon/util/deprecation.py b/falcon/util/deprecation.py
--- a/falcon/util/deprecation.py
+++ b/falcon/util/deprecation.py
@@ -85,11 +85,11 @@
"""
template = (
- 'Calls with{} positional args are deprecated.'
+ 'Calls to {{fn}}(...) with{arg_text} positional args are deprecated.'
' Please specify them as keyword arguments instead.'
)
text = ' more than {}'.format(allowed_positional) if allowed_positional else ''
- warn_text = template.format(text)
+ warn_text = template.format(arg_text=text)
if is_method:
allowed_positional += 1
@@ -97,7 +97,11 @@
@functools.wraps(fn)
def wraps(*args, **kwargs):
if len(args) > allowed_positional:
- warnings.warn(warn_text, DeprecatedWarning, stacklevel=2)
+ warnings.warn(
+ warn_text.format(fn=fn.__qualname__),
+ DeprecatedWarning,
+ stacklevel=2,
+ )
return fn(*args, **kwargs)
return wraps
|
{"golden_diff": "diff --git a/falcon/util/deprecation.py b/falcon/util/deprecation.py\n--- a/falcon/util/deprecation.py\n+++ b/falcon/util/deprecation.py\n@@ -85,11 +85,11 @@\n \"\"\"\n \n template = (\n- 'Calls with{} positional args are deprecated.'\n+ 'Calls to {{fn}}(...) with{arg_text} positional args are deprecated.'\n ' Please specify them as keyword arguments instead.'\n )\n text = ' more than {}'.format(allowed_positional) if allowed_positional else ''\n- warn_text = template.format(text)\n+ warn_text = template.format(arg_text=text)\n if is_method:\n allowed_positional += 1\n \n@@ -97,7 +97,11 @@\n @functools.wraps(fn)\n def wraps(*args, **kwargs):\n if len(args) > allowed_positional:\n- warnings.warn(warn_text, DeprecatedWarning, stacklevel=2)\n+ warnings.warn(\n+ warn_text.format(fn=fn.__qualname__),\n+ DeprecatedWarning,\n+ stacklevel=2,\n+ )\n return fn(*args, **kwargs)\n \n return wraps\n", "issue": "Unclear which `HTTPError` was instantiated from `deprecated_args()` warnings\nIn Falcon 3.0.x, instantiating `HTTPError` or its subclasses with positional arguments (of course except the allowed ones) generates a `DeprecatedWarning` via the `deprecated_args()` decorator.\r\n\r\nHowever, it is unclear from the warning which class/function was invoked, it just says \"calls [with more than N] positional args are deprecated\". Brought up by @laurent-chriqui (see the linked PR).\r\n\r\nIdeally, as a developer, I would like the warning to read along the lines of\r\n```\r\nDeprecatedWarning: Calls to HTTPNotFound.__init__(...) with positional args are deprecated. Please specify them as keyword arguments instead.\r\n```\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Miscellaneous deprecation utilities.\n\nThis module provides decorators to mark functions and classes as deprecated.\n\"\"\"\n\nimport functools\nimport warnings\n\n\n__all__ = (\n 'DeprecatedWarning',\n 'deprecated',\n 'deprecated_args',\n)\n\n\n# NOTE(kgriffs): We don't want our deprecations to be ignored by default,\n# so create our own type.\n#\n# TODO(kgriffs): Revisit this decision if users complain.\nclass DeprecatedWarning(UserWarning):\n pass\n\n\ndef deprecated(instructions, is_property=False, method_name=None):\n \"\"\"Flag a method as deprecated.\n\n This function returns a decorator which can be used to mark deprecated\n functions. Applying this decorator will result in a warning being\n emitted when the function is used.\n\n Args:\n instructions (str): Specific guidance for the developer, e.g.:\n 'Please migrate to add_proxy(...)'.\n is_property (bool): If the deprecated object is a property. It\n will omit the ``(...)`` from the generated documentation.\n method_name (str, optional): Set to override the name of the\n deprecated function or property in the generated\n documentation (default ``None``). 
This is useful when\n decorating an alias that carries the target's ``__name__``.\n\n \"\"\"\n\n def decorator(func):\n\n object_name = 'property' if is_property else 'function'\n post_name = '' if is_property else '(...)'\n message = 'Call to deprecated {} {}{}. {}'.format(\n object_name, method_name or func.__name__, post_name, instructions\n )\n\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n warnings.warn(message, category=DeprecatedWarning, stacklevel=2)\n\n return func(*args, **kwargs)\n\n return wrapper\n\n return decorator\n\n\ndef deprecated_args(*, allowed_positional, is_method=True):\n \"\"\"Flag a method call with positional args as deprecated.\n\n Keyword Args:\n allowed_positional (int): Number of allowed positional arguments\n is_method (bool, optional): The decorated function is a method. Will\n add one to the number of allowed positional args to account for\n ``self``. Defaults to True.\n \"\"\"\n\n template = (\n 'Calls with{} positional args are deprecated.'\n ' Please specify them as keyword arguments instead.'\n )\n text = ' more than {}'.format(allowed_positional) if allowed_positional else ''\n warn_text = template.format(text)\n if is_method:\n allowed_positional += 1\n\n def deprecated_args(fn):\n @functools.wraps(fn)\n def wraps(*args, **kwargs):\n if len(args) > allowed_positional:\n warnings.warn(warn_text, DeprecatedWarning, stacklevel=2)\n return fn(*args, **kwargs)\n\n return wraps\n\n return deprecated_args\n", "path": "falcon/util/deprecation.py"}]}
| 1,653 | 255 |
gh_patches_debug_60934
|
rasdani/github-patches
|
git_diff
|
superduper-io__superduper-1837
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG]: Variable inject for list values in a serialised component missing kwargs
Given `c = Component()`, calling `c.dict()` yields `{some keys: [ {}, { 'v': Variable_type }] }`, but any `Variable` nested inside the list never has the kwargs applied when variables are set. This is due to:
```
def _replace_variables(x, db, **kwargs):
from .document import Document
if isinstance(x, dict):
return {
_replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)
for k, v in x.items()
}
if isinstance(x, (list, tuple)):
return [_replace_variables(v, db) for v in x] -> BUG (need **kwargs here)
if isinstance(x, Variable):
return x.set(db, **kwargs)
if isinstance(x, Document):
return x.set_variables(db, **kwargs)
return x
```
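A minimal sketch of the corrected helper, assuming the same module context (`Variable`, `Document`, relative import) as the snippet above; the only behavioural change is in the list/tuple branch:
```python
def _replace_variables_fixed(x, db, **kwargs):
    # Same shape as the function quoted above; the list/tuple branch now
    # forwards **kwargs so Variables nested in lists are resolved as well.
    from .document import Document

    if isinstance(x, dict):
        return {
            _replace_variables_fixed(k, db, **kwargs): _replace_variables_fixed(v, db, **kwargs)
            for k, v in x.items()
        }
    if isinstance(x, (list, tuple)):
        return [_replace_variables_fixed(v, db, **kwargs) for v in x]  # kwargs forwarded
    if isinstance(x, Variable):
        return x.set(db, **kwargs)
    if isinstance(x, Document):
        return x.set_variables(db, **kwargs)
    return x
```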
</issue>
<code>
[start of superduperdb/base/serializable.py]
1 import dataclasses as dc
2 import importlib
3 import typing as t
4 from copy import deepcopy
5
6 from superduperdb.base.leaf import Leaf
7 from superduperdb.misc.serialization import asdict
8
9
10 def _from_dict(r: t.Any, db: None = None) -> t.Any:
11 from superduperdb.base.document import Document
12 from superduperdb.components.datatype import File, LazyArtifact
13
14 if isinstance(r, Document):
15 r = r.unpack(db, leaves_to_keep=(LazyArtifact, File))
16 if isinstance(r, (list, tuple)):
17 return [_from_dict(i, db=db) for i in r]
18 if not isinstance(r, dict):
19 return r
20 if '_content' in r:
21 r = r['_content']
22 if 'cls' in r and 'module' in r and 'dict' in r:
23 module = importlib.import_module(r['module'])
24 cls_ = getattr(module, r['cls'])
25 kwargs = _from_dict(r['dict'])
26 kwargs_init = {k: v for k, v in kwargs.items() if k not in cls_.set_post_init}
27 kwargs_post_init = {k: v for k, v in kwargs.items() if k in cls_.set_post_init}
28 instance = cls_(**kwargs_init)
29 for k, v in kwargs_post_init.items():
30 setattr(instance, k, v)
31 return instance
32 else:
33 return {k: _from_dict(v, db=db) for k, v in r.items()}
34
35
36 class VariableError(Exception):
37 ...
38
39
40 def _find_variables(r):
41 if isinstance(r, dict):
42 return sum([_find_variables(v) for v in r.values()], [])
43 elif isinstance(r, (list, tuple)):
44 return sum([_find_variables(v) for v in r], [])
45 elif isinstance(r, Variable):
46 return [r]
47 return []
48
49
50 def _replace_variables(x, db, **kwargs):
51 from .document import Document
52
53 if isinstance(x, dict):
54 return {
55 _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)
56 for k, v in x.items()
57 }
58 if isinstance(x, (list, tuple)):
59 return [_replace_variables(v, db) for v in x]
60 if isinstance(x, Variable):
61 return x.set(db, **kwargs)
62 if isinstance(x, Document):
63 return x.set_variables(db, **kwargs)
64 return x
65
66
67 @dc.dataclass
68 class Serializable(Leaf):
69 """
70 Base class for serializable objects. This class is used to serialize and
71 deserialize objects to and from JSON + Artifact instances.
72 """
73
74 set_post_init: t.ClassVar[t.Sequence] = ()
75
76 @property
77 def unique_id(self):
78 return str(hash(self.dict().encode()))
79
80 @property
81 def variables(self) -> t.List['Variable']:
82 out = {}
83 r = self.encode(leaf_types_to_keep=(Variable,))
84 v = _find_variables(r)
85 for var in v:
86 out[var.value] = var
87 return sorted(list(out.values()), key=lambda x: x.value)
88
89 def set_variables(self, db, **kwargs) -> 'Serializable':
90 """
91 Set free variables of self.
92
93 :param db:
94 """
95 r = self.encode(leaf_types_to_keep=(Variable,))
96 r = _replace_variables(r, db, **kwargs)
97 return self.decode(r)
98
99 def encode(
100 self,
101 leaf_types_to_keep: t.Sequence = (),
102 ):
103 r = dict(self.dict().encode(leaf_types_to_keep=leaf_types_to_keep))
104 r['leaf_type'] = 'serializable'
105 return {'_content': r}
106
107 @classmethod
108 def decode(cls, r, db: t.Optional[t.Any] = None):
109 return _from_dict(r, db=db)
110
111 def dict(self):
112 from superduperdb import Document
113
114 return Document(asdict(self))
115
116 def copy(self):
117 return deepcopy(self)
118
119
120 @dc.dataclass
121 class Variable(Serializable):
122 """
123 Mechanism for allowing "free variables" in a serializable object.
124 The idea is to allow a variable to be set at runtime, rather than
125 at object creation time.
126
127 :param value: The name of the variable to be set at runtime.
128 :param setter_callback: A callback function that takes the value, datalayer
129 and kwargs as input and returns the formatted
130 variable.
131 """
132
133 value: t.Any
134 setter_callback: dc.InitVar[t.Optional[t.Callable]] = None
135
136 def __post_init__(self, setter_callback):
137 self.setter_callback = setter_callback
138
139 def __repr__(self) -> str:
140 return '$' + str(self.value)
141
142 def __hash__(self) -> int:
143 return hash(self.value)
144
145 def set(self, db, **kwargs):
146 """
147 Get the intended value from the values of the global variables.
148
149 >>> Variable('number').set(db, number=1.5, other='test')
150 1.5
151
152 :param db: The datalayer instance.
153 :param kwargs: Variables to be used in the setter_callback
154 or as formatting variables.
155 """
156 if self.setter_callback is not None:
157 try:
158 return self.setter_callback(db, self.value, kwargs)
159 except Exception as e:
160 raise VariableError(
161 f'Could not set variable {self.value} '
162 f'based on {self.setter_callback} and **kwargs: {kwargs}'
163 ) from e
164 else:
165 assert isinstance(self.value, str)
166 return kwargs[self.value]
167
[end of superduperdb/base/serializable.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/superduperdb/base/serializable.py b/superduperdb/base/serializable.py
--- a/superduperdb/base/serializable.py
+++ b/superduperdb/base/serializable.py
@@ -56,7 +56,7 @@
for k, v in x.items()
}
if isinstance(x, (list, tuple)):
- return [_replace_variables(v, db) for v in x]
+ return [_replace_variables(v, db, **kwargs) for v in x]
if isinstance(x, Variable):
return x.set(db, **kwargs)
if isinstance(x, Document):
|
{"golden_diff": "diff --git a/superduperdb/base/serializable.py b/superduperdb/base/serializable.py\n--- a/superduperdb/base/serializable.py\n+++ b/superduperdb/base/serializable.py\n@@ -56,7 +56,7 @@\n for k, v in x.items()\n }\n if isinstance(x, (list, tuple)):\n- return [_replace_variables(v, db) for v in x]\n+ return [_replace_variables(v, db, **kwargs) for v in x]\n if isinstance(x, Variable):\n return x.set(db, **kwargs)\n if isinstance(x, Document):\n", "issue": "[BUG]: Variable inject for list values in a serialised component missing kwargs\nc = Component()\r\n\r\nc.dict() -> {some keys: [ {}, { 'v': Variable_type }] }\r\n\r\ndue to \r\n```\r\n\r\ndef _replace_variables(x, db, **kwargs):\r\n from .document import Document\r\n\r\n if isinstance(x, dict):\r\n return {\r\n _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)\r\n for k, v in x.items()\r\n }\r\n if isinstance(x, (list, tuple)):\r\n return [_replace_variables(v, db) for v in x] -> BUG (need **kwargs here)\r\n if isinstance(x, Variable):\r\n return x.set(db, **kwargs)\r\n if isinstance(x, Document):\r\n return x.set_variables(db, **kwargs)\r\n return x\r\n\r\n```\n", "before_files": [{"content": "import dataclasses as dc\nimport importlib\nimport typing as t\nfrom copy import deepcopy\n\nfrom superduperdb.base.leaf import Leaf\nfrom superduperdb.misc.serialization import asdict\n\n\ndef _from_dict(r: t.Any, db: None = None) -> t.Any:\n from superduperdb.base.document import Document\n from superduperdb.components.datatype import File, LazyArtifact\n\n if isinstance(r, Document):\n r = r.unpack(db, leaves_to_keep=(LazyArtifact, File))\n if isinstance(r, (list, tuple)):\n return [_from_dict(i, db=db) for i in r]\n if not isinstance(r, dict):\n return r\n if '_content' in r:\n r = r['_content']\n if 'cls' in r and 'module' in r and 'dict' in r:\n module = importlib.import_module(r['module'])\n cls_ = getattr(module, r['cls'])\n kwargs = _from_dict(r['dict'])\n kwargs_init = {k: v for k, v in kwargs.items() if k not in cls_.set_post_init}\n kwargs_post_init = {k: v for k, v in kwargs.items() if k in cls_.set_post_init}\n instance = cls_(**kwargs_init)\n for k, v in kwargs_post_init.items():\n setattr(instance, k, v)\n return instance\n else:\n return {k: _from_dict(v, db=db) for k, v in r.items()}\n\n\nclass VariableError(Exception):\n ...\n\n\ndef _find_variables(r):\n if isinstance(r, dict):\n return sum([_find_variables(v) for v in r.values()], [])\n elif isinstance(r, (list, tuple)):\n return sum([_find_variables(v) for v in r], [])\n elif isinstance(r, Variable):\n return [r]\n return []\n\n\ndef _replace_variables(x, db, **kwargs):\n from .document import Document\n\n if isinstance(x, dict):\n return {\n _replace_variables(k, db, **kwargs): _replace_variables(v, db, **kwargs)\n for k, v in x.items()\n }\n if isinstance(x, (list, tuple)):\n return [_replace_variables(v, db) for v in x]\n if isinstance(x, Variable):\n return x.set(db, **kwargs)\n if isinstance(x, Document):\n return x.set_variables(db, **kwargs)\n return x\n\n\[email protected]\nclass Serializable(Leaf):\n \"\"\"\n Base class for serializable objects. 
This class is used to serialize and\n deserialize objects to and from JSON + Artifact instances.\n \"\"\"\n\n set_post_init: t.ClassVar[t.Sequence] = ()\n\n @property\n def unique_id(self):\n return str(hash(self.dict().encode()))\n\n @property\n def variables(self) -> t.List['Variable']:\n out = {}\n r = self.encode(leaf_types_to_keep=(Variable,))\n v = _find_variables(r)\n for var in v:\n out[var.value] = var\n return sorted(list(out.values()), key=lambda x: x.value)\n\n def set_variables(self, db, **kwargs) -> 'Serializable':\n \"\"\"\n Set free variables of self.\n\n :param db:\n \"\"\"\n r = self.encode(leaf_types_to_keep=(Variable,))\n r = _replace_variables(r, db, **kwargs)\n return self.decode(r)\n\n def encode(\n self,\n leaf_types_to_keep: t.Sequence = (),\n ):\n r = dict(self.dict().encode(leaf_types_to_keep=leaf_types_to_keep))\n r['leaf_type'] = 'serializable'\n return {'_content': r}\n\n @classmethod\n def decode(cls, r, db: t.Optional[t.Any] = None):\n return _from_dict(r, db=db)\n\n def dict(self):\n from superduperdb import Document\n\n return Document(asdict(self))\n\n def copy(self):\n return deepcopy(self)\n\n\[email protected]\nclass Variable(Serializable):\n \"\"\"\n Mechanism for allowing \"free variables\" in a serializable object.\n The idea is to allow a variable to be set at runtime, rather than\n at object creation time.\n\n :param value: The name of the variable to be set at runtime.\n :param setter_callback: A callback function that takes the value, datalayer\n and kwargs as input and returns the formatted\n variable.\n \"\"\"\n\n value: t.Any\n setter_callback: dc.InitVar[t.Optional[t.Callable]] = None\n\n def __post_init__(self, setter_callback):\n self.setter_callback = setter_callback\n\n def __repr__(self) -> str:\n return '$' + str(self.value)\n\n def __hash__(self) -> int:\n return hash(self.value)\n\n def set(self, db, **kwargs):\n \"\"\"\n Get the intended value from the values of the global variables.\n\n >>> Variable('number').set(db, number=1.5, other='test')\n 1.5\n\n :param db: The datalayer instance.\n :param kwargs: Variables to be used in the setter_callback\n or as formatting variables.\n \"\"\"\n if self.setter_callback is not None:\n try:\n return self.setter_callback(db, self.value, kwargs)\n except Exception as e:\n raise VariableError(\n f'Could not set variable {self.value} '\n f'based on {self.setter_callback} and **kwargs: {kwargs}'\n ) from e\n else:\n assert isinstance(self.value, str)\n return kwargs[self.value]\n", "path": "superduperdb/base/serializable.py"}]}
| 2,330 | 141 |
gh_patches_debug_26370
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-3973
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3: Add Cover Image hangs
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
### Evidence / Screenshot (if possible)
* http://localhost:8080/works/OL6037022W/Remix?debug=true on Docker Python 3
* Or http://staging.openlibrary.org/works/OL6037022W/Remix?debug=true
Command to launch Open Library on Docker on Python 3:
```
docker-compose down ; \
PYENV_VERSION=3.9.0 docker-compose -f docker-compose.yml -f docker-compose.infogami-local.yml up -d ; \
docker-compose logs -f --tail=10 web
```
### Relevant url?
<!-- `https://openlibrary.org/...` -->
1. http://localhost:8080/works/OL6037022W/Remix?debug=true
2. On the image, click Add Cover Image
3. Browse and select an appropriate local image file and click Submit
4. Internal Server Error
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to ...
2. Do ...
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual:
* Expected:
### Details
- **Logged in (Y/N)?**
- **Browser type/version?**
- **Operating system?**
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
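A hedged sketch of one likely contributor: on Python 3, `urllib.request.urlopen()` rejects a `str` body, and the `upload()` method shown in the code section below passes `urllib.parse.urlencode(params)` without encoding it. The endpoint and parameter values here are placeholders, and this may not be the project's eventual fix:
```python
import urllib.parse
import urllib.request

upload_url = "http://localhost:8081/b/upload2"  # placeholder coverstore endpoint
params = {"olid": "OL6037022M", "source_url": "", "ip": "127.0.0.1"}  # illustrative values only

# urlencode() returns str; on Python 3, urlopen() requires a bytes body,
# so the payload must be encoded before the request is made.
payload = urllib.parse.urlencode(params).encode("utf-8")
# response = urllib.request.urlopen(upload_url, payload)  # network call left commented out
```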
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
</issue>
<code>
[start of openlibrary/plugins/upstream/covers.py]
1 """Handle book cover/author photo upload.
2 """
3 import web
4 import simplejson
5
6 from infogami.utils import delegate
7 from infogami.utils.view import safeint
8 from openlibrary import accounts
9 from openlibrary.plugins.upstream.models import Image
10 from openlibrary.plugins.upstream.utils import get_coverstore_url, render_template
11
12 from six.moves import urllib
13
14
15 def setup():
16 pass
17
18 class add_cover(delegate.page):
19 path = "(/books/OL\d+M)/add-cover"
20 cover_category = "b"
21
22 def GET(self, key):
23 book = web.ctx.site.get(key)
24 return render_template('covers/add', book)
25
26 def POST(self, key):
27 book = web.ctx.site.get(key)
28 if not book:
29 raise web.notfound("")
30
31 i = web.input(file={}, url="")
32
33 # remove references to field storage objects
34 web.ctx.pop("_fieldstorage", None)
35
36 data = self.upload(key, i)
37 coverid = data.get('id')
38
39 if coverid:
40 self.save(book, coverid, url=i.url)
41 cover = Image(web.ctx.site, "b", coverid)
42 return render_template("covers/saved", cover)
43 else:
44 return render_template("covers/add", book, {'url': i.url}, data)
45
46 def upload(self, key, i):
47 """Uploads a cover to coverstore and returns the response."""
48 olid = key.split("/")[-1]
49
50 if i.file is not None and hasattr(i.file, 'value'):
51 data = i.file.value
52 else:
53 data = None
54
55 if i.url and i.url.strip() == "http://":
56 i.url = ""
57
58 user = accounts.get_current_user()
59 params = {
60 "author": user and user.key,
61 "data": data,
62 "source_url": i.url,
63 "olid": olid,
64 "ip": web.ctx.ip
65 }
66
67 upload_url = '%s/%s/upload2' % (
68 get_coverstore_url(), self.cover_category)
69
70 if upload_url.startswith("//"):
71 upload_url = "http:" + upload_url
72
73 try:
74 response = urllib.request.urlopen(upload_url, urllib.parse.urlencode(params))
75 out = response.read()
76 except urllib.error.HTTPError as e:
77 out = {'error': e.read()}
78
79 return web.storage(simplejson.loads(out))
80
81 def save(self, book, coverid, url=None):
82 book.covers = [coverid] + [cover.id for cover in book.get_covers()]
83 book._save("Added new cover", action="add-cover", data={"url": url})
84
85 class add_work_cover(add_cover):
86 path = "(/works/OL\d+W)/add-cover"
87 cover_category = "w"
88
89 def upload(self, key, i):
90 if "coverid" in i and safeint(i.coverid):
91 return web.storage(id=int(i.coverid))
92 else:
93 return add_cover.upload(self, key, i)
94
95 class add_photo(add_cover):
96 path = "(/authors/OL\d+A)/add-photo"
97 cover_category = "a"
98
99 def save(self, author, photoid, url=None):
100 author.photos = [photoid] + [photo.id for photo in author.get_photos()]
101 author._save("Added new photo", action="add-photo", data={"url": url})
102
103 class manage_covers(delegate.page):
104 path = "(/books/OL\d+M)/manage-covers"
105 def GET(self, key):
106 book = web.ctx.site.get(key)
107 if not book:
108 raise web.notfound()
109 return render_template("covers/manage", key, self.get_images(book))
110
111 def get_images(self, book):
112 return book.get_covers()
113
114 def get_image(self, book):
115 return book.get_cover()
116
117 def save_images(self, book, covers):
118 book.covers = covers
119 book._save('Update covers')
120
121 def POST(self, key):
122 book = web.ctx.site.get(key)
123 if not book:
124 raise web.notfound()
125
126 images = web.input(image=[]).image
127 if '-' in images:
128 images = [int(id) for id in images[:images.index('-')]]
129 self.save_images(book, images)
130 return render_template("covers/saved", self.get_image(book), showinfo=False)
131 else:
132 # ERROR
133 pass
134
135 class manage_work_covers(manage_covers):
136 path = "(/works/OL\d+W)/manage-covers"
137
138
139 class manage_photos(manage_covers):
140 path = "(/authors/OL\d+A)/manage-photos"
141
142 def get_images(self, author):
143 return author.get_photos()
144
145 def get_image(self, author):
146 return author.get_photo()
147
148 def save_images(self, author, photos):
149 author.photos = photos
150 author._save('Update photos')
151
[end of openlibrary/plugins/upstream/covers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openlibrary/plugins/upstream/covers.py b/openlibrary/plugins/upstream/covers.py
--- a/openlibrary/plugins/upstream/covers.py
+++ b/openlibrary/plugins/upstream/covers.py
@@ -1,7 +1,7 @@
"""Handle book cover/author photo upload.
"""
+import requests
import web
-import simplejson
from infogami.utils import delegate
from infogami.utils.view import safeint
@@ -9,8 +9,6 @@
from openlibrary.plugins.upstream.models import Image
from openlibrary.plugins.upstream.utils import get_coverstore_url, render_template
-from six.moves import urllib
-
def setup():
pass
@@ -71,12 +69,10 @@
upload_url = "http:" + upload_url
try:
- response = urllib.request.urlopen(upload_url, urllib.parse.urlencode(params))
- out = response.read()
- except urllib.error.HTTPError as e:
- out = {'error': e.read()}
-
- return web.storage(simplejson.loads(out))
+ payload = requests.compat.urlencode(params).encode('utf-8')
+ return web.storage(requests.post(upload_url, data=payload).json())
+ except requests.HTTPError as e:
+ return web.storage({'error': e.read()})
def save(self, book, coverid, url=None):
book.covers = [coverid] + [cover.id for cover in book.get_covers()]
|
{"golden_diff": "diff --git a/openlibrary/plugins/upstream/covers.py b/openlibrary/plugins/upstream/covers.py\n--- a/openlibrary/plugins/upstream/covers.py\n+++ b/openlibrary/plugins/upstream/covers.py\n@@ -1,7 +1,7 @@\n \"\"\"Handle book cover/author photo upload.\n \"\"\"\n+import requests\n import web\n-import simplejson\n \n from infogami.utils import delegate\n from infogami.utils.view import safeint\n@@ -9,8 +9,6 @@\n from openlibrary.plugins.upstream.models import Image\n from openlibrary.plugins.upstream.utils import get_coverstore_url, render_template\n \n-from six.moves import urllib\n-\n \n def setup():\n pass\n@@ -71,12 +69,10 @@\n upload_url = \"http:\" + upload_url\n \n try:\n- response = urllib.request.urlopen(upload_url, urllib.parse.urlencode(params))\n- out = response.read()\n- except urllib.error.HTTPError as e:\n- out = {'error': e.read()}\n-\n- return web.storage(simplejson.loads(out))\n+ payload = requests.compat.urlencode(params).encode('utf-8')\n+ return web.storage(requests.post(upload_url, data=payload).json())\n+ except requests.HTTPError as e:\n+ return web.storage({'error': e.read()})\n \n def save(self, book, coverid, url=None):\n book.covers = [coverid] + [cover.id for cover in book.get_covers()]\n", "issue": "Python 3: Add Cover Image hangs\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\n\r\n### Evidence / Screenshot (if possible)\r\nhttp://localhost:8080/works/OL6037022W/Remix?debug=true on Docker Python 3\r\n* Or http://staging.openlibrary.org/works/OL6037022W/Remix?debug=true\r\n\r\nCommand to launch Open Library on Docker on Python 3:\r\n```\r\ndocker-compose down ; \\\r\n PYENV_VERSION=3.9.0 docker-compose -f docker-compose.yml -f docker-compose.infogami-local.yml up -d ; \\\r\n docker-compose logs -f --tail=10 web\r\n```\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n1. http://localhost:8080/works/OL6037022W/Remix?debug=true\r\n2. On the image, click Add Cover Image\r\n3. Browse and select an appropriate local image file and click Submit\r\n4. Internal Server Error\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to ...\r\n2. Do ...\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: \r\n* Expected: \r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?** \r\n- **Browser type/version?** \r\n- **Operating system?** \r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\n", "before_files": [{"content": "\"\"\"Handle book cover/author photo upload.\n\"\"\"\nimport web\nimport simplejson\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary import accounts\nfrom openlibrary.plugins.upstream.models import Image\nfrom openlibrary.plugins.upstream.utils import get_coverstore_url, render_template\n\nfrom six.moves import urllib\n\n\ndef setup():\n pass\n\nclass add_cover(delegate.page):\n path = \"(/books/OL\\d+M)/add-cover\"\n cover_category = \"b\"\n\n def GET(self, key):\n book = web.ctx.site.get(key)\n return render_template('covers/add', book)\n\n def POST(self, key):\n book = web.ctx.site.get(key)\n if not book:\n raise web.notfound(\"\")\n\n i = web.input(file={}, url=\"\")\n\n # remove references to field storage objects\n web.ctx.pop(\"_fieldstorage\", None)\n\n data = self.upload(key, i)\n coverid = data.get('id')\n\n if coverid:\n self.save(book, coverid, url=i.url)\n cover = Image(web.ctx.site, \"b\", coverid)\n return render_template(\"covers/saved\", cover)\n else:\n return render_template(\"covers/add\", book, {'url': i.url}, data)\n\n def upload(self, key, i):\n \"\"\"Uploads a cover to coverstore and returns the response.\"\"\"\n olid = key.split(\"/\")[-1]\n\n if i.file is not None and hasattr(i.file, 'value'):\n data = i.file.value\n else:\n data = None\n\n if i.url and i.url.strip() == \"http://\":\n i.url = \"\"\n\n user = accounts.get_current_user()\n params = {\n \"author\": user and user.key,\n \"data\": data,\n \"source_url\": i.url,\n \"olid\": olid,\n \"ip\": web.ctx.ip\n }\n\n upload_url = '%s/%s/upload2' % (\n get_coverstore_url(), self.cover_category)\n\n if upload_url.startswith(\"//\"):\n upload_url = \"http:\" + upload_url\n\n try:\n response = urllib.request.urlopen(upload_url, urllib.parse.urlencode(params))\n out = response.read()\n except urllib.error.HTTPError as e:\n out = {'error': e.read()}\n\n return web.storage(simplejson.loads(out))\n\n def save(self, book, coverid, url=None):\n book.covers = [coverid] + [cover.id for cover in book.get_covers()]\n book._save(\"Added new cover\", action=\"add-cover\", data={\"url\": url})\n\nclass add_work_cover(add_cover):\n path = \"(/works/OL\\d+W)/add-cover\"\n cover_category = \"w\"\n\n def upload(self, key, i):\n if \"coverid\" in i and safeint(i.coverid):\n return web.storage(id=int(i.coverid))\n else:\n return add_cover.upload(self, key, i)\n\nclass add_photo(add_cover):\n path = \"(/authors/OL\\d+A)/add-photo\"\n cover_category = \"a\"\n\n def save(self, author, photoid, url=None):\n author.photos = [photoid] + [photo.id for photo in author.get_photos()]\n author._save(\"Added new photo\", action=\"add-photo\", data={\"url\": url})\n\nclass manage_covers(delegate.page):\n path = \"(/books/OL\\d+M)/manage-covers\"\n def GET(self, key):\n book = web.ctx.site.get(key)\n if not book:\n raise web.notfound()\n return render_template(\"covers/manage\", key, self.get_images(book))\n\n def get_images(self, book):\n return book.get_covers()\n\n def get_image(self, book):\n return book.get_cover()\n\n def save_images(self, book, covers):\n book.covers = covers\n book._save('Update covers')\n\n def POST(self, key):\n book = web.ctx.site.get(key)\n if not book:\n raise web.notfound()\n\n images = web.input(image=[]).image\n if '-' in images:\n images = [int(id) for id in images[:images.index('-')]]\n self.save_images(book, images)\n return render_template(\"covers/saved\", 
self.get_image(book), showinfo=False)\n else:\n # ERROR\n pass\n\nclass manage_work_covers(manage_covers):\n path = \"(/works/OL\\d+W)/manage-covers\"\n\n\nclass manage_photos(manage_covers):\n path = \"(/authors/OL\\d+A)/manage-photos\"\n\n def get_images(self, author):\n return author.get_photos()\n\n def get_image(self, author):\n return author.get_photo()\n\n def save_images(self, author, photos):\n author.photos = photos\n author._save('Update photos')\n", "path": "openlibrary/plugins/upstream/covers.py"}]}
| 2,358 | 314 |
gh_patches_debug_4333
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1416
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pytuple can occasionally return fewer than expected elements
* Faker version: 6.1.1
* OS: Mac OSX 10.15.5
Sporadically, using pytuple may result in the tuple having fewer than the requested number of elements (`nb_elements`), even when `variable_nb_elements` is set to False.
This happens because pytuple relies on pyset, returning `tuple(self.pyset(...))`. Because it delegates to a set rather than a list, any duplicate numbers generated cause the set, and therefore the tuple, to end up with fewer elements than expected.
Suggested fix: use `pylist` instead of `pyset`.
### Steps to reproduce
1. Specify `nb_elements = 3` in a call to pytuple, with `variable_nb_elements = False`
2. Repeat until the tuple 'randomly' contains fewer than 3 elements
```python
import faker
fake = faker.Faker()
for x in range(10000):
    random_tuple = fake.pytuple(nb_elements=3, variable_nb_elements=False, value_types=[int])
    assert len(random_tuple) == 3, f"Tuple {random_tuple} not len 3 at iteration {x}"
```
### Expected behavior
When calling pytuple with `nb_elements = 3` and `variable_nb_elements = False` the tuple should always contain 3 elements, even if there are duplicate values.
### Actual behavior
Sporadically, the tuple contains fewer than 3 elements.
</issue>
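A minimal sketch of the set-deduplication effect described above, with hand-picked values standing in for the randomly generated ones; it is not faker code. A set silently collapses duplicates, so a tuple built from a set can end up shorter than the number of values drawn, whereas keeping the values in a list preserves every element.
```python
# Illustrative only: fixed values standing in for randomly drawn ints.
values = [7, 7, 3]

print(tuple(set(values)))   # e.g. (3, 7) -- the duplicate collapses, only 2 elements
print(tuple(values))        # (7, 7, 3)  -- building straight from the list keeps all 3
```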
<code>
[start of faker/providers/python/__init__.py]
1 import string
2 import sys
3 import warnings
4
5 from decimal import Decimal
6
7 from .. import BaseProvider
8
9
10 class Provider(BaseProvider):
11 default_value_types = (
12 'str', 'str', 'str', 'str', 'float', 'int', 'int', 'decimal',
13 'date_time', 'uri', 'email',
14 )
15
16 def _check_signature(self, value_types, allowed_types):
17 if value_types is not None and not isinstance(value_types, (list, tuple)):
18 value_types = [value_types]
19 warnings.warn(
20 'Passing value types as positional arguments is going to be '
21 'deprecated. Pass them as a list or tuple instead.',
22 PendingDeprecationWarning,
23 )
24 if value_types is None:
25 value_types = ()
26 return tuple(value_types) + allowed_types
27
28 def pybool(self):
29 return self.random_int(0, 1) == 1
30
31 def pystr(self, min_chars=None, max_chars=20):
32 """
33 Generates a random string of upper and lowercase letters.
34 :type min_chars: int
35 :type max_chars: int
36 :return: String. Random of random length between min and max characters.
37 """
38 if min_chars is None:
39 return "".join(self.random_letters(length=max_chars))
40 else:
41 assert (
42 max_chars >= min_chars), "Maximum length must be greater than or equal to minimum length"
43 return "".join(
44 self.random_letters(
45 length=self.generator.random.randint(min_chars, max_chars),
46 ),
47 )
48
49 def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):
50 return self.bothify(self.generator.parse(string_format), letters=letters)
51
52 def pyfloat(self, left_digits=None, right_digits=None, positive=False,
53 min_value=None, max_value=None):
54 if left_digits is not None and left_digits < 0:
55 raise ValueError(
56 'A float number cannot have less than 0 digits in its '
57 'integer part')
58 if right_digits is not None and right_digits < 0:
59 raise ValueError(
60 'A float number cannot have less than 0 digits in its '
61 'fractional part')
62 if left_digits == 0 and right_digits == 0:
63 raise ValueError(
64 'A float number cannot have less than 0 digits in total')
65 if None not in (min_value, max_value) and min_value > max_value:
66 raise ValueError('Min value cannot be greater than max value')
67 if None not in (min_value, max_value) and min_value == max_value:
68 raise ValueError('Min and max value cannot be the same')
69 if positive and min_value is not None and min_value <= 0:
70 raise ValueError(
71 'Cannot combine positive=True with negative or zero min_value')
72
73 left_digits = left_digits if left_digits is not None else (
74 self.random_int(1, sys.float_info.dig))
75 right_digits = right_digits if right_digits is not None else (
76 self.random_int(0, sys.float_info.dig - left_digits))
77 sign = ''
78 if (min_value is not None) or (max_value is not None):
79 if max_value is not None and max_value < 0:
80 max_value += 1 # as the random_int will be generated up to max_value - 1
81 if min_value is not None and min_value < 0:
82 min_value += 1 # as we then append digits after the left_number
83 left_number = self._safe_random_int(
84 min_value, max_value, positive,
85 )
86 else:
87 sign = '+' if positive else self.random_element(('+', '-'))
88 left_number = self.random_number(left_digits)
89
90 result = float(f'{sign}{left_number}.{self.random_number(right_digits)}')
91 if positive and result == 0:
92 if right_digits:
93 result = float('0.' + '0' * (right_digits - 1) + '1')
94 else:
95 result += sys.float_info.epsilon
96 return result
97
98 def _safe_random_int(self, min_value, max_value, positive):
99 orig_min_value = min_value
100 orig_max_value = max_value
101
102 if min_value is None:
103 min_value = max_value - self.random_int()
104 if max_value is None:
105 max_value = min_value + self.random_int()
106 if positive:
107 min_value = max(min_value, 0)
108
109 if min_value == max_value:
110 return self._safe_random_int(orig_min_value, orig_max_value, positive)
111 else:
112 return self.random_int(min_value, max_value - 1)
113
114 def pyint(self, min_value=0, max_value=9999, step=1):
115 return self.generator.random_int(min_value, max_value, step=step)
116
117 def pydecimal(self, left_digits=None, right_digits=None, positive=False,
118 min_value=None, max_value=None):
119
120 float_ = self.pyfloat(
121 left_digits, right_digits, positive, min_value, max_value)
122 return Decimal(str(float_))
123
124 def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
125 return tuple(
126 self.pyset(
127 nb_elements,
128 variable_nb_elements,
129 value_types,
130 *allowed_types))
131
132 def pyset(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
133 return set(
134 self._pyiterable(
135 nb_elements,
136 variable_nb_elements,
137 value_types,
138 *allowed_types))
139
140 def pylist(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
141 return list(
142 self._pyiterable(
143 nb_elements,
144 variable_nb_elements,
145 value_types,
146 *allowed_types))
147
148 def pyiterable(
149 self,
150 nb_elements=10,
151 variable_nb_elements=True,
152 value_types=None,
153 *allowed_types):
154 value_types = self._check_signature(value_types, allowed_types)
155 return self.random_element([self.pylist, self.pytuple, self.pyset])(
156 nb_elements, variable_nb_elements, value_types, *allowed_types)
157
158 def _random_type(self, type_list):
159 value_type = self.random_element(type_list)
160
161 method_name = f'py{value_type}'
162 if hasattr(self, method_name):
163 value_type = method_name
164
165 return self.generator.format(value_type)
166
167 def _pyiterable(
168 self,
169 nb_elements=10,
170 variable_nb_elements=True,
171 value_types=None,
172 *allowed_types):
173
174 value_types = self._check_signature(value_types, allowed_types)
175
176 value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()
177 for t in value_types
178 # avoid recursion
179 if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]
180 if not value_types:
181 value_types = self.default_value_types
182
183 if variable_nb_elements:
184 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
185
186 for _ in range(nb_elements):
187 yield self._random_type(value_types)
188
189 def pydict(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
190 """
191 Returns a dictionary.
192
193 :nb_elements: number of elements for dictionary
194 :variable_nb_elements: is use variable number of elements for dictionary
195 :value_types: type of dictionary values
196 """
197 if variable_nb_elements:
198 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
199
200 return dict(zip(
201 self.generator.words(nb_elements, unique=True),
202 self._pyiterable(nb_elements, False, value_types, *allowed_types),
203 ))
204
205 def pystruct(self, count=10, value_types=None, *allowed_types):
206 value_types = self._check_signature(value_types, allowed_types)
207
208 value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()
209 for t in value_types
210 # avoid recursion
211 if t != 'struct']
212 if not value_types:
213 value_types = self.default_value_types
214
215 types = []
216 d = {}
217 nd = {}
218 for i in range(count):
219 d[self.generator.word()] = self._random_type(value_types)
220 types.append(self._random_type(value_types))
221 nd[self.generator.word()] = {i: self._random_type(value_types),
222 i + 1: [self._random_type(value_types),
223 self._random_type(value_types),
224 self._random_type(value_types)],
225 i + 2: {i: self._random_type(value_types),
226 i + 1: self._random_type(value_types),
227 i + 2: [self._random_type(value_types),
228 self._random_type(value_types)]}}
229 return types, d, nd
230
[end of faker/providers/python/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py
--- a/faker/providers/python/__init__.py
+++ b/faker/providers/python/__init__.py
@@ -123,7 +123,7 @@
def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):
return tuple(
- self.pyset(
+ self._pyiterable(
nb_elements,
variable_nb_elements,
value_types,
|
{"golden_diff": "diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py\n--- a/faker/providers/python/__init__.py\n+++ b/faker/providers/python/__init__.py\n@@ -123,7 +123,7 @@\n \n def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return tuple(\n- self.pyset(\n+ self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n", "issue": "pytuple can occassionally return fewer than expected elements\n* Faker version: 6.1.1\r\n* OS: Mac OSX 10.15.5\r\n\r\nSporadically using pytuple may result in the tuple having less than the requested elements (`nb_elements`) even when `variable_nb_elements` is set to False.\r\n\r\nThis happens because pytuple relies on pyset, returning `tuple(self.pyset(...))`. Because it delegates to a set rather than a list, any duplicate numbers generated will result in the set, and following that the tuple, having fewer numbers than expected.\r\n\r\nSuggest that the appropriate fix might be to use `pylist` instead of `pyset`\r\n\r\n### Steps to reproduce\r\n\r\n1. Specify `nb_elements = 3` in a call to pytuple, `variable_nb_elements= False`\r\n2. Repeat until the tuple 'randomly' contains less than 3 elements \r\n\r\n```python\r\nimport faker\r\n\r\nfake = faker.Faker()\r\n\r\nfor x in range(10000):\r\n random_tuple = fake.pytuple(nb_elements=3, variable_nb_elements=False, value_types=[int])\r\n assert len(random_tuple) == 3, f\"Tuple {random_tuple} not len 3 at iteration {x}\"\r\n```\r\n\r\n### Expected behavior\r\n\r\nWhen calling pytuple with `nb_elements = 3` and `variable_nb_elements = False` the tuple should always contain 3 elements, even if there are duplicate values.\r\n\r\n### Actual behavior\r\n\r\nSporadically the tuple contains less than 3 elements.\r\n\n", "before_files": [{"content": "import string\nimport sys\nimport warnings\n\nfrom decimal import Decimal\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n default_value_types = (\n 'str', 'str', 'str', 'str', 'float', 'int', 'int', 'decimal',\n 'date_time', 'uri', 'email',\n )\n\n def _check_signature(self, value_types, allowed_types):\n if value_types is not None and not isinstance(value_types, (list, tuple)):\n value_types = [value_types]\n warnings.warn(\n 'Passing value types as positional arguments is going to be '\n 'deprecated. Pass them as a list or tuple instead.',\n PendingDeprecationWarning,\n )\n if value_types is None:\n value_types = ()\n return tuple(value_types) + allowed_types\n\n def pybool(self):\n return self.random_int(0, 1) == 1\n\n def pystr(self, min_chars=None, max_chars=20):\n \"\"\"\n Generates a random string of upper and lowercase letters.\n :type min_chars: int\n :type max_chars: int\n :return: String. 
Random of random length between min and max characters.\n \"\"\"\n if min_chars is None:\n return \"\".join(self.random_letters(length=max_chars))\n else:\n assert (\n max_chars >= min_chars), \"Maximum length must be greater than or equal to minimum length\"\n return \"\".join(\n self.random_letters(\n length=self.generator.random.randint(min_chars, max_chars),\n ),\n )\n\n def pystr_format(self, string_format='?#-###{{random_int}}{{random_letter}}', letters=string.ascii_letters):\n return self.bothify(self.generator.parse(string_format), letters=letters)\n\n def pyfloat(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n if left_digits is not None and left_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'integer part')\n if right_digits is not None and right_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'fractional part')\n if left_digits == 0 and right_digits == 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in total')\n if None not in (min_value, max_value) and min_value > max_value:\n raise ValueError('Min value cannot be greater than max value')\n if None not in (min_value, max_value) and min_value == max_value:\n raise ValueError('Min and max value cannot be the same')\n if positive and min_value is not None and min_value <= 0:\n raise ValueError(\n 'Cannot combine positive=True with negative or zero min_value')\n\n left_digits = left_digits if left_digits is not None else (\n self.random_int(1, sys.float_info.dig))\n right_digits = right_digits if right_digits is not None else (\n self.random_int(0, sys.float_info.dig - left_digits))\n sign = ''\n if (min_value is not None) or (max_value is not None):\n if max_value is not None and max_value < 0:\n max_value += 1 # as the random_int will be generated up to max_value - 1\n if min_value is not None and min_value < 0:\n min_value += 1 # as we then append digits after the left_number\n left_number = self._safe_random_int(\n min_value, max_value, positive,\n )\n else:\n sign = '+' if positive else self.random_element(('+', '-'))\n left_number = self.random_number(left_digits)\n\n result = float(f'{sign}{left_number}.{self.random_number(right_digits)}')\n if positive and result == 0:\n if right_digits:\n result = float('0.' 
+ '0' * (right_digits - 1) + '1')\n else:\n result += sys.float_info.epsilon\n return result\n\n def _safe_random_int(self, min_value, max_value, positive):\n orig_min_value = min_value\n orig_max_value = max_value\n\n if min_value is None:\n min_value = max_value - self.random_int()\n if max_value is None:\n max_value = min_value + self.random_int()\n if positive:\n min_value = max(min_value, 0)\n\n if min_value == max_value:\n return self._safe_random_int(orig_min_value, orig_max_value, positive)\n else:\n return self.random_int(min_value, max_value - 1)\n\n def pyint(self, min_value=0, max_value=9999, step=1):\n return self.generator.random_int(min_value, max_value, step=step)\n\n def pydecimal(self, left_digits=None, right_digits=None, positive=False,\n min_value=None, max_value=None):\n\n float_ = self.pyfloat(\n left_digits, right_digits, positive, min_value, max_value)\n return Decimal(str(float_))\n\n def pytuple(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return tuple(\n self.pyset(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pyset(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return set(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pylist(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n return list(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n value_types,\n *allowed_types))\n\n def pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n value_types=None,\n *allowed_types):\n value_types = self._check_signature(value_types, allowed_types)\n return self.random_element([self.pylist, self.pytuple, self.pyset])(\n nb_elements, variable_nb_elements, value_types, *allowed_types)\n\n def _random_type(self, type_list):\n value_type = self.random_element(type_list)\n\n method_name = f'py{value_type}'\n if hasattr(self, method_name):\n value_type = method_name\n\n return self.generator.format(value_type)\n\n def _pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n value_types=None,\n *allowed_types):\n\n value_types = self._check_signature(value_types, allowed_types)\n\n value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]\n if not value_types:\n value_types = self.default_value_types\n\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n for _ in range(nb_elements):\n yield self._random_type(value_types)\n\n def pydict(self, nb_elements=10, variable_nb_elements=True, value_types=None, *allowed_types):\n \"\"\"\n Returns a dictionary.\n\n :nb_elements: number of elements for dictionary\n :variable_nb_elements: is use variable number of elements for dictionary\n :value_types: type of dictionary values\n \"\"\"\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n return dict(zip(\n self.generator.words(nb_elements, unique=True),\n self._pyiterable(nb_elements, False, value_types, *allowed_types),\n ))\n\n def pystruct(self, count=10, value_types=None, *allowed_types):\n value_types = self._check_signature(value_types, allowed_types)\n\n value_types = [t if isinstance(t, str) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t != 'struct']\n if not value_types:\n 
value_types = self.default_value_types\n\n types = []\n d = {}\n nd = {}\n for i in range(count):\n d[self.generator.word()] = self._random_type(value_types)\n types.append(self._random_type(value_types))\n nd[self.generator.word()] = {i: self._random_type(value_types),\n i + 1: [self._random_type(value_types),\n self._random_type(value_types),\n self._random_type(value_types)],\n i + 2: {i: self._random_type(value_types),\n i + 1: self._random_type(value_types),\n i + 2: [self._random_type(value_types),\n self._random_type(value_types)]}}\n return types, d, nd\n", "path": "faker/providers/python/__init__.py"}]}
| 3,391 | 115 |
gh_patches_debug_20226
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-6468
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Auto Email Report CSV Format Broken
When downloading (or sending) an auto email report with format "CSV", the following error occurs:
```
Traceback (most recent call last):
File "/home/frappe/frappe-bench/apps/frappe/frappe/app.py", line 66, in application
response = frappe.api.handle()
File "/home/frappe/frappe-bench/apps/frappe/frappe/api.py", line 56, in handle
return frappe.handler.handle()
File "/home/frappe/frappe-bench/apps/frappe/frappe/handler.py", line 21, in handle
data = execute_cmd(cmd)
File "/home/frappe/frappe-bench/apps/frappe/frappe/handler.py", line 56, in execute_cmd
return frappe.call(method, **frappe.form_dict)
File "/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py", line 1007, in call
return fn(*args, **newargs)
File "/home/frappe/frappe-bench/apps/frappe/frappe/email/doctype/auto_email_report/auto_email_report.py", line 153, in download
data = auto_email_report.get_report_content()
File "/home/frappe/frappe-bench/apps/frappe/frappe/email/doctype/auto_email_report/auto_email_report.py", line 61, in get_report_content
filters = self.filters, as_dict=True)
File "/home/frappe/frappe-bench/apps/frappe/frappe/core/doctype/report/report.py", line 152, in get_data
order_by = _format(self.ref_doctype, 'modified') + ' desc'
TypeError: _format() takes exactly 1 argument (2 given)
```
We're using the latest frappe 11.0.3-beta.21.
</issue>
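A standalone reduction of the TypeError in the traceback above; it is not frappe code, it only mirrors the helper's signature. `_format` takes a single sequence and unpacks it, so calling it with the doctype and fieldname as two separate positional arguments fails, while passing them as one list works.
```python
# Illustrative only: mirrors the helper's signature, not the real frappe module.
def _format(parts):
    return '`tab{0}`.`{1}`'.format(*parts)

print(_format(['Report', 'modified']))   # `tabReport`.`modified`
# _format('Report', 'modified')          # TypeError: expects one positional argument, two given
```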
<code>
[start of frappe/core/doctype/report/report.py]
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3
4 from __future__ import unicode_literals
5 import frappe
6 import json
7 from frappe import _
8 import frappe.desk.query_report
9 from frappe.utils import cint
10 from frappe.model.document import Document
11 from frappe.modules.export_file import export_to_files
12 from frappe.modules import make_boilerplate
13 from frappe.core.doctype.page.page import delete_custom_role
14 from frappe.core.doctype.custom_role.custom_role import get_custom_allowed_roles
15 from six import iteritems
16
17
18 class Report(Document):
19 def validate(self):
20 """only administrator can save standard report"""
21 if not self.module:
22 self.module = frappe.db.get_value("DocType", self.ref_doctype, "module")
23
24 if not self.is_standard:
25 self.is_standard = "No"
26 if frappe.session.user=="Administrator" and getattr(frappe.local.conf, 'developer_mode',0)==1:
27 self.is_standard = "Yes"
28
29 if self.is_standard == "No" and frappe.db.get_value("Report", self.name, "is_standard") == "Yes":
30 frappe.throw(_("Cannot edit a standard report. Please duplicate and create a new report"))
31
32 if self.is_standard == "Yes" and frappe.session.user!="Administrator":
33 frappe.throw(_("Only Administrator can save a standard report. Please rename and save."))
34
35 if self.report_type in ("Query Report", "Script Report") \
36 and frappe.session.user!="Administrator":
37 frappe.throw(_("Only Administrator allowed to create Query / Script Reports"))
38
39 if self.report_type == "Report Builder":
40 self.update_report_json()
41
42 def before_insert(self):
43 self.set_doctype_roles()
44
45 def on_update(self):
46 self.export_doc()
47
48 def on_trash(self):
49 delete_custom_role('report', self.name)
50
51 def set_doctype_roles(self):
52 if not self.get('roles') and self.is_standard == 'No':
53 meta = frappe.get_meta(self.ref_doctype)
54 roles = [{'role': d.role} for d in meta.permissions if d.permlevel==0]
55 self.set('roles', roles)
56
57 def is_permitted(self):
58 """Returns true if Has Role is not set or the user is allowed."""
59 from frappe.utils import has_common
60
61 allowed = [d.role for d in frappe.get_all("Has Role", fields=["role"],
62 filters={"parent": self.name})]
63
64 custom_roles = get_custom_allowed_roles('report', self.name)
65 allowed.extend(custom_roles)
66
67 if not allowed:
68 return True
69
70 roles = frappe.get_roles()
71
72 if has_common(roles, allowed):
73 return True
74
75 def update_report_json(self):
76 if not self.json:
77 self.json = '{}'
78
79 if self.json:
80 data = json.loads(self.json)
81 data["add_total_row"] = self.add_total_row
82 self.json = json.dumps(data)
83
84 def export_doc(self):
85 if frappe.flags.in_import:
86 return
87
88 if self.is_standard == 'Yes' and (frappe.local.conf.get('developer_mode') or 0) == 1:
89 export_to_files(record_list=[['Report', self.name]],
90 record_module=self.module, create_init=True)
91
92 self.create_report_py()
93
94 def create_report_py(self):
95 if self.report_type == "Script Report":
96 make_boilerplate("controller.py", self, {"name": self.name})
97 make_boilerplate("controller.js", self, {"name": self.name})
98
99 def get_data(self, filters=None, limit=None, user=None, as_dict=False):
100 columns = []
101 out = []
102
103 if self.report_type in ('Query Report', 'Script Report'):
104 # query and script reports
105 data = frappe.desk.query_report.run(self.name, filters=filters, user=user)
106 for d in data.get('columns'):
107 if isinstance(d, dict):
108 col = frappe._dict(d)
109 if not col.fieldname:
110 col.fieldname = col.label
111 columns.append(col)
112 else:
113 fieldtype, options = "Data", None
114 parts = d.split(':')
115 if len(parts) > 1:
116 if parts[1]:
117 fieldtype, options = parts[1], None
118 if fieldtype and '/' in fieldtype:
119 fieldtype, options = fieldtype.split('/')
120
121 columns.append(frappe._dict(label=parts[0], fieldtype=fieldtype, fieldname=parts[0]))
122
123 out += data.get('result')
124 else:
125 # standard report
126 params = json.loads(self.json)
127
128 if params.get('columns'):
129 columns = params.get('columns')
130 else:
131 columns = [['name', self.ref_doctype]]
132 for df in frappe.get_meta(self.ref_doctype).fields:
133 if df.in_list_view:
134 columns.append([df.fieldname, self.ref_doctype])
135
136 _filters = params.get('filters') or []
137
138 if filters:
139 for key, value in iteritems(filters):
140 condition, _value = '=', value
141 if isinstance(value, (list, tuple)):
142 condition, _value = value
143 _filters.append([key, condition, _value])
144
145 def _format(parts):
146 # sort by is saved as DocType.fieldname, covert it to sql
147 return '`tab{0}`.`{1}`'.format(*parts)
148
149 if params.get('sort_by'):
150 order_by = _format(params.get('sort_by').split('.')) + ' ' + params.get('sort_order')
151 else:
152 order_by = _format(self.ref_doctype, 'modified') + ' desc'
153
154 if params.get('sort_by_next'):
155 order_by += ', ' + _format(params.get('sort_by_next').split('.')) + ' ' + params.get('sort_order_next')
156
157 result = frappe.get_list(self.ref_doctype,
158 fields = [_format([c[1], c[0]]) for c in columns],
159 filters=_filters,
160 order_by = order_by,
161 as_list=True,
162 limit=limit,
163 user=user)
164
165 _columns = []
166 for column in columns:
167 meta = frappe.get_meta(column[1])
168 field = [meta.get_field(column[0]) or frappe._dict(label=meta.get_label(column[0]), fieldname=column[0])]
169 _columns.extend(field)
170 columns = _columns
171
172 out = out + [list(d) for d in result]
173
174 if as_dict:
175 data = []
176 for row in out:
177 if isinstance(row, (list, tuple)):
178 _row = frappe._dict()
179 for i, val in enumerate(row):
180 _row[columns[i].get('fieldname')] = val
181 elif isinstance(row, dict):
182 # no need to convert from dict to dict
183 _row = frappe._dict(row)
184 data.append(_row)
185 else:
186 data = out
187 return columns, data
188
189
190 @Document.whitelist
191 def toggle_disable(self, disable):
192 self.db_set("disabled", cint(disable))
193
[end of frappe/core/doctype/report/report.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/frappe/core/doctype/report/report.py b/frappe/core/doctype/report/report.py
--- a/frappe/core/doctype/report/report.py
+++ b/frappe/core/doctype/report/report.py
@@ -127,6 +127,8 @@
if params.get('columns'):
columns = params.get('columns')
+ elif params.get('fields'):
+ columns = params.get('fields')
else:
columns = [['name', self.ref_doctype]]
for df in frappe.get_meta(self.ref_doctype).fields:
@@ -149,7 +151,7 @@
if params.get('sort_by'):
order_by = _format(params.get('sort_by').split('.')) + ' ' + params.get('sort_order')
else:
- order_by = _format(self.ref_doctype, 'modified') + ' desc'
+ order_by = _format([self.ref_doctype, 'modified']) + ' desc'
if params.get('sort_by_next'):
order_by += ', ' + _format(params.get('sort_by_next').split('.')) + ' ' + params.get('sort_order_next')
|
{"golden_diff": "diff --git a/frappe/core/doctype/report/report.py b/frappe/core/doctype/report/report.py\n--- a/frappe/core/doctype/report/report.py\n+++ b/frappe/core/doctype/report/report.py\n@@ -127,6 +127,8 @@\n \n \t\t\tif params.get('columns'):\n \t\t\t\tcolumns = params.get('columns')\n+\t\t\telif params.get('fields'):\n+\t\t\t\tcolumns = params.get('fields')\n \t\t\telse:\n \t\t\t\tcolumns = [['name', self.ref_doctype]]\n \t\t\t\tfor df in frappe.get_meta(self.ref_doctype).fields:\n@@ -149,7 +151,7 @@\n \t\t\tif params.get('sort_by'):\n \t\t\t\torder_by = _format(params.get('sort_by').split('.')) + ' ' + params.get('sort_order')\n \t\t\telse:\n-\t\t\t\torder_by = _format(self.ref_doctype, 'modified') + ' desc'\n+\t\t\t\torder_by = _format([self.ref_doctype, 'modified']) + ' desc'\n \n \t\t\tif params.get('sort_by_next'):\n \t\t\t\torder_by += ', ' + _format(params.get('sort_by_next').split('.')) + ' ' + params.get('sort_order_next')\n", "issue": "Auto Email Report CSV Format Broken\nWhen downloading (or sending) an auto email report with format \"CSV\" the following error occurs:\r\n\r\n```\r\nraceback (most recent call last):\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/app.py\", line 66, in application\r\n response = frappe.api.handle()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/api.py\", line 56, in handle\r\n return frappe.handler.handle()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/handler.py\", line 21, in handle\r\n data = execute_cmd(cmd)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/handler.py\", line 56, in execute_cmd\r\n return frappe.call(method, **frappe.form_dict)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/__init__.py\", line 1007, in call\r\n return fn(*args, **newargs)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/email/doctype/auto_email_report/auto_email_report.py\", line 153, in download\r\n data = auto_email_report.get_report_content()\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/email/doctype/auto_email_report/auto_email_report.py\", line 61, in get_report_content\r\n filters = self.filters, as_dict=True)\r\n File \"/home/frappe/frappe-bench/apps/frappe/frappe/core/doctype/report/report.py\", line 152, in get_data\r\n order_by = _format(self.ref_doctype, 'modified') + ' desc'\r\nTypeError: _format() takes exactly 1 argument (2 given)\r\n```\r\n\r\nWe're using the latest frappe 11.0.3-beta.21.\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. 
See license.txt\n\nfrom __future__ import unicode_literals\nimport frappe\nimport json\nfrom frappe import _\nimport frappe.desk.query_report\nfrom frappe.utils import cint\nfrom frappe.model.document import Document\nfrom frappe.modules.export_file import export_to_files\nfrom frappe.modules import make_boilerplate\nfrom frappe.core.doctype.page.page import delete_custom_role\nfrom frappe.core.doctype.custom_role.custom_role import get_custom_allowed_roles\nfrom six import iteritems\n\n\nclass Report(Document):\n\tdef validate(self):\n\t\t\"\"\"only administrator can save standard report\"\"\"\n\t\tif not self.module:\n\t\t\tself.module = frappe.db.get_value(\"DocType\", self.ref_doctype, \"module\")\n\n\t\tif not self.is_standard:\n\t\t\tself.is_standard = \"No\"\n\t\t\tif frappe.session.user==\"Administrator\" and getattr(frappe.local.conf, 'developer_mode',0)==1:\n\t\t\t\tself.is_standard = \"Yes\"\n\n\t\tif self.is_standard == \"No\" and frappe.db.get_value(\"Report\", self.name, \"is_standard\") == \"Yes\":\n\t\t\tfrappe.throw(_(\"Cannot edit a standard report. Please duplicate and create a new report\"))\n\n\t\tif self.is_standard == \"Yes\" and frappe.session.user!=\"Administrator\":\n\t\t\tfrappe.throw(_(\"Only Administrator can save a standard report. Please rename and save.\"))\n\n\t\tif self.report_type in (\"Query Report\", \"Script Report\") \\\n\t\t\tand frappe.session.user!=\"Administrator\":\n\t\t\tfrappe.throw(_(\"Only Administrator allowed to create Query / Script Reports\"))\n\n\t\tif self.report_type == \"Report Builder\":\n\t\t\tself.update_report_json()\n\n\tdef before_insert(self):\n\t\tself.set_doctype_roles()\n\n\tdef on_update(self):\n\t\tself.export_doc()\n\n\tdef on_trash(self):\n\t\tdelete_custom_role('report', self.name)\n\n\tdef set_doctype_roles(self):\n\t\tif not self.get('roles') and self.is_standard == 'No':\n\t\t\tmeta = frappe.get_meta(self.ref_doctype)\n\t\t\troles = [{'role': d.role} for d in meta.permissions if d.permlevel==0]\n\t\t\tself.set('roles', roles)\n\n\tdef is_permitted(self):\n\t\t\"\"\"Returns true if Has Role is not set or the user is allowed.\"\"\"\n\t\tfrom frappe.utils import has_common\n\n\t\tallowed = [d.role for d in frappe.get_all(\"Has Role\", fields=[\"role\"],\n\t\t\tfilters={\"parent\": self.name})]\n\n\t\tcustom_roles = get_custom_allowed_roles('report', self.name)\n\t\tallowed.extend(custom_roles)\n\n\t\tif not allowed:\n\t\t\treturn True\n\n\t\troles = frappe.get_roles()\n\n\t\tif has_common(roles, allowed):\n\t\t\treturn True\n\n\tdef update_report_json(self):\n\t\tif not self.json:\n\t\t\tself.json = '{}'\n\n\t\tif self.json:\n\t\t\tdata = json.loads(self.json)\n\t\t\tdata[\"add_total_row\"] = self.add_total_row\n\t\t\tself.json = json.dumps(data)\n\n\tdef export_doc(self):\n\t\tif frappe.flags.in_import:\n\t\t\treturn\n\n\t\tif self.is_standard == 'Yes' and (frappe.local.conf.get('developer_mode') or 0) == 1:\n\t\t\texport_to_files(record_list=[['Report', self.name]],\n\t\t\t\trecord_module=self.module, create_init=True)\n\n\t\t\tself.create_report_py()\n\n\tdef create_report_py(self):\n\t\tif self.report_type == \"Script Report\":\n\t\t\tmake_boilerplate(\"controller.py\", self, {\"name\": self.name})\n\t\t\tmake_boilerplate(\"controller.js\", self, {\"name\": self.name})\n\n\tdef get_data(self, filters=None, limit=None, user=None, as_dict=False):\n\t\tcolumns = []\n\t\tout = []\n\n\t\tif self.report_type in ('Query Report', 'Script Report'):\n\t\t\t# query and script reports\n\t\t\tdata = 
frappe.desk.query_report.run(self.name, filters=filters, user=user)\n\t\t\tfor d in data.get('columns'):\n\t\t\t\tif isinstance(d, dict):\n\t\t\t\t\tcol = frappe._dict(d)\n\t\t\t\t\tif not col.fieldname:\n\t\t\t\t\t\tcol.fieldname = col.label\n\t\t\t\t\tcolumns.append(col)\n\t\t\t\telse:\n\t\t\t\t\tfieldtype, options = \"Data\", None\n\t\t\t\t\tparts = d.split(':')\n\t\t\t\t\tif len(parts) > 1:\n\t\t\t\t\t\tif parts[1]:\n\t\t\t\t\t\t\tfieldtype, options = parts[1], None\n\t\t\t\t\t\t\tif fieldtype and '/' in fieldtype:\n\t\t\t\t\t\t\t\tfieldtype, options = fieldtype.split('/')\n\n\t\t\t\t\tcolumns.append(frappe._dict(label=parts[0], fieldtype=fieldtype, fieldname=parts[0]))\n\n\t\t\tout += data.get('result')\n\t\telse:\n\t\t\t# standard report\n\t\t\tparams = json.loads(self.json)\n\n\t\t\tif params.get('columns'):\n\t\t\t\tcolumns = params.get('columns')\n\t\t\telse:\n\t\t\t\tcolumns = [['name', self.ref_doctype]]\n\t\t\t\tfor df in frappe.get_meta(self.ref_doctype).fields:\n\t\t\t\t\tif df.in_list_view:\n\t\t\t\t\t\tcolumns.append([df.fieldname, self.ref_doctype])\n\n\t\t\t_filters = params.get('filters') or []\n\n\t\t\tif filters:\n\t\t\t\tfor key, value in iteritems(filters):\n\t\t\t\t\tcondition, _value = '=', value\n\t\t\t\t\tif isinstance(value, (list, tuple)):\n\t\t\t\t\t\tcondition, _value = value\n\t\t\t\t\t_filters.append([key, condition, _value])\n\n\t\t\tdef _format(parts):\n\t\t\t\t# sort by is saved as DocType.fieldname, covert it to sql\n\t\t\t\treturn '`tab{0}`.`{1}`'.format(*parts)\n\n\t\t\tif params.get('sort_by'):\n\t\t\t\torder_by = _format(params.get('sort_by').split('.')) + ' ' + params.get('sort_order')\n\t\t\telse:\n\t\t\t\torder_by = _format(self.ref_doctype, 'modified') + ' desc'\n\n\t\t\tif params.get('sort_by_next'):\n\t\t\t\torder_by += ', ' + _format(params.get('sort_by_next').split('.')) + ' ' + params.get('sort_order_next')\n\n\t\t\tresult = frappe.get_list(self.ref_doctype,\n\t\t\t\tfields = [_format([c[1], c[0]]) for c in columns],\n\t\t\t\tfilters=_filters,\n\t\t\t\torder_by = order_by,\n\t\t\t\tas_list=True,\n\t\t\t\tlimit=limit,\n\t\t\t\tuser=user)\n\n\t\t\t_columns = []\n\t\t\tfor column in columns:\n\t\t\t\tmeta = frappe.get_meta(column[1])\n\t\t\t\tfield = [meta.get_field(column[0]) or frappe._dict(label=meta.get_label(column[0]), fieldname=column[0])]\n\t\t\t\t_columns.extend(field)\n\t\t\tcolumns = _columns\n\n\t\t\tout = out + [list(d) for d in result]\n\n\t\tif as_dict:\n\t\t\tdata = []\n\t\t\tfor row in out:\n\t\t\t\tif isinstance(row, (list, tuple)):\n\t\t\t\t\t_row = frappe._dict()\n\t\t\t\t\tfor i, val in enumerate(row):\n\t\t\t\t\t\t_row[columns[i].get('fieldname')] = val\n\t\t\t\telif isinstance(row, dict):\n\t\t\t\t\t# no need to convert from dict to dict\n\t\t\t\t\t_row = frappe._dict(row)\n\t\t\t\tdata.append(_row)\n\t\telse:\n\t\t\tdata = out\n\t\treturn columns, data\n\n\n\[email protected]\n\tdef toggle_disable(self, disable):\n\t\tself.db_set(\"disabled\", cint(disable))\n", "path": "frappe/core/doctype/report/report.py"}]}
| 2,993 | 252 |
gh_patches_debug_2845
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-python-4664
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UserWarning in decoding_rsa example
Running this example, I get the following warning
decoding_rsa.py:94: RuntimeWarning: More events than colors available. You should pass a list of unique colors.
</issue>
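One way to avoid this kind of warning in user code is to give `mne.viz.plot_events` an explicit color per event id instead of relying on the default color cycle, which is shorter than the 24 event types used in the example. The sketch below uses synthetic events and an arbitrary colormap; it is illustrative only and not necessarily the fix the example ends up adopting.
```python
# Illustrative only: synthetic events, not the example's real data.
import numpy as np
import matplotlib.pyplot as plt
import mne

# 24 distinct event ids, more than the default color cycle provides
events = np.c_[np.arange(0, 2400, 100), np.zeros(24, int), np.arange(1, 25)]
cmap = plt.get_cmap('tab20b')
color = {event_id: cmap(i / 24.) for i, event_id in enumerate(range(1, 25))}

mne.viz.plot_events(events, sfreq=1000., color=color)
```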
<code>
[start of examples/decoding/decoding_rsa.py]
1 """
2
3 .. _rsa_noplot:
4
5 ====================================
6 Representational Similarity Analysis
7 ====================================
8
9 Representational Similarity Analysis is used to perform summary statistics
10 on supervised classifications where the number of classes is relatively high.
11 It consists in characterizing the structure of the confusion matrix to infer
12 the similarity between brain responses and serves as a proxy for characterizing
13 the space of mental representations [1]_ [2]_ [3]_.
14
15 In this example, we perform RSA on responses to 24 object images (among
16 a list of 92 images). Subjects were presented with images of human, animal
17 and inanimate objects [4]_. Here we use the 24 unique images of faces
18 and body parts.
19
20 .. note:: this example will download a very large (~6GB) file, so we will not
21 build the images below.
22
23 References
24 ----------
25
26 .. [1] Shepard, R. "Multidimensional scaling, tree-fitting, and clustering."
27 Science 210.4468 (1980): 390-398.
28 .. [2] Laakso, A. & Cottrell, G.. "Content and cluster analysis:
29 assessing representational similarity in neural systems." Philosophical
30 psychology 13.1 (2000): 47-76.
31 .. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. "Representational
32 similarity analysis-connecting the branches of systems neuroscience."
33 Frontiers in systems neuroscience 2 (2008): 4.
34 .. [4] Cichy, R. M., Pantazis, D., & Oliva, A. "Resolving human object
35 recognition in space and time." Nature neuroscience (2014): 17(3),
36 455-462.
37 """
38
39 # Authors: Jean-Remi King <[email protected]>
40 # Jaakko Leppakangas <[email protected]>
41 # Alexandre Gramfort <[email protected]>
42 #
43 # License: BSD (3-clause)
44
45 import os.path as op
46 import numpy as np
47 from pandas import read_csv
48 import matplotlib.pyplot as plt
49
50 from sklearn.model_selection import StratifiedKFold
51 from sklearn.pipeline import make_pipeline
52 from sklearn.preprocessing import StandardScaler
53 from sklearn.linear_model import LogisticRegression
54 from sklearn.metrics import roc_auc_score
55 from sklearn.manifold import MDS
56
57 import mne
58 from mne.io import read_raw_fif, concatenate_raws
59 from mne.datasets import visual_92_categories
60
61 print(__doc__)
62
63 data_path = visual_92_categories.data_path()
64
65 # Define stimulus - trigger mapping
66 fname = op.join(data_path, 'visual_stimuli.csv')
67 conds = read_csv(fname)
68 print(conds.head(5))
69
70 ##############################################################################
71 # Let's restrict the number of conditions to speed up computation
72 max_trigger = 24
73 conds = conds[:max_trigger] # take only the first 24 rows
74
75 ##############################################################################
76 # Define stimulus - trigger mapping
77 conditions = []
78 for c in conds.values:
79 cond_tags = list(c[:2])
80 cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
81 for k, i in enumerate(c[2:], 2)]
82 conditions.append('/'.join(map(str, cond_tags)))
83 print(conditions[:10])
84
85 ##############################################################################
86 # Let's make the event_id dictionary
87 event_id = dict(zip(conditions, conds.trigger + 1))
88 event_id['0/human bodypart/human/not-face/animal/natural']
89
90 ##############################################################################
91 # Read MEG data
92 n_runs = 4 # 4 for full data (use less to speed up computations)
93 fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
94 raws = [read_raw_fif(fname % block) for block in range(n_runs)]
95 raw = concatenate_raws(raws)
96
97 events = mne.find_events(raw, min_duration=.002)
98
99 events = events[events[:, 2] <= max_trigger]
100 mne.viz.plot_events(events, sfreq=raw.info['sfreq'])
101
102 ##############################################################################
103 # Epoch data
104 picks = mne.pick_types(raw.info, meg=True)
105 epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
106 picks=picks, tmin=-.1, tmax=.500, preload=True)
107
108 ##############################################################################
109 # Let's plot some conditions
110 epochs['face'].average().plot()
111 epochs['not-face'].average().plot()
112
113 ##############################################################################
114 # Representational Similarity Analysis (RSA) is a neuroimaging-specific
115 # appellation to refer to statistics applied to the confusion matrix
116 # also referred to as the representational dissimilarity matrices (RDM).
117 #
118 # Compared to the approach from Cichy et al. we'll use a multiclass
119 # classifier (Multinomial Logistic Regression) while the paper uses
120 # all pairwise binary classification task to make the RDM.
121 # Also we use here the ROC-AUC as performance metric while the
122 # paper uses accuracy. Finally here for the sake of time we use
123 # RSA on a window of data while Cichy et al. did it for all time
124 # instants separately.
125
126 # Classify using the average signal in the window 50ms to 300ms
127 # to focus the classifier on the time interval with best SNR.
128 clf = make_pipeline(StandardScaler(),
129 LogisticRegression(C=1, solver='lbfgs'))
130 X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
131 y = epochs.events[:, 2]
132
133 classes = set(y)
134 cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
135
136 # Compute confusion matrix for each cross-validation fold
137 y_pred = np.zeros((len(y), len(classes)))
138 for train, test in cv.split(X, y):
139 # Fit
140 clf.fit(X[train], y[train])
141 # Probabilistic prediction (necessary for ROC-AUC scoring metric)
142 y_pred[test] = clf.predict_proba(X[test])
143
144 ##############################################################################
145 # Compute confusion matrix using ROC-AUC
146 confusion = np.zeros((len(classes), len(classes)))
147 for ii, train_class in enumerate(classes):
148 for jj in range(ii, len(classes)):
149 confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
150 confusion[jj, ii] = confusion[ii, jj]
151
152 ##############################################################################
153 # Plot
154 labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
155 fig, ax = plt.subplots(1)
156 im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
157 ax.set_yticks(range(len(classes)))
158 ax.set_yticklabels(labels)
159 ax.set_xticks(range(len(classes)))
160 ax.set_xticklabels(labels, rotation=40, ha='left')
161 ax.axhline(11.5, color='k')
162 ax.axvline(11.5, color='k')
163 plt.colorbar(im)
164 plt.tight_layout()
165 plt.show()
166
167 ##############################################################################
168 # Confusion matrix related to mental representations have been historically
169 # summarized with dimensionality reduction using multi-dimensional scaling [1].
170 # See how the face samples cluster together.
171 fig, ax = plt.subplots(1)
172 mds = MDS(2, random_state=0, dissimilarity='precomputed')
173 chance = 0.5
174 summary = mds.fit_transform(chance - confusion)
175 cmap = plt.get_cmap('rainbow')
176 colors = ['r', 'b']
177 names = list(conds['condition'].values)
178 for color, name in zip(colors, set(names)):
179 sel = np.where([this_name == name for this_name in names])[0]
180 size = 500 if name == 'human face' else 100
181 ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
182 facecolors=color, label=name, edgecolors='k')
183 ax.axis('off')
184 ax.legend(loc='lower right', scatterpoints=1, ncol=2)
185 plt.tight_layout()
186 plt.show()
187
[end of examples/decoding/decoding_rsa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/decoding/decoding_rsa.py b/examples/decoding/decoding_rsa.py
--- a/examples/decoding/decoding_rsa.py
+++ b/examples/decoding/decoding_rsa.py
@@ -97,7 +97,6 @@
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
-mne.viz.plot_events(events, sfreq=raw.info['sfreq'])
##############################################################################
# Epoch data
|
{"golden_diff": "diff --git a/examples/decoding/decoding_rsa.py b/examples/decoding/decoding_rsa.py\n--- a/examples/decoding/decoding_rsa.py\n+++ b/examples/decoding/decoding_rsa.py\n@@ -97,7 +97,6 @@\n events = mne.find_events(raw, min_duration=.002)\n \n events = events[events[:, 2] <= max_trigger]\n-mne.viz.plot_events(events, sfreq=raw.info['sfreq'])\n \n ##############################################################################\n # Epoch data\n", "issue": "UserWarning in decoding_rsa example\nRunning this example, I get the following warning \r\n \r\n decoding_rsa.py:94: RuntimeWarning: More events than colors available. You should pass a list of unique colors.\n", "before_files": [{"content": "\"\"\"\n\n.. _rsa_noplot:\n\n====================================\nRepresentational Similarity Analysis\n====================================\n\nRepresentational Similarity Analysis is used to perform summary statistics\non supervised classifications where the number of classes is relatively high.\nIt consists in characterizing the structure of the confusion matrix to infer\nthe similarity between brain responses and serves as a proxy for characterizing\nthe space of mental representations [1]_ [2]_ [3]_.\n\nIn this example, we perform RSA on responses to 24 object images (among\na list of 92 images). Subjects were presented with images of human, animal\nand inanimate objects [4]_. Here we use the 24 unique images of faces\nand body parts.\n\n.. note:: this example will download a very large (~6GB) file, so we will not\n build the images below.\n\nReferences\n----------\n\n.. [1] Shepard, R. \"Multidimensional scaling, tree-fitting, and clustering.\"\n Science 210.4468 (1980): 390-398.\n.. [2] Laakso, A. & Cottrell, G.. \"Content and cluster analysis:\n assessing representational similarity in neural systems.\" Philosophical\n psychology 13.1 (2000): 47-76.\n.. [3] Kriegeskorte, N., Marieke, M., & Bandettini. P. \"Representational\n similarity analysis-connecting the branches of systems neuroscience.\"\n Frontiers in systems neuroscience 2 (2008): 4.\n.. [4] Cichy, R. M., Pantazis, D., & Oliva, A. 
\"Resolving human object\n recognition in space and time.\" Nature neuroscience (2014): 17(3),\n 455-462.\n\"\"\"\n\n# Authors: Jean-Remi King <[email protected]>\n# Jaakko Leppakangas <[email protected]>\n# Alexandre Gramfort <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\nfrom pandas import read_csv\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.manifold import MDS\n\nimport mne\nfrom mne.io import read_raw_fif, concatenate_raws\nfrom mne.datasets import visual_92_categories\n\nprint(__doc__)\n\ndata_path = visual_92_categories.data_path()\n\n# Define stimulus - trigger mapping\nfname = op.join(data_path, 'visual_stimuli.csv')\nconds = read_csv(fname)\nprint(conds.head(5))\n\n##############################################################################\n# Let's restrict the number of conditions to speed up computation\nmax_trigger = 24\nconds = conds[:max_trigger] # take only the first 24 rows\n\n##############################################################################\n# Define stimulus - trigger mapping\nconditions = []\nfor c in conds.values:\n cond_tags = list(c[:2])\n cond_tags += [('not-' if i == 0 else '') + conds.columns[k]\n for k, i in enumerate(c[2:], 2)]\n conditions.append('/'.join(map(str, cond_tags)))\nprint(conditions[:10])\n\n##############################################################################\n# Let's make the event_id dictionary\nevent_id = dict(zip(conditions, conds.trigger + 1))\nevent_id['0/human bodypart/human/not-face/animal/natural']\n\n##############################################################################\n# Read MEG data\nn_runs = 4 # 4 for full data (use less to speed up computations)\nfname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')\nraws = [read_raw_fif(fname % block) for block in range(n_runs)]\nraw = concatenate_raws(raws)\n\nevents = mne.find_events(raw, min_duration=.002)\n\nevents = events[events[:, 2] <= max_trigger]\nmne.viz.plot_events(events, sfreq=raw.info['sfreq'])\n\n##############################################################################\n# Epoch data\npicks = mne.pick_types(raw.info, meg=True)\nepochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,\n picks=picks, tmin=-.1, tmax=.500, preload=True)\n\n##############################################################################\n# Let's plot some conditions\nepochs['face'].average().plot()\nepochs['not-face'].average().plot()\n\n##############################################################################\n# Representational Similarity Analysis (RSA) is a neuroimaging-specific\n# appelation to refer to statistics applied to the confusion matrix\n# also referred to as the representational dissimilarity matrices (RDM).\n#\n# Compared to the approach from Cichy et al. we'll use a multiclass\n# classifier (Multinomial Logistic Regression) while the paper uses\n# all pairwise binary classification task to make the RDM.\n# Also we use here the ROC-AUC as performance metric while the\n# paper uses accuracy. Finally here for the sake of time we use\n# RSA on a window of data while Cichy et al. 
did it for all time\n# instants separately.\n\n# Classify using the average signal in the window 50ms to 300ms\n# to focus the classifier on the time interval with best SNR.\nclf = make_pipeline(StandardScaler(),\n LogisticRegression(C=1, solver='lbfgs'))\nX = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)\ny = epochs.events[:, 2]\n\nclasses = set(y)\ncv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)\n\n# Compute confusion matrix for each cross-validation fold\ny_pred = np.zeros((len(y), len(classes)))\nfor train, test in cv.split(X, y):\n # Fit\n clf.fit(X[train], y[train])\n # Probabilistic prediction (necessary for ROC-AUC scoring metric)\n y_pred[test] = clf.predict_proba(X[test])\n\n##############################################################################\n# Compute confusion matrix using ROC-AUC\nconfusion = np.zeros((len(classes), len(classes)))\nfor ii, train_class in enumerate(classes):\n for jj in range(ii, len(classes)):\n confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])\n confusion[jj, ii] = confusion[ii, jj]\n\n##############################################################################\n# Plot\nlabels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6\nfig, ax = plt.subplots(1)\nim = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])\nax.set_yticks(range(len(classes)))\nax.set_yticklabels(labels)\nax.set_xticks(range(len(classes)))\nax.set_xticklabels(labels, rotation=40, ha='left')\nax.axhline(11.5, color='k')\nax.axvline(11.5, color='k')\nplt.colorbar(im)\nplt.tight_layout()\nplt.show()\n\n##############################################################################\n# Confusion matrix related to mental representations have been historically\n# summarized with dimensionality reduction using multi-dimensional scaling [1].\n# See how the face samples cluster together.\nfig, ax = plt.subplots(1)\nmds = MDS(2, random_state=0, dissimilarity='precomputed')\nchance = 0.5\nsummary = mds.fit_transform(chance - confusion)\ncmap = plt.get_cmap('rainbow')\ncolors = ['r', 'b']\nnames = list(conds['condition'].values)\nfor color, name in zip(colors, set(names)):\n sel = np.where([this_name == name for this_name in names])[0]\n size = 500 if name == 'human face' else 100\n ax.scatter(summary[sel, 0], summary[sel, 1], s=size,\n facecolors=color, label=name, edgecolors='k')\nax.axis('off')\nax.legend(loc='lower right', scatterpoints=1, ncol=2)\nplt.tight_layout()\nplt.show()\n", "path": "examples/decoding/decoding_rsa.py"}]}
| 2,832 | 113 |
gh_patches_debug_2979
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-6426
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid HTML for select element
This html is generated by the Python form code.
template:
https://github.com/pypa/warehouse/blob/master/warehouse/templates/manage/roles.html
field:
`{{ form.role_name }}`
ERROR: The first child “option” element of a “select” element with a “required” attribute, and without a “multiple” attribute, and without a “size” attribute whose value is greater than “1”, must have either an empty “value” attribute, or must have no text content. Consider either adding a placeholder option label, or adding a “size” attribute with a value equal to the number of “option” elements. (433)
Reference:
https://maxdesign.com.au/articles/select-required/
</issue>
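
In WTForms terms, satisfying that rule means the first choice handed to the `SelectField` carries an empty string value, so it renders as a placeholder `<option value="">` that the `DataRequired` validator still rejects on submit. A minimal, hypothetical sketch of that idea (illustrative only, not the repository's actual patch):

```python
import wtforms


class RoleForm(wtforms.Form):
    # The leading ("", "Select role") choice renders as
    # <option value="">Select role</option>, which keeps a required,
    # non-multiple <select> valid HTML; DataRequired still fails on the
    # empty placeholder value, so submit behaviour is unchanged.
    role_name = wtforms.SelectField(
        "Select role",
        choices=[("", "Select role"), ("Maintainer", "Maintainer"), ("Owner", "Owner")],
        validators=[wtforms.validators.DataRequired(message="Select role")],
    )
```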
<code>
[start of warehouse/manage/forms.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import json
14
15 import wtforms
16
17 import warehouse.utils.otp as otp
18 import warehouse.utils.webauthn as webauthn
19
20 from warehouse import forms
21 from warehouse.accounts.forms import (
22 NewEmailMixin,
23 NewPasswordMixin,
24 PasswordMixin,
25 TOTPValueMixin,
26 WebAuthnCredentialMixin,
27 )
28
29
30 class RoleNameMixin:
31
32 role_name = wtforms.SelectField(
33 "Select role",
34 choices=[("Maintainer", "Maintainer"), ("Owner", "Owner")],
35 validators=[wtforms.validators.DataRequired(message="Select role")],
36 )
37
38
39 class UsernameMixin:
40
41 username = wtforms.StringField(
42 validators=[wtforms.validators.DataRequired(message="Specify username")]
43 )
44
45 def validate_username(self, field):
46 userid = self.user_service.find_userid(field.data)
47
48 if userid is None:
49 raise wtforms.validators.ValidationError(
50 "No user found with that username. Try again."
51 )
52
53
54 class CreateRoleForm(RoleNameMixin, UsernameMixin, forms.Form):
55 def __init__(self, *args, user_service, **kwargs):
56 super().__init__(*args, **kwargs)
57 self.user_service = user_service
58
59
60 class ChangeRoleForm(RoleNameMixin, forms.Form):
61 pass
62
63
64 class SaveAccountForm(forms.Form):
65
66 __params__ = ["name"]
67
68 name = wtforms.StringField()
69
70
71 class AddEmailForm(NewEmailMixin, forms.Form):
72
73 __params__ = ["email"]
74
75 def __init__(self, *args, user_service, user_id, **kwargs):
76 super().__init__(*args, **kwargs)
77 self.user_service = user_service
78 self.user_id = user_id
79
80
81 class ChangePasswordForm(PasswordMixin, NewPasswordMixin, forms.Form):
82
83 __params__ = ["password", "new_password", "password_confirm"]
84
85 def __init__(self, *args, user_service, **kwargs):
86 super().__init__(*args, **kwargs)
87 self.user_service = user_service
88
89
90 class DeleteTOTPForm(UsernameMixin, forms.Form):
91
92 __params__ = ["confirm_username"]
93
94 def __init__(self, *args, user_service, **kwargs):
95 super().__init__(*args, **kwargs)
96 self.user_service = user_service
97
98
99 class ProvisionTOTPForm(TOTPValueMixin, forms.Form):
100
101 __params__ = ["totp_value"]
102
103 def __init__(self, *args, totp_secret, **kwargs):
104 super().__init__(*args, **kwargs)
105 self.totp_secret = totp_secret
106
107 def validate_totp_value(self, field):
108 totp_value = field.data.encode("utf8")
109 if not otp.verify_totp(self.totp_secret, totp_value):
110 raise wtforms.validators.ValidationError("Invalid TOTP code. Try again?")
111
112
113 class DeleteWebAuthnForm(forms.Form):
114 __params__ = ["confirm_device_name"]
115
116 label = wtforms.StringField(
117 validators=[
118 wtforms.validators.DataRequired(message="Specify a device name"),
119 wtforms.validators.Length(
120 max=64, message=("Label must be 64 characters or less")
121 ),
122 ]
123 )
124
125 def __init__(self, *args, user_service, user_id, **kwargs):
126 super().__init__(*args, **kwargs)
127 self.user_service = user_service
128 self.user_id = user_id
129
130 def validate_label(self, field):
131 label = field.data
132
133 webauthn = self.user_service.get_webauthn_by_label(self.user_id, label)
134 if webauthn is None:
135 raise wtforms.validators.ValidationError("No WebAuthn key with given label")
136 self.webauthn = webauthn
137
138
139 class ProvisionWebAuthnForm(WebAuthnCredentialMixin, forms.Form):
140 __params__ = ["label", "credential"]
141
142 label = wtforms.StringField(
143 validators=[
144 wtforms.validators.DataRequired(message="Specify a label"),
145 wtforms.validators.Length(
146 max=64, message=("Label must be 64 characters or less")
147 ),
148 ]
149 )
150
151 def __init__(
152 self, *args, user_service, user_id, challenge, rp_id, origin, **kwargs
153 ):
154 super().__init__(*args, **kwargs)
155 self.user_service = user_service
156 self.user_id = user_id
157 self.challenge = challenge
158 self.rp_id = rp_id
159 self.origin = origin
160
161 def validate_credential(self, field):
162 try:
163 credential_dict = json.loads(field.data.encode("utf8"))
164 except json.JSONDecodeError:
165 raise wtforms.validators.ValidationError(
166 "Invalid WebAuthn credential: Bad payload"
167 )
168
169 try:
170 validated_credential = self.user_service.verify_webauthn_credential(
171 credential_dict,
172 challenge=self.challenge,
173 rp_id=self.rp_id,
174 origin=self.origin,
175 )
176 except webauthn.RegistrationRejectedException as e:
177 raise wtforms.validators.ValidationError(str(e))
178
179 self.validated_credential = validated_credential
180
181 def validate_label(self, field):
182 label = field.data
183
184 if self.user_service.get_webauthn_by_label(self.user_id, label) is not None:
185 raise wtforms.validators.ValidationError(f"Label '{label}' already in use")
186
187
188 class CreateMacaroonForm(forms.Form):
189 __params__ = ["description", "token_scope"]
190
191 def __init__(self, *args, user_id, macaroon_service, project_names, **kwargs):
192 super().__init__(*args, **kwargs)
193 self.user_id = user_id
194 self.macaroon_service = macaroon_service
195 self.project_names = project_names
196
197 description = wtforms.StringField(
198 validators=[
199 wtforms.validators.DataRequired(message="Specify a token name"),
200 wtforms.validators.Length(
201 max=100, message="Description must be 100 characters or less"
202 ),
203 ]
204 )
205
206 token_scope = wtforms.StringField(
207 validators=[wtforms.validators.DataRequired(message="Specify the token scope")]
208 )
209
210 def validate_description(self, field):
211 description = field.data
212
213 if (
214 self.macaroon_service.get_macaroon_by_description(self.user_id, description)
215 is not None
216 ):
217 raise wtforms.validators.ValidationError("API token name already in use")
218
219 def validate_token_scope(self, field):
220 scope = field.data
221
222 try:
223 _, scope_kind = scope.split(":", 1)
224 except ValueError:
225 raise wtforms.ValidationError(f"Unknown token scope: {scope}")
226
227 if scope_kind == "unspecified":
228 raise wtforms.ValidationError(f"Specify the token scope")
229
230 if scope_kind == "user":
231 self.validated_scope = scope_kind
232 return
233
234 try:
235 scope_kind, scope_value = scope_kind.split(":", 1)
236 except ValueError:
237 raise wtforms.ValidationError(f"Unknown token scope: {scope}")
238
239 if scope_kind != "project":
240 raise wtforms.ValidationError(f"Unknown token scope: {scope}")
241 if scope_value not in self.project_names:
242 raise wtforms.ValidationError(
243 f"Unknown or invalid project name: {scope_value}"
244 )
245
246 self.validated_scope = {"projects": [scope_value]}
247
248
249 class DeleteMacaroonForm(forms.Form):
250 __params__ = ["macaroon_id"]
251
252 macaroon_id = wtforms.StringField(
253 validators=[wtforms.validators.DataRequired(message="Identifier required")]
254 )
255
256 def __init__(self, *args, macaroon_service, **kwargs):
257 super().__init__(*args, **kwargs)
258 self.macaroon_service = macaroon_service
259
260 def validate_macaroon_id(self, field):
261 macaroon_id = field.data
262 if self.macaroon_service.find_macaroon(macaroon_id) is None:
263 raise wtforms.validators.ValidationError("No such macaroon")
264
[end of warehouse/manage/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/manage/forms.py b/warehouse/manage/forms.py
--- a/warehouse/manage/forms.py
+++ b/warehouse/manage/forms.py
@@ -31,7 +31,7 @@
role_name = wtforms.SelectField(
"Select role",
- choices=[("Maintainer", "Maintainer"), ("Owner", "Owner")],
+ choices=[("", "Select role"), ("Maintainer", "Maintainer"), ("Owner", "Owner")],
validators=[wtforms.validators.DataRequired(message="Select role")],
)
|
{"golden_diff": "diff --git a/warehouse/manage/forms.py b/warehouse/manage/forms.py\n--- a/warehouse/manage/forms.py\n+++ b/warehouse/manage/forms.py\n@@ -31,7 +31,7 @@\n \n role_name = wtforms.SelectField(\n \"Select role\",\n- choices=[(\"Maintainer\", \"Maintainer\"), (\"Owner\", \"Owner\")],\n+ choices=[(\"\", \"Select role\"), (\"Maintainer\", \"Maintainer\"), (\"Owner\", \"Owner\")],\n validators=[wtforms.validators.DataRequired(message=\"Select role\")],\n )\n", "issue": "Invalid HTML for select element\nThis html is generated by the Python form code.\r\n\r\ntemplate:\r\nhttps://github.com/pypa/warehouse/blob/master/warehouse/templates/manage/roles.html\r\n\r\nfield:\r\n`{{ form.role_name }}`\r\n\r\nERROR: The first child \u201coption\u201d element of a \u201cselect\u201d element with a \u201crequired\u201d attribute, and without a \u201cmultiple\u201d attribute, and without a \u201csize\u201d attribute whose value is greater than \u201c1\u201d, must have either an empty \u201cvalue\u201d attribute, or must have no text content. Consider either adding a placeholder option label, or adding a \u201csize\u201d attribute with a value equal to the number of \u201coption\u201d elements. (433)\r\n\r\nReference:\r\nhttps://maxdesign.com.au/articles/select-required/\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json\n\nimport wtforms\n\nimport warehouse.utils.otp as otp\nimport warehouse.utils.webauthn as webauthn\n\nfrom warehouse import forms\nfrom warehouse.accounts.forms import (\n NewEmailMixin,\n NewPasswordMixin,\n PasswordMixin,\n TOTPValueMixin,\n WebAuthnCredentialMixin,\n)\n\n\nclass RoleNameMixin:\n\n role_name = wtforms.SelectField(\n \"Select role\",\n choices=[(\"Maintainer\", \"Maintainer\"), (\"Owner\", \"Owner\")],\n validators=[wtforms.validators.DataRequired(message=\"Select role\")],\n )\n\n\nclass UsernameMixin:\n\n username = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify username\")]\n )\n\n def validate_username(self, field):\n userid = self.user_service.find_userid(field.data)\n\n if userid is None:\n raise wtforms.validators.ValidationError(\n \"No user found with that username. 
Try again.\"\n )\n\n\nclass CreateRoleForm(RoleNameMixin, UsernameMixin, forms.Form):\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ChangeRoleForm(RoleNameMixin, forms.Form):\n pass\n\n\nclass SaveAccountForm(forms.Form):\n\n __params__ = [\"name\"]\n\n name = wtforms.StringField()\n\n\nclass AddEmailForm(NewEmailMixin, forms.Form):\n\n __params__ = [\"email\"]\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n\nclass ChangePasswordForm(PasswordMixin, NewPasswordMixin, forms.Form):\n\n __params__ = [\"password\", \"new_password\", \"password_confirm\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass DeleteTOTPForm(UsernameMixin, forms.Form):\n\n __params__ = [\"confirm_username\"]\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass ProvisionTOTPForm(TOTPValueMixin, forms.Form):\n\n __params__ = [\"totp_value\"]\n\n def __init__(self, *args, totp_secret, **kwargs):\n super().__init__(*args, **kwargs)\n self.totp_secret = totp_secret\n\n def validate_totp_value(self, field):\n totp_value = field.data.encode(\"utf8\")\n if not otp.verify_totp(self.totp_secret, totp_value):\n raise wtforms.validators.ValidationError(\"Invalid TOTP code. Try again?\")\n\n\nclass DeleteWebAuthnForm(forms.Form):\n __params__ = [\"confirm_device_name\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a device name\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(self, *args, user_service, user_id, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n\n def validate_label(self, field):\n label = field.data\n\n webauthn = self.user_service.get_webauthn_by_label(self.user_id, label)\n if webauthn is None:\n raise wtforms.validators.ValidationError(\"No WebAuthn key with given label\")\n self.webauthn = webauthn\n\n\nclass ProvisionWebAuthnForm(WebAuthnCredentialMixin, forms.Form):\n __params__ = [\"label\", \"credential\"]\n\n label = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a label\"),\n wtforms.validators.Length(\n max=64, message=(\"Label must be 64 characters or less\")\n ),\n ]\n )\n\n def __init__(\n self, *args, user_service, user_id, challenge, rp_id, origin, **kwargs\n ):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n self.user_id = user_id\n self.challenge = challenge\n self.rp_id = rp_id\n self.origin = origin\n\n def validate_credential(self, field):\n try:\n credential_dict = json.loads(field.data.encode(\"utf8\"))\n except json.JSONDecodeError:\n raise wtforms.validators.ValidationError(\n \"Invalid WebAuthn credential: Bad payload\"\n )\n\n try:\n validated_credential = self.user_service.verify_webauthn_credential(\n credential_dict,\n challenge=self.challenge,\n rp_id=self.rp_id,\n origin=self.origin,\n )\n except webauthn.RegistrationRejectedException as e:\n raise wtforms.validators.ValidationError(str(e))\n\n self.validated_credential = validated_credential\n\n def validate_label(self, field):\n label = field.data\n\n if self.user_service.get_webauthn_by_label(self.user_id, label) 
is not None:\n raise wtforms.validators.ValidationError(f\"Label '{label}' already in use\")\n\n\nclass CreateMacaroonForm(forms.Form):\n __params__ = [\"description\", \"token_scope\"]\n\n def __init__(self, *args, user_id, macaroon_service, project_names, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_id = user_id\n self.macaroon_service = macaroon_service\n self.project_names = project_names\n\n description = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(message=\"Specify a token name\"),\n wtforms.validators.Length(\n max=100, message=\"Description must be 100 characters or less\"\n ),\n ]\n )\n\n token_scope = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Specify the token scope\")]\n )\n\n def validate_description(self, field):\n description = field.data\n\n if (\n self.macaroon_service.get_macaroon_by_description(self.user_id, description)\n is not None\n ):\n raise wtforms.validators.ValidationError(\"API token name already in use\")\n\n def validate_token_scope(self, field):\n scope = field.data\n\n try:\n _, scope_kind = scope.split(\":\", 1)\n except ValueError:\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n\n if scope_kind == \"unspecified\":\n raise wtforms.ValidationError(f\"Specify the token scope\")\n\n if scope_kind == \"user\":\n self.validated_scope = scope_kind\n return\n\n try:\n scope_kind, scope_value = scope_kind.split(\":\", 1)\n except ValueError:\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n\n if scope_kind != \"project\":\n raise wtforms.ValidationError(f\"Unknown token scope: {scope}\")\n if scope_value not in self.project_names:\n raise wtforms.ValidationError(\n f\"Unknown or invalid project name: {scope_value}\"\n )\n\n self.validated_scope = {\"projects\": [scope_value]}\n\n\nclass DeleteMacaroonForm(forms.Form):\n __params__ = [\"macaroon_id\"]\n\n macaroon_id = wtforms.StringField(\n validators=[wtforms.validators.DataRequired(message=\"Identifier required\")]\n )\n\n def __init__(self, *args, macaroon_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.macaroon_service = macaroon_service\n\n def validate_macaroon_id(self, field):\n macaroon_id = field.data\n if self.macaroon_service.find_macaroon(macaroon_id) is None:\n raise wtforms.validators.ValidationError(\"No such macaroon\")\n", "path": "warehouse/manage/forms.py"}]}
| 3,205 | 118 |
gh_patches_debug_4344
|
rasdani/github-patches
|
git_diff
|
google__turbinia-743
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update google-cloud-logging package
We need to use version >= 2 because of dftimewolf dependencies, and we need to use <=2.0.2 for the google cloud error package, but when trying to use that version I get the following:
```
$ turbiniactl -d server
Traceback (most recent call last):
File "/home/aaronpeterson/.local/share/virtualenvs/turbinia-aeSTftCa/bin/turbiniactl", line 11, in <module>
load_entry_point('turbinia', 'console_scripts', 'turbiniactl')()
File "/home/aaronpeterson/src/turbinia/turbinia/turbiniactl.py", line 428, in main
from turbinia.lib import google_cloud
File "/home/aaronpeterson/src/turbinia/turbinia/lib/google_cloud.py", line 33, in <module>
from google.cloud.logging import _helpers
ImportError: cannot import name '_helpers' from 'google.cloud.logging' (/home/aaronpeterson/.local/share/virtualenvs/turbinia-aeSTftCa/lib/python3.8/site-packages/google/cloud/logging/__init__.py)
```
</issue>
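
The private helpers in that traceback moved beneath the `logging_v2` namespace in google-cloud-logging 2.x, so one way to keep the patching code importable (a sketch assuming the 2.x package layout, not necessarily the exact fix) is:

```python
# Public client API keeps its import path in 2.x:
from google.cloud import logging as cloud_logging

# The private pieces used for the custom-field patching live under
# logging_v2 in 2.x (they were google.cloud.logging._helpers and
# ...handlers.transports.background_thread._Worker in 1.x):
from google.cloud.logging_v2 import _helpers
from google.cloud.logging_v2.handlers.transports.background_thread import _Worker
```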
<code>
[start of turbinia/lib/google_cloud.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2017 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Google Cloud resources library."""
16
17 from __future__ import unicode_literals
18
19 import datetime
20 from datetime import timedelta
21 from turbinia.config import DATETIME_FORMAT
22 import logging
23 import os
24 import json
25
26 from google.cloud import logging as cloud_logging
27 from google.cloud import error_reporting
28 from google.cloud import exceptions
29 from google.api_core import exceptions as google_api_exceptions
30 from googleapiclient.errors import HttpError
31
32 from turbinia import TurbiniaException
33 from google.cloud.logging import _helpers
34 from google.cloud.logging.handlers.transports.background_thread import _Worker
35
36 logger = logging.getLogger('turbinia')
37
38
39 def setup_stackdriver_handler(project_id, origin):
40 """Set up Google Cloud Stackdriver Logging
41
42 The Google Cloud Logging library will attach itself as a
43 handler to the default Python logging module.
44
45 Attributes:
46 project_id: The name of the Google Cloud project.
47 origin: Where the log is originating from.(i.e. server, worker)
48 Raises:
49 TurbiniaException: When an error occurs enabling GCP Stackdriver Logging.
50 """
51
52 # Patching cloud logging to allow custom fields
53 def my_enqueue(
54 self, record, message, resource=None, labels=None, trace=None,
55 span_id=None):
56 queue_entry = {
57 "info": {
58 "message": message,
59 "python_logger": record.name,
60 "origin": origin
61 },
62 "severity": _helpers._normalize_severity(record.levelno),
63 "resource": resource,
64 "labels": labels,
65 "trace": trace,
66 "span_id": span_id,
67 "timestamp": datetime.datetime.utcfromtimestamp(record.created),
68 }
69
70 self._queue.put_nowait(queue_entry)
71
72 _Worker.enqueue = my_enqueue
73
74 try:
75 client = cloud_logging.Client(project=project_id)
76 cloud_handler = cloud_logging.handlers.CloudLoggingHandler(client)
77 logger.addHandler(cloud_handler)
78
79 except exceptions.GoogleCloudError as exception:
80 msg = 'Error enabling Stackdriver Logging: {0:s}'.format(str(exception))
81 raise TurbiniaException(msg)
82
83
84 def setup_stackdriver_traceback(project_id):
85 """Set up Google Cloud Error Reporting
86
87 This method will enable Google Cloud Error Reporting.
88 All exceptions that occur within a Turbinia Task will be logged.
89
90 Attributes:
91 project_id: The name of the Google Cloud project.
92 Raises:
93 TurbiniaException: When an error occurs enabling GCP Error Reporting.
94 """
95 try:
96 client = error_reporting.Client(project=project_id)
97 except exceptions.GoogleCloudError as exception:
98 msg = 'Error enabling GCP Error Reporting: {0:s}'.format(str(exception))
99 raise TurbiniaException(msg)
100 return client
101
102
103 def get_logs(project_id, output_dir=None, days=1, query=None):
104 """Copies stackdriver logs to a local directory.
105
106 Attributes:
107 project_id: The name of the Google Cloud project.
108 output_dir: The directory where logs are stored.
109 query: Query to use to pull stackdriver logs.
110 days: number of days we want history for.
111 Raises:
112 TurbiniaException: When an error happens pulling the logs.
113 """
114 if not query:
115 query = 'jsonPayload.python_logger="turbinia"'
116 start_time = datetime.datetime.now() - timedelta(days=days)
117 start_string = start_time.strftime(DATETIME_FORMAT)
118 complete_query = '{0:s} timestamp>="{1:s}"'.format(query, start_string)
119 if output_dir:
120 file_path = os.path.join(
121 output_dir, 'turbinia_stackdriver_logs_{0:s}.jsonl'.format(
122 datetime.datetime.now().strftime('%s')))
123 output_file = open(file_path, 'w')
124 logger.info('Writing the logs to {0:s}'.format(file_path))
125 try:
126 client = cloud_logging.Client(project=project_id)
127 logger.info(
128 'Collecting the stackdriver logs with the following query: {0:s}'
129 .format(complete_query))
130
131 for entry in client.list_entries(order_by=cloud_logging.DESCENDING,
132 filter_=complete_query):
133 if not output_dir:
134 logger.info(json.dumps(entry.to_api_repr()))
135 else:
136 output_file.write(json.dumps(entry.to_api_repr()))
137 output_file.write('\n')
138 if output_dir:
139 output_file.close()
140 except google_api_exceptions.InvalidArgument as exception:
141 msg = 'Unable to parse query {0!s} with error {1!s}'.format(
142 query, exception)
143 raise TurbiniaException(msg)
144 except HttpError as exception:
145 msg = 'HTTP error querying logs. Make sure you have the right access on the project.{0!s}'.format(
146 exception)
147 raise TurbiniaException(msg)
148 except google_api_exceptions.GoogleAPIError as exception:
149 msg = 'Something went wrong with the API. {0!s}'.format(exception)
150 raise TurbiniaException(msg)
151
[end of turbinia/lib/google_cloud.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/turbinia/lib/google_cloud.py b/turbinia/lib/google_cloud.py
--- a/turbinia/lib/google_cloud.py
+++ b/turbinia/lib/google_cloud.py
@@ -30,8 +30,8 @@
from googleapiclient.errors import HttpError
from turbinia import TurbiniaException
-from google.cloud.logging import _helpers
-from google.cloud.logging.handlers.transports.background_thread import _Worker
+from google.cloud.logging_v2 import _helpers
+from google.cloud.logging_v2.handlers.transports.background_thread import _Worker
logger = logging.getLogger('turbinia')
|
{"golden_diff": "diff --git a/turbinia/lib/google_cloud.py b/turbinia/lib/google_cloud.py\n--- a/turbinia/lib/google_cloud.py\n+++ b/turbinia/lib/google_cloud.py\n@@ -30,8 +30,8 @@\n from googleapiclient.errors import HttpError\n \n from turbinia import TurbiniaException\n-from google.cloud.logging import _helpers\n-from google.cloud.logging.handlers.transports.background_thread import _Worker\n+from google.cloud.logging_v2 import _helpers\n+from google.cloud.logging_v2.handlers.transports.background_thread import _Worker\n \n logger = logging.getLogger('turbinia')\n", "issue": "Update google-cloud-logging package\nWe need to use version >= 2 because of dftimewolf dependencies, and we need to use <=2.0.2 for the google cloud error package, but when trying to use that version I get the following:\r\n\r\n```\r\n$ turbiniactl -d server\r\nTraceback (most recent call last):\r\n File \"/home/aaronpeterson/.local/share/virtualenvs/turbinia-aeSTftCa/bin/turbiniactl\", line 11, in <module>\r\n load_entry_point('turbinia', 'console_scripts', 'turbiniactl')()\r\n File \"/home/aaronpeterson/src/turbinia/turbinia/turbiniactl.py\", line 428, in main\r\n from turbinia.lib import google_cloud\r\n File \"/home/aaronpeterson/src/turbinia/turbinia/lib/google_cloud.py\", line 33, in <module>\r\n from google.cloud.logging import _helpers\r\nImportError: cannot import name '_helpers' from 'google.cloud.logging' (/home/aaronpeterson/.local/share/virtualenvs/turbinia-aeSTftCa/lib/python3.8/site-packages/google/cloud/logging/__init__.py)\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Google Cloud resources library.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport datetime\nfrom datetime import timedelta\nfrom turbinia.config import DATETIME_FORMAT\nimport logging\nimport os\nimport json\n\nfrom google.cloud import logging as cloud_logging\nfrom google.cloud import error_reporting\nfrom google.cloud import exceptions\nfrom google.api_core import exceptions as google_api_exceptions\nfrom googleapiclient.errors import HttpError\n\nfrom turbinia import TurbiniaException\nfrom google.cloud.logging import _helpers\nfrom google.cloud.logging.handlers.transports.background_thread import _Worker\n\nlogger = logging.getLogger('turbinia')\n\n\ndef setup_stackdriver_handler(project_id, origin):\n \"\"\"Set up Google Cloud Stackdriver Logging\n\n The Google Cloud Logging library will attach itself as a\n handler to the default Python logging module.\n\n Attributes:\n project_id: The name of the Google Cloud project.\n origin: Where the log is originating from.(i.e. 
server, worker)\n Raises:\n TurbiniaException: When an error occurs enabling GCP Stackdriver Logging.\n \"\"\"\n\n # Patching cloud logging to allow custom fields\n def my_enqueue(\n self, record, message, resource=None, labels=None, trace=None,\n span_id=None):\n queue_entry = {\n \"info\": {\n \"message\": message,\n \"python_logger\": record.name,\n \"origin\": origin\n },\n \"severity\": _helpers._normalize_severity(record.levelno),\n \"resource\": resource,\n \"labels\": labels,\n \"trace\": trace,\n \"span_id\": span_id,\n \"timestamp\": datetime.datetime.utcfromtimestamp(record.created),\n }\n\n self._queue.put_nowait(queue_entry)\n\n _Worker.enqueue = my_enqueue\n\n try:\n client = cloud_logging.Client(project=project_id)\n cloud_handler = cloud_logging.handlers.CloudLoggingHandler(client)\n logger.addHandler(cloud_handler)\n\n except exceptions.GoogleCloudError as exception:\n msg = 'Error enabling Stackdriver Logging: {0:s}'.format(str(exception))\n raise TurbiniaException(msg)\n\n\ndef setup_stackdriver_traceback(project_id):\n \"\"\"Set up Google Cloud Error Reporting\n\n This method will enable Google Cloud Error Reporting.\n All exceptions that occur within a Turbinia Task will be logged.\n\n Attributes:\n project_id: The name of the Google Cloud project.\n Raises:\n TurbiniaException: When an error occurs enabling GCP Error Reporting.\n \"\"\"\n try:\n client = error_reporting.Client(project=project_id)\n except exceptions.GoogleCloudError as exception:\n msg = 'Error enabling GCP Error Reporting: {0:s}'.format(str(exception))\n raise TurbiniaException(msg)\n return client\n\n\ndef get_logs(project_id, output_dir=None, days=1, query=None):\n \"\"\"Copies stackdriver logs to a local directory.\n\n Attributes:\n project_id: The name of the Google Cloud project.\n output_dir: The directory where logs are stored.\n query: Query to use to pull stackdriver logs. \n days: number of days we want history for.\n Raises:\n TurbiniaException: When an error happens pulling the logs.\n \"\"\"\n if not query:\n query = 'jsonPayload.python_logger=\"turbinia\"'\n start_time = datetime.datetime.now() - timedelta(days=days)\n start_string = start_time.strftime(DATETIME_FORMAT)\n complete_query = '{0:s} timestamp>=\"{1:s}\"'.format(query, start_string)\n if output_dir:\n file_path = os.path.join(\n output_dir, 'turbinia_stackdriver_logs_{0:s}.jsonl'.format(\n datetime.datetime.now().strftime('%s')))\n output_file = open(file_path, 'w')\n logger.info('Writing the logs to {0:s}'.format(file_path))\n try:\n client = cloud_logging.Client(project=project_id)\n logger.info(\n 'Collecting the stackdriver logs with the following query: {0:s}'\n .format(complete_query))\n\n for entry in client.list_entries(order_by=cloud_logging.DESCENDING,\n filter_=complete_query):\n if not output_dir:\n logger.info(json.dumps(entry.to_api_repr()))\n else:\n output_file.write(json.dumps(entry.to_api_repr()))\n output_file.write('\\n')\n if output_dir:\n output_file.close()\n except google_api_exceptions.InvalidArgument as exception:\n msg = 'Unable to parse query {0!s} with error {1!s}'.format(\n query, exception)\n raise TurbiniaException(msg)\n except HttpError as exception:\n msg = 'HTTP error querying logs. Make sure you have the right access on the project.{0!s}'.format(\n exception)\n raise TurbiniaException(msg)\n except google_api_exceptions.GoogleAPIError as exception:\n msg = 'Something went wrong with the API. 
{0!s}'.format(exception)\n raise TurbiniaException(msg)\n", "path": "turbinia/lib/google_cloud.py"}]}
| 2,344 | 133 |
gh_patches_debug_3352
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-3395
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change `author_email` in setup.py
https://github.com/streamlink/streamlink/blob/08e582580f3411b2de2c368f8b0cc7108264f990/setup.py#L83
@gravyboat
you've registered `[email protected]` a couple of years ago, right? Can this be used instead?
What's the email address of the `streamlink` account on pypi?
https://pypi.org/user/streamlink/
</issue>
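
Whichever address is settled on, the mechanical side is a one-line metadata change in `setup()`; a hedged sketch with a placeholder address (the real mailbox is elided above):

```python
from setuptools import setup

setup(
    name="streamlink",
    author="Streamlink",
    # placeholder address; the point is a mailbox the project controls,
    # not an individual maintainer's account
    author_email="project@example.invalid",
)
```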
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import codecs
3 from os import environ, path
4 from sys import argv, path as sys_path
5
6 from setuptools import find_packages, setup
7
8 import versioneer
9
10
11 deps = [
12 "requests>=2.21.0,<3.0",
13 "isodate",
14 "websocket-client",
15 # Support for SOCKS proxies
16 "PySocks!=1.5.7,>=1.5.6",
17 ]
18
19 # for encrypted streams
20 if environ.get("STREAMLINK_USE_PYCRYPTO"):
21 deps.append("pycrypto")
22 else:
23 # this version of pycryptodome is known to work and has a Windows wheel for py2.7, py3.3-3.6
24 deps.append("pycryptodome>=3.4.3,<4")
25
26 # for localization
27 if environ.get("STREAMLINK_USE_PYCOUNTRY"):
28 deps.append("pycountry")
29 else:
30 deps.append("iso-639")
31 deps.append("iso3166")
32
33 # When we build an egg for the Win32 bootstrap we don't want dependency
34 # information built into it.
35 if environ.get("NO_DEPS"):
36 deps = []
37
38 this_directory = path.abspath(path.dirname(__file__))
39 srcdir = path.join(this_directory, "src/")
40 sys_path.insert(0, srcdir)
41
42 with codecs.open(path.join(this_directory, "README.md"), 'r', "utf8") as f:
43 long_description = f.read()
44
45
46 def is_wheel_for_windows():
47 if "bdist_wheel" in argv:
48 names = ["win32", "win-amd64", "cygwin"]
49 length = len(argv)
50 for pos in range(argv.index("bdist_wheel") + 1, length):
51 if argv[pos] == "--plat-name" and pos + 1 < length:
52 return argv[pos + 1] in names
53 elif argv[pos][:12] == "--plat-name=":
54 return argv[pos][12:] in names
55 return False
56
57
58 entry_points = {
59 "console_scripts": ["streamlink=streamlink_cli.main:main"]
60 }
61
62 if is_wheel_for_windows():
63 entry_points["gui_scripts"] = ["streamlinkw=streamlink_cli.main:main"]
64
65
66 setup(name="streamlink",
67 version=versioneer.get_version(),
68 cmdclass=versioneer.get_cmdclass(),
69 description="Streamlink is a command-line utility that extracts streams "
70 "from various services and pipes them into a video player of "
71 "choice.",
72 long_description=long_description,
73 long_description_content_type="text/markdown",
74 url="https://github.com/streamlink/streamlink",
75 project_urls={
76 "Documentation": "https://streamlink.github.io/",
77 "Tracker": "https://github.com/streamlink/streamlink/issues",
78 "Source": "https://github.com/streamlink/streamlink",
79 "Funding": "https://opencollective.com/streamlink"
80 },
81 author="Streamlink",
82 # temp until we have a mailing list / global email
83 author_email="[email protected]",
84 license="Simplified BSD",
85 packages=find_packages("src"),
86 package_dir={"": "src"},
87 entry_points=entry_points,
88 install_requires=deps,
89 test_suite="tests",
90 python_requires=">=3.6, <4",
91 classifiers=["Development Status :: 5 - Production/Stable",
92 "License :: OSI Approved :: BSD License",
93 "Environment :: Console",
94 "Intended Audience :: End Users/Desktop",
95 "Operating System :: POSIX",
96 "Operating System :: Microsoft :: Windows",
97 "Operating System :: MacOS",
98 "Programming Language :: Python :: 3",
99 "Programming Language :: Python :: 3 :: Only",
100 "Programming Language :: Python :: 3.6",
101 "Programming Language :: Python :: 3.7",
102 "Programming Language :: Python :: 3.8",
103 "Programming Language :: Python :: 3.9",
104 "Topic :: Internet :: WWW/HTTP",
105 "Topic :: Multimedia :: Sound/Audio",
106 "Topic :: Multimedia :: Video",
107 "Topic :: Utilities"])
108
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -80,7 +80,7 @@
},
author="Streamlink",
# temp until we have a mailing list / global email
- author_email="[email protected]",
+ author_email="[email protected]",
license="Simplified BSD",
packages=find_packages("src"),
package_dir={"": "src"},
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -80,7 +80,7 @@\n },\n author=\"Streamlink\",\n # temp until we have a mailing list / global email\n- author_email=\"[email protected]\",\n+ author_email=\"[email protected]\",\n license=\"Simplified BSD\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n", "issue": "Change `author_email` in setup.py\nhttps://github.com/streamlink/streamlink/blob/08e582580f3411b2de2c368f8b0cc7108264f990/setup.py#L83\r\n\r\n@gravyboat \r\nyou've registered `[email protected]` a couple of years ago, right? Can this be used instead?\r\n\r\nWhat's the email address of the `streamlink` account on pypi?\r\nhttps://pypi.org/user/streamlink/\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nfrom os import environ, path\nfrom sys import argv, path as sys_path\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\n\ndeps = [\n \"requests>=2.21.0,<3.0\",\n \"isodate\",\n \"websocket-client\",\n # Support for SOCKS proxies\n \"PySocks!=1.5.7,>=1.5.6\",\n]\n\n# for encrypted streams\nif environ.get(\"STREAMLINK_USE_PYCRYPTO\"):\n deps.append(\"pycrypto\")\nelse:\n # this version of pycryptodome is known to work and has a Windows wheel for py2.7, py3.3-3.6\n deps.append(\"pycryptodome>=3.4.3,<4\")\n\n# for localization\nif environ.get(\"STREAMLINK_USE_PYCOUNTRY\"):\n deps.append(\"pycountry\")\nelse:\n deps.append(\"iso-639\")\n deps.append(\"iso3166\")\n\n# When we build an egg for the Win32 bootstrap we don\"t want dependency\n# information built into it.\nif environ.get(\"NO_DEPS\"):\n deps = []\n\nthis_directory = path.abspath(path.dirname(__file__))\nsrcdir = path.join(this_directory, \"src/\")\nsys_path.insert(0, srcdir)\n\nwith codecs.open(path.join(this_directory, \"README.md\"), 'r', \"utf8\") as f:\n long_description = f.read()\n\n\ndef is_wheel_for_windows():\n if \"bdist_wheel\" in argv:\n names = [\"win32\", \"win-amd64\", \"cygwin\"]\n length = len(argv)\n for pos in range(argv.index(\"bdist_wheel\") + 1, length):\n if argv[pos] == \"--plat-name\" and pos + 1 < length:\n return argv[pos + 1] in names\n elif argv[pos][:12] == \"--plat-name=\":\n return argv[pos][12:] in names\n return False\n\n\nentry_points = {\n \"console_scripts\": [\"streamlink=streamlink_cli.main:main\"]\n}\n\nif is_wheel_for_windows():\n entry_points[\"gui_scripts\"] = [\"streamlinkw=streamlink_cli.main:main\"]\n\n\nsetup(name=\"streamlink\",\n version=versioneer.get_version(),\n cmdclass=versioneer.get_cmdclass(),\n description=\"Streamlink is a command-line utility that extracts streams \"\n \"from various services and pipes them into a video player of \"\n \"choice.\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/streamlink/streamlink\",\n project_urls={\n \"Documentation\": \"https://streamlink.github.io/\",\n \"Tracker\": \"https://github.com/streamlink/streamlink/issues\",\n \"Source\": \"https://github.com/streamlink/streamlink\",\n \"Funding\": \"https://opencollective.com/streamlink\"\n },\n author=\"Streamlink\",\n # temp until we have a mailing list / global email\n author_email=\"[email protected]\",\n license=\"Simplified BSD\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n entry_points=entry_points,\n install_requires=deps,\n test_suite=\"tests\",\n python_requires=\">=3.6, <4\",\n classifiers=[\"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: BSD License\",\n 
\"Environment :: Console\",\n \"Intended Audience :: End Users/Desktop\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Multimedia :: Sound/Audio\",\n \"Topic :: Multimedia :: Video\",\n \"Topic :: Utilities\"])\n", "path": "setup.py"}]}
| 1,757 | 102 |
gh_patches_debug_30635
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-1848
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document cocotb module variables
The `cocotb` module has some important variables which aren't documented:
- [ ] SIM_NAME
- [ ] SIM_VERSION
- [ ] RANDOM_SEED
- [ ] log
- [X] scheduler
- [ ] regression_manager
- [X] plusargs
- [ ] argv/argc
- [ ] LANGUAGE
</issue>
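
For the variables listed above, the usual Sphinx-friendly options are a `#:` comment before the assignment, an attribute docstring right after it, or `py:data::` entries in the reference docs. A hedged sketch of the in-module style (wording and mechanism illustrative, not necessarily what the eventual fix uses):

```python
#: Name string reported by the running simulator (``None`` until initialization).
SIM_NAME = None

#: Version string reported by the running simulator.
SIM_VERSION = None

RANDOM_SEED = None
"""Seed passed to :func:`random.seed`, recorded so a failing run can be reproduced."""
```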
<code>
[start of cocotb/__init__.py]
1 # Copyright (c) 2013 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Cocotb is a coroutine, cosimulation framework for writing testbenches in Python.
30
31 See https://docs.cocotb.org for full documentation
32 """
33 import os
34 import sys
35 import logging
36 import threading
37 import random
38 import time
39 import warnings
40
41 import cocotb._os_compat # must appear first, before the first import of cocotb.simulator
42 import cocotb.handle
43 import cocotb.log
44 from cocotb.scheduler import Scheduler
45 from cocotb.regression import RegressionManager
46
47
48 # Things we want in the cocotb namespace
49 from cocotb.decorators import test, coroutine, hook, function, external # noqa: F401
50
51 from ._version import __version__
52
53
54 def _setup_logging():
55 global log
56
57 def _reopen_stream_with_buffering(stream_name):
58 try:
59 if not getattr(sys, stream_name).isatty():
60 setattr(sys, stream_name, os.fdopen(getattr(sys, stream_name).fileno(), 'w', 1))
61 return True
62 return False
63 except Exception as e:
64 return e
65
66 # If stdout/stderr are not TTYs, Python may not have opened them with line
67 # buffering. In that case, try to reopen them with line buffering
68 # explicitly enabled. This ensures that prints such as stack traces always
69 # appear. Continue silently if this fails.
70 _stdout_buffer_result = _reopen_stream_with_buffering('stdout')
71 _stderr_buffer_result = _reopen_stream_with_buffering('stderr')
72
73 # Don't set the logging up until we've attempted to fix the standard IO,
74 # otherwise it will end up connected to the unfixed IO.
75 cocotb.log.default_config()
76 log = logging.getLogger(__name__)
77
78 # we can't log these things until the logging is set up!
79 if _stderr_buffer_result is True:
80 log.debug("Reopened stderr with line buffering")
81 if _stdout_buffer_result is True:
82 log.debug("Reopened stdout with line buffering")
83 if isinstance(_stdout_buffer_result, Exception) or isinstance(_stderr_buffer_result, Exception):
84 if isinstance(_stdout_buffer_result, Exception):
85 log.warning("Failed to ensure that stdout is line buffered", exc_info=_stdout_buffer_result)
86 if isinstance(_stderr_buffer_result, Exception):
87 log.warning("Failed to ensure that stderr is line buffered", exc_info=_stderr_buffer_result)
88 log.warning("Some stack traces may not appear because of this.")
89
90 del _stderr_buffer_result, _stdout_buffer_result
91
92
93 # Singleton scheduler instance
94 # NB this cheekily ensures a singleton since we're replacing the reference
95 # so that cocotb.scheduler gives you the singleton instance and not the
96 # scheduler package
97
98 scheduler = None
99 """The global scheduler instance."""
100
101 regression_manager = None
102
103 plusargs = {}
104 """A dictionary of "plusargs" handed to the simulation."""
105
106
107 def fork(coro):
108 """ Schedule a coroutine to be run concurrently. See :ref:`coroutines` for details on it's use. """
109 return scheduler.add(coro)
110
111
112 # FIXME is this really required?
113 _rlock = threading.RLock()
114
115 LANGUAGE = os.getenv("TOPLEVEL_LANG")
116
117
118 def mem_debug(port):
119 import cocotb.memdebug
120 cocotb.memdebug.start(port)
121
122
123 def _initialise_testbench(argv_):
124 """Initialize testbench.
125
126 This function is called after the simulator has elaborated all
127 entities and is ready to run the test.
128
129 The test must be defined by the environment variables
130 :envvar:`MODULE` and :envvar:`TESTCASE`.
131
132 The environment variable :envvar:`COCOTB_HOOKS`, if present, contains a
133 comma-separated list of modules to be executed before the first test.
134 """
135 _rlock.acquire()
136
137 global argc, argv
138 argv = argv_
139 argc = len(argv)
140
141 root_name = os.getenv("TOPLEVEL")
142 if root_name is not None:
143 if root_name == "":
144 root_name = None
145 elif '.' in root_name:
146 # Skip any library component of the toplevel
147 root_name = root_name.split(".", 1)[1]
148
149 # sys.path normally includes "" (the current directory), but does not appear to when python is embedded.
150 # Add it back because users expect to be able to import files in their test directory.
151 # TODO: move this to gpi_embed.cpp
152 sys.path.insert(0, "")
153
154 _setup_logging()
155
156 # From https://www.python.org/dev/peps/pep-0565/#recommended-filter-settings-for-test-runners
157 # If the user doesn't want to see these, they can always change the global
158 # warning settings in their test module.
159 if not sys.warnoptions:
160 warnings.simplefilter("default")
161
162 from cocotb import simulator
163
164 global SIM_NAME, SIM_VERSION
165 SIM_NAME = simulator.get_simulator_product()
166 SIM_VERSION = simulator.get_simulator_version()
167
168 cocotb.log.info("Running on {} version {}".format(SIM_NAME, SIM_VERSION))
169
170 memcheck_port = os.getenv('MEMCHECK')
171 if memcheck_port is not None:
172 mem_debug(int(memcheck_port))
173
174 log.info("Running tests with cocotb v%s from %s" %
175 (__version__, os.path.dirname(__file__)))
176
177 # Create the base handle type
178
179 process_plusargs()
180
181 global scheduler
182 scheduler = Scheduler()
183
184 # Seed the Python random number generator to make this repeatable
185 global RANDOM_SEED
186 RANDOM_SEED = os.getenv('RANDOM_SEED')
187
188 if RANDOM_SEED is None:
189 if 'ntb_random_seed' in plusargs:
190 RANDOM_SEED = eval(plusargs['ntb_random_seed'])
191 elif 'seed' in plusargs:
192 RANDOM_SEED = eval(plusargs['seed'])
193 else:
194 RANDOM_SEED = int(time.time())
195 log.info("Seeding Python random module with %d" % (RANDOM_SEED))
196 else:
197 RANDOM_SEED = int(RANDOM_SEED)
198 log.info("Seeding Python random module with supplied seed %d" % (RANDOM_SEED))
199 random.seed(RANDOM_SEED)
200
201 # Setup DUT object
202 from cocotb import simulator
203
204 handle = simulator.get_root_handle(root_name)
205 if not handle:
206 raise RuntimeError("Can not find root handle ({})".format(root_name))
207
208 dut = cocotb.handle.SimHandle(handle)
209
210 # start Regression Manager
211 global regression_manager
212 regression_manager = RegressionManager.from_discovery(dut)
213 regression_manager.execute()
214
215 _rlock.release()
216 return True
217
218
219 def _sim_event(level, message):
220 """Function that can be called externally to signal an event."""
221 SIM_INFO = 0
222 SIM_TEST_FAIL = 1
223 SIM_FAIL = 2
224 from cocotb.result import TestFailure, SimFailure
225
226 if level is SIM_TEST_FAIL:
227 scheduler.log.error("Failing test at simulator request")
228 scheduler.finish_test(TestFailure("Failure from external source: %s" %
229 message))
230 elif level is SIM_FAIL:
231 # We simply return here as the simulator will exit
232 # so no cleanup is needed
233 msg = ("Failing test at simulator request before test run completion: "
234 "%s" % message)
235 scheduler.log.error(msg)
236 scheduler.finish_scheduler(SimFailure(msg))
237 else:
238 scheduler.log.error("Unsupported sim event")
239
240 return True
241
242
243 def process_plusargs():
244
245 global plusargs
246
247 plusargs = {}
248
249 for option in cocotb.argv:
250 if option.startswith('+'):
251 if option.find('=') != -1:
252 (name, value) = option[1:].split('=')
253 plusargs[name] = value
254 else:
255 plusargs[option[1:]] = True
256
[end of cocotb/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cocotb/__init__.py b/cocotb/__init__.py
--- a/cocotb/__init__.py
+++ b/cocotb/__init__.py
@@ -37,6 +37,7 @@
import random
import time
import warnings
+from typing import Dict, List, Union
import cocotb._os_compat # must appear first, before the first import of cocotb.simulator
import cocotb.handle
@@ -95,13 +96,35 @@
# so that cocotb.scheduler gives you the singleton instance and not the
# scheduler package
-scheduler = None
+scheduler = None # type: cocotb.scheduler.Scheduler
"""The global scheduler instance."""
-regression_manager = None
+regression_manager = None # type: cocotb.regression.RegressionManager
+"""The global regression manager instance."""
-plusargs = {}
-"""A dictionary of "plusargs" handed to the simulation."""
+argv = None # type: List[str]
+"""The argument list as seen by the simulator"""
+
+argc = None # type: int
+"""The length of :data:`cocotb.argv`"""
+
+plusargs = None # type: Dict[str, Union[bool, str]]
+"""A dictionary of "plusargs" handed to the simulation. See :make:var:`PLUSARGS` for details."""
+
+LANGUAGE = os.getenv("TOPLEVEL_LANG") # type: str
+"""The value of :make:var:`TOPLEVEL_LANG`"""
+
+SIM_NAME = None # type: str
+"""The running simulator product information. ``None`` if :mod:`cocotb` was not loaded from a simulator"""
+
+SIM_VERSION = None # type: str
+"""The version of the running simulator. ``None`` if :mod:`cocotb` was not loaded from a simulator"""
+
+RANDOM_SEED = None # type: int
+"""
+The value passed to the Python default random number generator.
+See :envvar:`RANDOM_SEED` for details on how the value is computed.
+"""
def fork(coro):
@@ -112,8 +135,6 @@
# FIXME is this really required?
_rlock = threading.RLock()
-LANGUAGE = os.getenv("TOPLEVEL_LANG")
-
def mem_debug(port):
import cocotb.memdebug
|
{"golden_diff": "diff --git a/cocotb/__init__.py b/cocotb/__init__.py\n--- a/cocotb/__init__.py\n+++ b/cocotb/__init__.py\n@@ -37,6 +37,7 @@\n import random\n import time\n import warnings\n+from typing import Dict, List, Union\n \n import cocotb._os_compat # must appear first, before the first import of cocotb.simulator\n import cocotb.handle\n@@ -95,13 +96,35 @@\n # so that cocotb.scheduler gives you the singleton instance and not the\n # scheduler package\n \n-scheduler = None\n+scheduler = None # type: cocotb.scheduler.Scheduler\n \"\"\"The global scheduler instance.\"\"\"\n \n-regression_manager = None\n+regression_manager = None # type: cocotb.regression.RegressionManager\n+\"\"\"The global regression manager instance.\"\"\"\n \n-plusargs = {}\n-\"\"\"A dictionary of \"plusargs\" handed to the simulation.\"\"\"\n+argv = None # type: List[str]\n+\"\"\"The argument list as seen by the simulator\"\"\"\n+\n+argc = None # type: int\n+\"\"\"The length of :data:`cocotb.argv`\"\"\"\n+\n+plusargs = None # type: Dict[str, Union[bool, str]]\n+\"\"\"A dictionary of \"plusargs\" handed to the simulation. See :make:var:`PLUSARGS` for details.\"\"\"\n+\n+LANGUAGE = os.getenv(\"TOPLEVEL_LANG\") # type: str\n+\"\"\"The value of :make:var:`TOPLEVEL_LANG`\"\"\"\n+\n+SIM_NAME = None # type: str\n+\"\"\"The running simulator product information. ``None`` if :mod:`cocotb` was not loaded from a simulator\"\"\"\n+\n+SIM_VERSION = None # type: str\n+\"\"\"The version of the running simulator. ``None`` if :mod:`cocotb` was not loaded from a simulator\"\"\"\n+\n+RANDOM_SEED = None # type: int\n+\"\"\"\n+The value passed to the Python default random number generator.\n+See :envvar:`RANDOM_SEED` for details on how the value is computed.\n+\"\"\"\n \n \n def fork(coro):\n@@ -112,8 +135,6 @@\n # FIXME is this really required?\n _rlock = threading.RLock()\n \n-LANGUAGE = os.getenv(\"TOPLEVEL_LANG\")\n-\n \n def mem_debug(port):\n import cocotb.memdebug\n", "issue": "Document cocotb module variables\nThe `cocotb` module has some important variables which aren't documented:\r\n\r\n- [ ] SIM_NAME\r\n- [ ] SIM_VERSION\r\n- [ ] RANDOM_SEED\r\n- [ ] log\r\n- [X] scheduler\r\n- [ ] regression_manager\r\n- [X] plusargs\r\n- [ ] argv/argc\r\n- [ ] LANGUAGE\n", "before_files": [{"content": "# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nCocotb is a coroutine, cosimulation framework for writing testbenches in Python.\n\nSee https://docs.cocotb.org for full documentation\n\"\"\"\nimport os\nimport sys\nimport logging\nimport threading\nimport random\nimport time\nimport warnings\n\nimport cocotb._os_compat # must appear first, before the first import of cocotb.simulator\nimport cocotb.handle\nimport cocotb.log\nfrom cocotb.scheduler import Scheduler\nfrom cocotb.regression import RegressionManager\n\n\n# Things we want in the cocotb namespace\nfrom cocotb.decorators import test, coroutine, hook, function, external # noqa: F401\n\nfrom ._version import __version__\n\n\ndef _setup_logging():\n global log\n\n def _reopen_stream_with_buffering(stream_name):\n try:\n if not getattr(sys, stream_name).isatty():\n setattr(sys, stream_name, os.fdopen(getattr(sys, stream_name).fileno(), 'w', 1))\n return True\n return False\n except Exception as e:\n return e\n\n # If stdout/stderr are not TTYs, Python may not have opened them with line\n # buffering. In that case, try to reopen them with line buffering\n # explicitly enabled. This ensures that prints such as stack traces always\n # appear. Continue silently if this fails.\n _stdout_buffer_result = _reopen_stream_with_buffering('stdout')\n _stderr_buffer_result = _reopen_stream_with_buffering('stderr')\n\n # Don't set the logging up until we've attempted to fix the standard IO,\n # otherwise it will end up connected to the unfixed IO.\n cocotb.log.default_config()\n log = logging.getLogger(__name__)\n\n # we can't log these things until the logging is set up!\n if _stderr_buffer_result is True:\n log.debug(\"Reopened stderr with line buffering\")\n if _stdout_buffer_result is True:\n log.debug(\"Reopened stdout with line buffering\")\n if isinstance(_stdout_buffer_result, Exception) or isinstance(_stderr_buffer_result, Exception):\n if isinstance(_stdout_buffer_result, Exception):\n log.warning(\"Failed to ensure that stdout is line buffered\", exc_info=_stdout_buffer_result)\n if isinstance(_stderr_buffer_result, Exception):\n log.warning(\"Failed to ensure that stderr is line buffered\", exc_info=_stderr_buffer_result)\n log.warning(\"Some stack traces may not appear because of this.\")\n\n del _stderr_buffer_result, _stdout_buffer_result\n\n\n# Singleton scheduler instance\n# NB this cheekily ensures a singleton since we're replacing the reference\n# so that cocotb.scheduler gives you the singleton instance and not the\n# scheduler package\n\nscheduler = None\n\"\"\"The global scheduler instance.\"\"\"\n\nregression_manager = None\n\nplusargs = {}\n\"\"\"A dictionary of \"plusargs\" handed to the simulation.\"\"\"\n\n\ndef fork(coro):\n \"\"\" Schedule a coroutine to be run concurrently. See :ref:`coroutines` for details on it's use. 
\"\"\"\n return scheduler.add(coro)\n\n\n# FIXME is this really required?\n_rlock = threading.RLock()\n\nLANGUAGE = os.getenv(\"TOPLEVEL_LANG\")\n\n\ndef mem_debug(port):\n import cocotb.memdebug\n cocotb.memdebug.start(port)\n\n\ndef _initialise_testbench(argv_):\n \"\"\"Initialize testbench.\n\n This function is called after the simulator has elaborated all\n entities and is ready to run the test.\n\n The test must be defined by the environment variables\n :envvar:`MODULE` and :envvar:`TESTCASE`.\n\n The environment variable :envvar:`COCOTB_HOOKS`, if present, contains a\n comma-separated list of modules to be executed before the first test.\n \"\"\"\n _rlock.acquire()\n\n global argc, argv\n argv = argv_\n argc = len(argv)\n\n root_name = os.getenv(\"TOPLEVEL\")\n if root_name is not None:\n if root_name == \"\":\n root_name = None\n elif '.' in root_name:\n # Skip any library component of the toplevel\n root_name = root_name.split(\".\", 1)[1]\n\n # sys.path normally includes \"\" (the current directory), but does not appear to when python is embedded.\n # Add it back because users expect to be able to import files in their test directory.\n # TODO: move this to gpi_embed.cpp\n sys.path.insert(0, \"\")\n\n _setup_logging()\n\n # From https://www.python.org/dev/peps/pep-0565/#recommended-filter-settings-for-test-runners\n # If the user doesn't want to see these, they can always change the global\n # warning settings in their test module.\n if not sys.warnoptions:\n warnings.simplefilter(\"default\")\n\n from cocotb import simulator\n\n global SIM_NAME, SIM_VERSION\n SIM_NAME = simulator.get_simulator_product()\n SIM_VERSION = simulator.get_simulator_version()\n\n cocotb.log.info(\"Running on {} version {}\".format(SIM_NAME, SIM_VERSION))\n\n memcheck_port = os.getenv('MEMCHECK')\n if memcheck_port is not None:\n mem_debug(int(memcheck_port))\n\n log.info(\"Running tests with cocotb v%s from %s\" %\n (__version__, os.path.dirname(__file__)))\n\n # Create the base handle type\n\n process_plusargs()\n\n global scheduler\n scheduler = Scheduler()\n\n # Seed the Python random number generator to make this repeatable\n global RANDOM_SEED\n RANDOM_SEED = os.getenv('RANDOM_SEED')\n\n if RANDOM_SEED is None:\n if 'ntb_random_seed' in plusargs:\n RANDOM_SEED = eval(plusargs['ntb_random_seed'])\n elif 'seed' in plusargs:\n RANDOM_SEED = eval(plusargs['seed'])\n else:\n RANDOM_SEED = int(time.time())\n log.info(\"Seeding Python random module with %d\" % (RANDOM_SEED))\n else:\n RANDOM_SEED = int(RANDOM_SEED)\n log.info(\"Seeding Python random module with supplied seed %d\" % (RANDOM_SEED))\n random.seed(RANDOM_SEED)\n\n # Setup DUT object\n from cocotb import simulator\n\n handle = simulator.get_root_handle(root_name)\n if not handle:\n raise RuntimeError(\"Can not find root handle ({})\".format(root_name))\n\n dut = cocotb.handle.SimHandle(handle)\n\n # start Regression Manager\n global regression_manager\n regression_manager = RegressionManager.from_discovery(dut)\n regression_manager.execute()\n\n _rlock.release()\n return True\n\n\ndef _sim_event(level, message):\n \"\"\"Function that can be called externally to signal an event.\"\"\"\n SIM_INFO = 0\n SIM_TEST_FAIL = 1\n SIM_FAIL = 2\n from cocotb.result import TestFailure, SimFailure\n\n if level is SIM_TEST_FAIL:\n scheduler.log.error(\"Failing test at simulator request\")\n scheduler.finish_test(TestFailure(\"Failure from external source: %s\" %\n message))\n elif level is SIM_FAIL:\n # We simply return here as the simulator will exit\n # so 
no cleanup is needed\n msg = (\"Failing test at simulator request before test run completion: \"\n \"%s\" % message)\n scheduler.log.error(msg)\n scheduler.finish_scheduler(SimFailure(msg))\n else:\n scheduler.log.error(\"Unsupported sim event\")\n\n return True\n\n\ndef process_plusargs():\n\n global plusargs\n\n plusargs = {}\n\n for option in cocotb.argv:\n if option.startswith('+'):\n if option.find('=') != -1:\n (name, value) = option[1:].split('=')\n plusargs[name] = value\n else:\n plusargs[option[1:]] = True\n", "path": "cocotb/__init__.py"}]}
| 3,326 | 528 |
gh_patches_debug_35919
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-1424
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lutris takes extremely long time to parse very long string keys in registry
If user.reg contains a very long (we are talking megabytes long here) string key, it takes a very long time to start anything in this wineprefix with Lutris. With a ~15MB key, Lutris was taking ~30 minutes. It happens before starting wine, in the "preparing to start" phase.
P.S. If you ask how this happens - Battletech by HBS writes some analytics into a string key in the registry. And it only adds to it, never cleans it up.
</issue>
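A plausible reading of this slowdown is the classic quadratic string-concatenation pattern: if every continuation line of a huge value is appended to the accumulated string with `+=`, each append copies everything accumulated so far. The sketch below only illustrates that general effect and is not Lutris code:

```python
# Quadratic: each += builds a brand-new string, copying all previous data.
def join_slow(lines):
    value = ""
    for line in lines:
        value += "\n" + line
    return value


# Linear: collect the pieces first and join once at the end.
def join_fast(lines):
    return "\n".join(lines)
```

For a value spread across very many continuation lines, the difference between the two can easily be minutes versus well under a second.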
<code>
[start of lutris/util/wine/registry.py]
1 import os
2 import re
3 from collections import OrderedDict
4 from datetime import datetime
5 from lutris.util.log import logger
6 from lutris.util import system
7 from lutris.util.wine.wine import WINE_DEFAULT_ARCH
8
9 (
10 REG_NONE,
11 REG_SZ,
12 REG_EXPAND_SZ,
13 REG_BINARY,
14 REG_DWORD,
15 REG_DWORD_BIG_ENDIAN,
16 REG_LINK,
17 REG_MULTI_SZ,
18 ) = range(8)
19
20 DATA_TYPES = {
21 '"': REG_SZ,
22 'str:"': REG_SZ,
23 'str(2):"': REG_EXPAND_SZ,
24 'str(7):"': REG_MULTI_SZ,
25 "hex": REG_BINARY,
26 "dword": REG_DWORD,
27 }
28
29
30 class WindowsFileTime:
31 """Utility class to deal with Windows FILETIME structures.
32
33 See: https://msdn.microsoft.com/en-us/library/ms724284(v=vs.85).aspx
34 """
35
36 ticks_per_seconds = 10000000 # 1 tick every 100 nanoseconds
37 epoch_delta = 11644473600 # 3600 * 24 * ((1970 - 1601) * 365 + 89)
38
39 def __init__(self, timestamp=None):
40 self.timestamp = timestamp
41
42 def __repr__(self):
43 return "<{}>: {}".format(self.__class__.__name__, self.timestamp)
44
45 @classmethod
46 def from_hex(cls, hexvalue):
47 timestamp = int(hexvalue, 16)
48 return WindowsFileTime(timestamp)
49
50 def to_hex(self):
51 return "{:x}".format(self.timestamp)
52
53 @classmethod
54 def from_unix_timestamp(cls, timestamp):
55 timestamp = timestamp + cls.epoch_delta
56 timestamp = int(timestamp * cls.ticks_per_seconds)
57 return WindowsFileTime(timestamp)
58
59 def to_unix_timestamp(self):
60 if not self.timestamp:
61 raise ValueError("No timestamp set")
62 unix_ts = self.timestamp / self.ticks_per_seconds
63 unix_ts = unix_ts - self.epoch_delta
64 return unix_ts
65
66 def to_date_time(self):
67 return datetime.fromtimestamp(self.to_unix_timestamp())
68
69
70 class WineRegistry:
71 version_header = "WINE REGISTRY Version "
72 relative_to_header = ";; All keys relative to "
73
74 def __init__(self, reg_filename=None):
75 self.arch = WINE_DEFAULT_ARCH
76 self.version = 2
77 self.relative_to = "\\\\User\\\\S-1-5-21-0-0-0-1000"
78 self.keys = OrderedDict()
79 self.reg_filename = reg_filename
80 if reg_filename:
81 if not system.path_exists(reg_filename):
82 logger.error("Unexisting registry %s", reg_filename)
83 self.parse_reg_file(reg_filename)
84
85 @property
86 def prefix_path(self):
87 """Return the Wine prefix path (where the .reg files are located)"""
88 if self.reg_filename:
89 return os.path.dirname(self.reg_filename)
90
91 @staticmethod
92 def get_raw_registry(reg_filename):
93 """Return an array of the unprocessed contents of a registry file"""
94 if not system.path_exists(reg_filename):
95 return []
96 with open(reg_filename, "r") as reg_file:
97
98 try:
99 registry_content = reg_file.readlines()
100 except Exception: # pylint: disable=broad-except
101 logger.exception(
102 "Failed to registry read %s, please send attach this file in a bug report",
103 reg_filename
104 )
105 registry_content = []
106 return registry_content
107
108 def parse_reg_file(self, reg_filename):
109 registry_lines = self.get_raw_registry(reg_filename)
110 current_key = None
111 add_next_to_value = False
112 for line in registry_lines:
113 line = line.rstrip("\n") # Remove trailing newlines
114
115 if line.startswith(self.version_header):
116 self.version = int(line[len(self.version_header):])
117 continue
118
119 if line.startswith(self.relative_to_header):
120 self.relative_to = line[len(self.relative_to_header):]
121 continue
122
123 if line.startswith("#arch"):
124 self.arch = line.split("=")[1]
125 continue
126
127 if line.startswith("["):
128 current_key = WineRegistryKey(key_def=line)
129 self.keys[current_key.name] = current_key
130 continue
131
132 if current_key:
133 if add_next_to_value:
134 current_key.add_to_last(line)
135 else:
136 current_key.parse(line)
137 add_next_to_value = line.endswith("\\")
138
139 def render(self):
140 content = "{}{}\n".format(self.version_header, self.version)
141 content += "{}{}\n\n".format(self.relative_to_header, self.relative_to)
142 content += "#arch={}\n".format(self.arch)
143 for key in self.keys:
144 content += "\n"
145 content += self.keys[key].render()
146 return content
147
148 def save(self, path=None):
149 """Write the registry to a file"""
150 if not path:
151 path = self.reg_filename
152 if not path:
153 raise OSError("No filename provided")
154 with open(path, "w") as registry_file:
155 registry_file.write(self.render())
156
157 def query(self, path, subkey):
158 key = self.keys.get(path)
159 if key:
160 return key.get_subkey(subkey)
161
162 def set_value(self, path, subkey, value):
163 key = self.keys.get(path)
164 if not key:
165 key = WineRegistryKey(path=path)
166 self.keys[key.name] = key
167 key.set_subkey(subkey, value)
168
169 def clear_key(self, path):
170 """Removes all subkeys from a key"""
171 key = self.keys.get(path)
172 if not key:
173 return
174 key.subkeys.clear()
175
176 def clear_subkeys(self, path, keys):
177 """Remove some subkeys from a key"""
178 key = self.keys.get(path)
179 if not key:
180 return
181 for subkey in list(key.subkeys.keys()):
182 if subkey not in keys:
183 continue
184 key.subkeys.pop(subkey)
185
186 def get_unix_path(self, windows_path):
187 windows_path = windows_path.replace("\\\\", "/")
188 if not self.prefix_path:
189 return
190 drives_path = os.path.join(self.prefix_path, "dosdevices")
191 if not system.path_exists(drives_path):
192 return
193 letter, relpath = windows_path.split(":", 1)
194 relpath = relpath.strip("/")
195 drive_link = os.path.join(drives_path, letter.lower() + ":")
196 try:
197 drive_path = os.readlink(drive_link)
198 except FileNotFoundError:
199 logger.error("Unable to read link for %s", drive_link)
200 return
201
202 if not os.path.isabs(drive_path):
203 drive_path = os.path.join(drives_path, drive_path)
204 return os.path.join(drive_path, relpath)
205
206
207 class WineRegistryKey:
208 def __init__(self, key_def=None, path=None):
209
210 self.subkeys = OrderedDict()
211 self.metas = OrderedDict()
212
213 if path:
214 # Key is created by path, it's a new key
215 timestamp = datetime.now().timestamp()
216 self.name = path
217 self.raw_name = "[{}]".format(path.replace("/", "\\\\"))
218 self.raw_timestamp = " ".join(str(timestamp).split("."))
219
220 windows_timestamp = WindowsFileTime.from_unix_timestamp(timestamp)
221 self.metas["time"] = windows_timestamp.to_hex()
222 else:
223 # Existing key loaded from file
224 self.raw_name, self.raw_timestamp = re.split(
225 re.compile(r"(?<=[^\\]\]) "), key_def, maxsplit=1
226 )
227 self.name = self.raw_name.replace("\\\\", "/").strip("[]")
228
229 # Parse timestamp either as int or float
230 ts_parts = self.raw_timestamp.strip().split()
231 if len(ts_parts) == 1:
232 self.timestamp = int(ts_parts[0])
233 else:
234 self.timestamp = float("{}.{}".format(ts_parts[0], ts_parts[1]))
235
236 def __str__(self):
237 return "{0} {1}".format(self.raw_name, self.raw_timestamp)
238
239 def parse(self, line):
240 """Parse a registry line, populating meta and subkeys"""
241 if len(line) < 4:
242 # Line is too short, nothing to parse
243 return
244
245 if line.startswith("#"):
246 self.add_meta(line)
247 elif line.startswith('"'):
248 try:
249 key, value = re.split(re.compile(r"(?<![^\\]\\\")="), line, maxsplit=1)
250 except ValueError as ex:
251 logger.error("Unable to parse line %s", line)
252 logger.exception(ex)
253 return
254 key = key[1:-1]
255 self.subkeys[key] = value
256 elif line.startswith("@"):
257 key, value = line.split("=", 1)
258 self.subkeys["default"] = value
259
260 def add_to_last(self, line):
261 last_subkey = list(self.subkeys.keys())[-1]
262 self.subkeys[last_subkey] += "\n{}".format(line)
263
264 def render(self):
265 """Return the content of the key in the wine .reg format"""
266 content = self.raw_name + " " + self.raw_timestamp + "\n"
267 for key, value in self.metas.items():
268 if value is None:
269 content += "#{}\n".format(key)
270 else:
271 content += "#{}={}\n".format(key, value)
272 for key, value in self.subkeys.items():
273 if key == "default":
274 key = "@"
275 else:
276 key = '"{}"'.format(key)
277 content += "{}={}\n".format(key, value)
278 return content
279
280 def render_value(self, value):
281 if isinstance(value, int):
282 return "dword:{:08x}".format(value)
283 if isinstance(value, str):
284 return '"{}"'.format(value)
285 raise NotImplementedError("TODO")
286
287 def add_meta(self, meta_line):
288 if not meta_line.startswith("#"):
289 raise ValueError("Key metas should start with '#'")
290 meta_line = meta_line[1:]
291 parts = meta_line.split("=")
292 if len(parts) == 2:
293 key = parts[0]
294 value = parts[1]
295 elif len(parts) == 1:
296 key = parts[0]
297 value = None
298 else:
299 raise ValueError("Invalid meta line '{}'".format(meta_line))
300 self.metas[key] = value
301
302 def get_meta(self, name):
303 return self.metas.get(name)
304
305 def set_subkey(self, name, value):
306 self.subkeys[name] = self.render_value(value)
307
308 def get_subkey(self, name):
309 if name not in self.subkeys:
310 return None
311 value = self.subkeys[name]
312 if value.startswith('"') and value.endswith('"'):
313 return value[1:-1]
314 if value.startswith("dword:"):
315 return int(value[6:], 16)
316 raise ValueError("Handle %s" % value)
317
[end of lutris/util/wine/registry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/util/wine/registry.py b/lutris/util/wine/registry.py
--- a/lutris/util/wine/registry.py
+++ b/lutris/util/wine/registry.py
@@ -109,32 +109,29 @@
registry_lines = self.get_raw_registry(reg_filename)
current_key = None
add_next_to_value = False
+ additional_values = []
for line in registry_lines:
- line = line.rstrip("\n") # Remove trailing newlines
-
- if line.startswith(self.version_header):
- self.version = int(line[len(self.version_header):])
- continue
-
- if line.startswith(self.relative_to_header):
- self.relative_to = line[len(self.relative_to_header):]
- continue
-
- if line.startswith("#arch"):
- self.arch = line.split("=")[1]
- continue
-
- if line.startswith("["):
- current_key = WineRegistryKey(key_def=line)
- self.keys[current_key.name] = current_key
- continue
+ line = line.rstrip("\n")
if current_key:
if add_next_to_value:
- current_key.add_to_last(line)
- else:
+ additional_values.append(line)
+ elif not add_next_to_value:
+ if additional_values:
+ additional_values = '\n'.join(additional_values)
+ current_key.add_to_last(additional_values)
+ additional_values = []
current_key.parse(line)
add_next_to_value = line.endswith("\\")
+ elif line.startswith("["):
+ current_key = WineRegistryKey(key_def=line)
+ self.keys[current_key.name] = current_key
+ elif line.startswith(self.version_header):
+ self.version = int(line[len(self.version_header):])
+ elif line.startswith(self.relative_to_header):
+ self.relative_to = line[len(self.relative_to_header):]
+ elif line.startswith("#arch"):
+ self.arch = line.split("=")[1]
def render(self):
content = "{}{}\n".format(self.version_header, self.version)
@@ -258,7 +255,7 @@
self.subkeys["default"] = value
def add_to_last(self, line):
- last_subkey = list(self.subkeys.keys())[-1]
+ last_subkey = next(reversed(self.subkeys))
self.subkeys[last_subkey] += "\n{}".format(line)
def render(self):
|
{"golden_diff": "diff --git a/lutris/util/wine/registry.py b/lutris/util/wine/registry.py\n--- a/lutris/util/wine/registry.py\n+++ b/lutris/util/wine/registry.py\n@@ -109,32 +109,29 @@\n registry_lines = self.get_raw_registry(reg_filename)\n current_key = None\n add_next_to_value = False\n+ additional_values = []\n for line in registry_lines:\n- line = line.rstrip(\"\\n\") # Remove trailing newlines\n-\n- if line.startswith(self.version_header):\n- self.version = int(line[len(self.version_header):])\n- continue\n-\n- if line.startswith(self.relative_to_header):\n- self.relative_to = line[len(self.relative_to_header):]\n- continue\n-\n- if line.startswith(\"#arch\"):\n- self.arch = line.split(\"=\")[1]\n- continue\n-\n- if line.startswith(\"[\"):\n- current_key = WineRegistryKey(key_def=line)\n- self.keys[current_key.name] = current_key\n- continue\n+ line = line.rstrip(\"\\n\")\n \n if current_key:\n if add_next_to_value:\n- current_key.add_to_last(line)\n- else:\n+ additional_values.append(line)\n+ elif not add_next_to_value:\n+ if additional_values:\n+ additional_values = '\\n'.join(additional_values)\n+ current_key.add_to_last(additional_values)\n+ additional_values = []\n current_key.parse(line)\n add_next_to_value = line.endswith(\"\\\\\")\n+ elif line.startswith(\"[\"):\n+ current_key = WineRegistryKey(key_def=line)\n+ self.keys[current_key.name] = current_key\n+ elif line.startswith(self.version_header):\n+ self.version = int(line[len(self.version_header):])\n+ elif line.startswith(self.relative_to_header):\n+ self.relative_to = line[len(self.relative_to_header):]\n+ elif line.startswith(\"#arch\"):\n+ self.arch = line.split(\"=\")[1]\n \n def render(self):\n content = \"{}{}\\n\".format(self.version_header, self.version)\n@@ -258,7 +255,7 @@\n self.subkeys[\"default\"] = value\n \n def add_to_last(self, line):\n- last_subkey = list(self.subkeys.keys())[-1]\n+ last_subkey = next(reversed(self.subkeys))\n self.subkeys[last_subkey] += \"\\n{}\".format(line)\n \n def render(self):\n", "issue": "Lutris takes extremely long time to parse very long string keys in registry\nIf user.reg contains very long (we a talking megabytes long here) string key it takes a very long time to start anything in this wineprefix with lutris. With ~15MB key lutris was taking ~30 minutes. It happens before starting wine, in \"preparing to start\" phase.\r\n\r\nP.S. If you ask how does this happens - Battletech ny HBS writes some analytics into string key in registry. 
And it only adds to it, never cleans.\n", "before_files": [{"content": "import os\nimport re\nfrom collections import OrderedDict\nfrom datetime import datetime\nfrom lutris.util.log import logger\nfrom lutris.util import system\nfrom lutris.util.wine.wine import WINE_DEFAULT_ARCH\n\n(\n REG_NONE,\n REG_SZ,\n REG_EXPAND_SZ,\n REG_BINARY,\n REG_DWORD,\n REG_DWORD_BIG_ENDIAN,\n REG_LINK,\n REG_MULTI_SZ,\n) = range(8)\n\nDATA_TYPES = {\n '\"': REG_SZ,\n 'str:\"': REG_SZ,\n 'str(2):\"': REG_EXPAND_SZ,\n 'str(7):\"': REG_MULTI_SZ,\n \"hex\": REG_BINARY,\n \"dword\": REG_DWORD,\n}\n\n\nclass WindowsFileTime:\n \"\"\"Utility class to deal with Windows FILETIME structures.\n\n See: https://msdn.microsoft.com/en-us/library/ms724284(v=vs.85).aspx\n \"\"\"\n\n ticks_per_seconds = 10000000 # 1 tick every 100 nanoseconds\n epoch_delta = 11644473600 # 3600 * 24 * ((1970 - 1601) * 365 + 89)\n\n def __init__(self, timestamp=None):\n self.timestamp = timestamp\n\n def __repr__(self):\n return \"<{}>: {}\".format(self.__class__.__name__, self.timestamp)\n\n @classmethod\n def from_hex(cls, hexvalue):\n timestamp = int(hexvalue, 16)\n return WindowsFileTime(timestamp)\n\n def to_hex(self):\n return \"{:x}\".format(self.timestamp)\n\n @classmethod\n def from_unix_timestamp(cls, timestamp):\n timestamp = timestamp + cls.epoch_delta\n timestamp = int(timestamp * cls.ticks_per_seconds)\n return WindowsFileTime(timestamp)\n\n def to_unix_timestamp(self):\n if not self.timestamp:\n raise ValueError(\"No timestamp set\")\n unix_ts = self.timestamp / self.ticks_per_seconds\n unix_ts = unix_ts - self.epoch_delta\n return unix_ts\n\n def to_date_time(self):\n return datetime.fromtimestamp(self.to_unix_timestamp())\n\n\nclass WineRegistry:\n version_header = \"WINE REGISTRY Version \"\n relative_to_header = \";; All keys relative to \"\n\n def __init__(self, reg_filename=None):\n self.arch = WINE_DEFAULT_ARCH\n self.version = 2\n self.relative_to = \"\\\\\\\\User\\\\\\\\S-1-5-21-0-0-0-1000\"\n self.keys = OrderedDict()\n self.reg_filename = reg_filename\n if reg_filename:\n if not system.path_exists(reg_filename):\n logger.error(\"Unexisting registry %s\", reg_filename)\n self.parse_reg_file(reg_filename)\n\n @property\n def prefix_path(self):\n \"\"\"Return the Wine prefix path (where the .reg files are located)\"\"\"\n if self.reg_filename:\n return os.path.dirname(self.reg_filename)\n\n @staticmethod\n def get_raw_registry(reg_filename):\n \"\"\"Return an array of the unprocessed contents of a registry file\"\"\"\n if not system.path_exists(reg_filename):\n return []\n with open(reg_filename, \"r\") as reg_file:\n\n try:\n registry_content = reg_file.readlines()\n except Exception: # pylint: disable=broad-except\n logger.exception(\n \"Failed to registry read %s, please send attach this file in a bug report\",\n reg_filename\n )\n registry_content = []\n return registry_content\n\n def parse_reg_file(self, reg_filename):\n registry_lines = self.get_raw_registry(reg_filename)\n current_key = None\n add_next_to_value = False\n for line in registry_lines:\n line = line.rstrip(\"\\n\") # Remove trailing newlines\n\n if line.startswith(self.version_header):\n self.version = int(line[len(self.version_header):])\n continue\n\n if line.startswith(self.relative_to_header):\n self.relative_to = line[len(self.relative_to_header):]\n continue\n\n if line.startswith(\"#arch\"):\n self.arch = line.split(\"=\")[1]\n continue\n\n if line.startswith(\"[\"):\n current_key = WineRegistryKey(key_def=line)\n self.keys[current_key.name] 
= current_key\n continue\n\n if current_key:\n if add_next_to_value:\n current_key.add_to_last(line)\n else:\n current_key.parse(line)\n add_next_to_value = line.endswith(\"\\\\\")\n\n def render(self):\n content = \"{}{}\\n\".format(self.version_header, self.version)\n content += \"{}{}\\n\\n\".format(self.relative_to_header, self.relative_to)\n content += \"#arch={}\\n\".format(self.arch)\n for key in self.keys:\n content += \"\\n\"\n content += self.keys[key].render()\n return content\n\n def save(self, path=None):\n \"\"\"Write the registry to a file\"\"\"\n if not path:\n path = self.reg_filename\n if not path:\n raise OSError(\"No filename provided\")\n with open(path, \"w\") as registry_file:\n registry_file.write(self.render())\n\n def query(self, path, subkey):\n key = self.keys.get(path)\n if key:\n return key.get_subkey(subkey)\n\n def set_value(self, path, subkey, value):\n key = self.keys.get(path)\n if not key:\n key = WineRegistryKey(path=path)\n self.keys[key.name] = key\n key.set_subkey(subkey, value)\n\n def clear_key(self, path):\n \"\"\"Removes all subkeys from a key\"\"\"\n key = self.keys.get(path)\n if not key:\n return\n key.subkeys.clear()\n\n def clear_subkeys(self, path, keys):\n \"\"\"Remove some subkeys from a key\"\"\"\n key = self.keys.get(path)\n if not key:\n return\n for subkey in list(key.subkeys.keys()):\n if subkey not in keys:\n continue\n key.subkeys.pop(subkey)\n\n def get_unix_path(self, windows_path):\n windows_path = windows_path.replace(\"\\\\\\\\\", \"/\")\n if not self.prefix_path:\n return\n drives_path = os.path.join(self.prefix_path, \"dosdevices\")\n if not system.path_exists(drives_path):\n return\n letter, relpath = windows_path.split(\":\", 1)\n relpath = relpath.strip(\"/\")\n drive_link = os.path.join(drives_path, letter.lower() + \":\")\n try:\n drive_path = os.readlink(drive_link)\n except FileNotFoundError:\n logger.error(\"Unable to read link for %s\", drive_link)\n return\n\n if not os.path.isabs(drive_path):\n drive_path = os.path.join(drives_path, drive_path)\n return os.path.join(drive_path, relpath)\n\n\nclass WineRegistryKey:\n def __init__(self, key_def=None, path=None):\n\n self.subkeys = OrderedDict()\n self.metas = OrderedDict()\n\n if path:\n # Key is created by path, it's a new key\n timestamp = datetime.now().timestamp()\n self.name = path\n self.raw_name = \"[{}]\".format(path.replace(\"/\", \"\\\\\\\\\"))\n self.raw_timestamp = \" \".join(str(timestamp).split(\".\"))\n\n windows_timestamp = WindowsFileTime.from_unix_timestamp(timestamp)\n self.metas[\"time\"] = windows_timestamp.to_hex()\n else:\n # Existing key loaded from file\n self.raw_name, self.raw_timestamp = re.split(\n re.compile(r\"(?<=[^\\\\]\\]) \"), key_def, maxsplit=1\n )\n self.name = self.raw_name.replace(\"\\\\\\\\\", \"/\").strip(\"[]\")\n\n # Parse timestamp either as int or float\n ts_parts = self.raw_timestamp.strip().split()\n if len(ts_parts) == 1:\n self.timestamp = int(ts_parts[0])\n else:\n self.timestamp = float(\"{}.{}\".format(ts_parts[0], ts_parts[1]))\n\n def __str__(self):\n return \"{0} {1}\".format(self.raw_name, self.raw_timestamp)\n\n def parse(self, line):\n \"\"\"Parse a registry line, populating meta and subkeys\"\"\"\n if len(line) < 4:\n # Line is too short, nothing to parse\n return\n\n if line.startswith(\"#\"):\n self.add_meta(line)\n elif line.startswith('\"'):\n try:\n key, value = re.split(re.compile(r\"(?<![^\\\\]\\\\\\\")=\"), line, maxsplit=1)\n except ValueError as ex:\n logger.error(\"Unable to parse line %s\", 
line)\n logger.exception(ex)\n return\n key = key[1:-1]\n self.subkeys[key] = value\n elif line.startswith(\"@\"):\n key, value = line.split(\"=\", 1)\n self.subkeys[\"default\"] = value\n\n def add_to_last(self, line):\n last_subkey = list(self.subkeys.keys())[-1]\n self.subkeys[last_subkey] += \"\\n{}\".format(line)\n\n def render(self):\n \"\"\"Return the content of the key in the wine .reg format\"\"\"\n content = self.raw_name + \" \" + self.raw_timestamp + \"\\n\"\n for key, value in self.metas.items():\n if value is None:\n content += \"#{}\\n\".format(key)\n else:\n content += \"#{}={}\\n\".format(key, value)\n for key, value in self.subkeys.items():\n if key == \"default\":\n key = \"@\"\n else:\n key = '\"{}\"'.format(key)\n content += \"{}={}\\n\".format(key, value)\n return content\n\n def render_value(self, value):\n if isinstance(value, int):\n return \"dword:{:08x}\".format(value)\n if isinstance(value, str):\n return '\"{}\"'.format(value)\n raise NotImplementedError(\"TODO\")\n\n def add_meta(self, meta_line):\n if not meta_line.startswith(\"#\"):\n raise ValueError(\"Key metas should start with '#'\")\n meta_line = meta_line[1:]\n parts = meta_line.split(\"=\")\n if len(parts) == 2:\n key = parts[0]\n value = parts[1]\n elif len(parts) == 1:\n key = parts[0]\n value = None\n else:\n raise ValueError(\"Invalid meta line '{}'\".format(meta_line))\n self.metas[key] = value\n\n def get_meta(self, name):\n return self.metas.get(name)\n\n def set_subkey(self, name, value):\n self.subkeys[name] = self.render_value(value)\n\n def get_subkey(self, name):\n if name not in self.subkeys:\n return None\n value = self.subkeys[name]\n if value.startswith('\"') and value.endswith('\"'):\n return value[1:-1]\n if value.startswith(\"dword:\"):\n return int(value[6:], 16)\n raise ValueError(\"Handle %s\" % value)\n", "path": "lutris/util/wine/registry.py"}]}
| 3,896 | 536 |
gh_patches_debug_33078
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-1641
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django Signals integration breaks on partial objects for python <3.10
### How do you use Sentry?
Self-hosted/on-premise
### Version
1.9.9
### Steps to Reproduce
1. Use python older than 3.10.
2. Register a partial function as a signal handler.
### Expected Result
Signal is traced correctly.
### Actual Result
An exception is raised from the `_get_receiver_name` function because `partial` objects don't have `__module__` before Python 3.10 (and even there it's undocumented, from what I can see).
It fails in our tests where we don't even register any signals so either Django itself or some kind of integration (Sentry?) registers such signals by default.
The whole signals integration is missing a `capture_internal_exceptions` context too I believe.
</issue>
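To make the failure mode concrete: `functools.partial` objects expose neither `__name__` nor `__qualname__`, so name-building code that assumes those attributes (or, per this report, `__module__` on older interpreters) needs a fallback. The snippet below is an illustrative sketch of such a defensive lookup, not the sentry-python fix itself:

```python
import functools


def handler(sender, **kwargs):
    pass


receiver = functools.partial(handler, extra="x")


def describe(obj):
    # Partial objects lack __name__/__qualname__, so fall back step by step.
    name = getattr(obj, "__qualname__", None) or getattr(obj, "__name__", None)
    if name is None:
        return str(obj)
    module = getattr(obj, "__module__", None)
    return "{}.{}".format(module, name) if module else name


print(describe(handler))   # __main__.handler
print(describe(receiver))  # falls back to str(), e.g. functools.partial(<function handler ...>, extra='x')
```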
<code>
[start of sentry_sdk/integrations/django/signals_handlers.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import
3
4 from django.dispatch import Signal
5
6 from sentry_sdk import Hub
7 from sentry_sdk._types import MYPY
8
9
10 if MYPY:
11 from typing import Any
12 from typing import Callable
13 from typing import List
14
15
16 def patch_signals():
17 # type: () -> None
18 """Patch django signal receivers to create a span"""
19
20 old_live_receivers = Signal._live_receivers
21
22 def _get_receiver_name(receiver):
23 # type: (Callable[..., Any]) -> str
24 name = receiver.__module__ + "."
25 if hasattr(receiver, "__name__"):
26 return name + receiver.__name__
27 return name + str(receiver)
28
29 def _sentry_live_receivers(self, sender):
30 # type: (Signal, Any) -> List[Callable[..., Any]]
31 hub = Hub.current
32 receivers = old_live_receivers(self, sender)
33
34 def sentry_receiver_wrapper(receiver):
35 # type: (Callable[..., Any]) -> Callable[..., Any]
36 def wrapper(*args, **kwargs):
37 # type: (Any, Any) -> Any
38 with hub.start_span(
39 op="django.signals",
40 description=_get_receiver_name(receiver),
41 ) as span:
42 span.set_data("signal", _get_receiver_name(receiver))
43 return receiver(*args, **kwargs)
44
45 return wrapper
46
47 for idx, receiver in enumerate(receivers):
48 receivers[idx] = sentry_receiver_wrapper(receiver)
49
50 return receivers
51
52 Signal._live_receivers = _sentry_live_receivers
53
[end of sentry_sdk/integrations/django/signals_handlers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/django/signals_handlers.py b/sentry_sdk/integrations/django/signals_handlers.py
--- a/sentry_sdk/integrations/django/signals_handlers.py
+++ b/sentry_sdk/integrations/django/signals_handlers.py
@@ -13,19 +13,32 @@
from typing import List
+def _get_receiver_name(receiver):
+ # type: (Callable[..., Any]) -> str
+ name = ""
+
+ if hasattr(receiver, "__qualname__"):
+ name += receiver.__qualname__
+ elif hasattr(receiver, "__name__"): # Python 2.7 has no __qualname__
+ name += receiver.__name__
+
+ if (
+ name == ""
+ ): # certain functions (like partials) dont have a name so return the string representation
+ return str(receiver)
+
+ if hasattr(receiver, "__module__"): # prepend with module, if there is one
+ name = receiver.__module__ + "." + name
+
+ return name
+
+
def patch_signals():
# type: () -> None
"""Patch django signal receivers to create a span"""
old_live_receivers = Signal._live_receivers
- def _get_receiver_name(receiver):
- # type: (Callable[..., Any]) -> str
- name = receiver.__module__ + "."
- if hasattr(receiver, "__name__"):
- return name + receiver.__name__
- return name + str(receiver)
-
def _sentry_live_receivers(self, sender):
# type: (Signal, Any) -> List[Callable[..., Any]]
hub = Hub.current
@@ -35,11 +48,12 @@
# type: (Callable[..., Any]) -> Callable[..., Any]
def wrapper(*args, **kwargs):
# type: (Any, Any) -> Any
+ signal_name = _get_receiver_name(receiver)
with hub.start_span(
op="django.signals",
- description=_get_receiver_name(receiver),
+ description=signal_name,
) as span:
- span.set_data("signal", _get_receiver_name(receiver))
+ span.set_data("signal", signal_name)
return receiver(*args, **kwargs)
return wrapper
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/django/signals_handlers.py b/sentry_sdk/integrations/django/signals_handlers.py\n--- a/sentry_sdk/integrations/django/signals_handlers.py\n+++ b/sentry_sdk/integrations/django/signals_handlers.py\n@@ -13,19 +13,32 @@\n from typing import List\n \n \n+def _get_receiver_name(receiver):\n+ # type: (Callable[..., Any]) -> str\n+ name = \"\"\n+\n+ if hasattr(receiver, \"__qualname__\"):\n+ name += receiver.__qualname__\n+ elif hasattr(receiver, \"__name__\"): # Python 2.7 has no __qualname__\n+ name += receiver.__name__\n+\n+ if (\n+ name == \"\"\n+ ): # certain functions (like partials) dont have a name so return the string representation\n+ return str(receiver)\n+\n+ if hasattr(receiver, \"__module__\"): # prepend with module, if there is one\n+ name = receiver.__module__ + \".\" + name\n+\n+ return name\n+\n+\n def patch_signals():\n # type: () -> None\n \"\"\"Patch django signal receivers to create a span\"\"\"\n \n old_live_receivers = Signal._live_receivers\n \n- def _get_receiver_name(receiver):\n- # type: (Callable[..., Any]) -> str\n- name = receiver.__module__ + \".\"\n- if hasattr(receiver, \"__name__\"):\n- return name + receiver.__name__\n- return name + str(receiver)\n-\n def _sentry_live_receivers(self, sender):\n # type: (Signal, Any) -> List[Callable[..., Any]]\n hub = Hub.current\n@@ -35,11 +48,12 @@\n # type: (Callable[..., Any]) -> Callable[..., Any]\n def wrapper(*args, **kwargs):\n # type: (Any, Any) -> Any\n+ signal_name = _get_receiver_name(receiver)\n with hub.start_span(\n op=\"django.signals\",\n- description=_get_receiver_name(receiver),\n+ description=signal_name,\n ) as span:\n- span.set_data(\"signal\", _get_receiver_name(receiver))\n+ span.set_data(\"signal\", signal_name)\n return receiver(*args, **kwargs)\n \n return wrapper\n", "issue": "Django Signals integration breaks on partial objects for python <3.10\n### How do you use Sentry?\n\nSelf-hosted/on-premise\n\n### Version\n\n1.9.9\n\n### Steps to Reproduce\n\n1. Use python older than 3.10.\r\n2. Register a partial function as a signal handler.\n\n### Expected Result\n\nSignal is traced correctly.\n\n### Actual Result\n\nException is raised from `_get_receiver_name` function as `partial` objects don't have `__module__` before python 3.10 (and even there it's undocumented from what I can see).\r\n\r\nIt fails in our tests where we don't even register any signals so either Django itself or some kind of integration (Sentry?) 
registers such signals by default.\r\n\r\nThe whole signals integration is missing a `capture_internal_exceptions` context too I believe.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\n\nfrom django.dispatch import Signal\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk._types import MYPY\n\n\nif MYPY:\n from typing import Any\n from typing import Callable\n from typing import List\n\n\ndef patch_signals():\n # type: () -> None\n \"\"\"Patch django signal receivers to create a span\"\"\"\n\n old_live_receivers = Signal._live_receivers\n\n def _get_receiver_name(receiver):\n # type: (Callable[..., Any]) -> str\n name = receiver.__module__ + \".\"\n if hasattr(receiver, \"__name__\"):\n return name + receiver.__name__\n return name + str(receiver)\n\n def _sentry_live_receivers(self, sender):\n # type: (Signal, Any) -> List[Callable[..., Any]]\n hub = Hub.current\n receivers = old_live_receivers(self, sender)\n\n def sentry_receiver_wrapper(receiver):\n # type: (Callable[..., Any]) -> Callable[..., Any]\n def wrapper(*args, **kwargs):\n # type: (Any, Any) -> Any\n with hub.start_span(\n op=\"django.signals\",\n description=_get_receiver_name(receiver),\n ) as span:\n span.set_data(\"signal\", _get_receiver_name(receiver))\n return receiver(*args, **kwargs)\n\n return wrapper\n\n for idx, receiver in enumerate(receivers):\n receivers[idx] = sentry_receiver_wrapper(receiver)\n\n return receivers\n\n Signal._live_receivers = _sentry_live_receivers\n", "path": "sentry_sdk/integrations/django/signals_handlers.py"}]}
| 1,171 | 506 |
gh_patches_debug_40637
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-20505
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deleted files cause `pants tailor` with `--changed-since` to fail
**Describe the bug**
I use the following command in CI to validate the monorepo, as recommended by the docs:
```shell
> pants \
--changed-since=origin/main \
tailor --check \
update-build-files --check \
lint
```
However, if I delete a package, including its `BUILD` file in a PR, the `--changed-since` flag causes `tailor` to try to run on those files, which `pants` blows up on:
```shell
16:40:57.91 [ERROR] 1 Exception encountered:
Engine traceback:
in `tailor` goal
IntrinsicError: Unmatched glob from `--changed-since`: "aws/projects/my_project_name/*"
Do the file(s) exist? If so, check if the file(s) are in your `.gitignore` or the global `pants_ignore` option, which may result in Pants not being able to see the file(s) even though they exist on disk. Refer to https://www.pantsbuild.org/v2.19/docs/troubleshooting#pants-cannot-find-a-file-in-your-project.
Exited with code exit status 1
```
If files are deleted, yes, they are changed, but they shouldn't throw an error.
**Pants version**
2.19.0
**OS**
Linux (CircleCI Ubuntu executor)
**Additional info**
N/A
</issue>
<code>
[start of src/python/pants/init/specs_calculator.py]
1 # Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import logging
5 from typing import cast
6
7 from pants.base.specs import AddressLiteralSpec, FileLiteralSpec, RawSpecs, Specs
8 from pants.base.specs_parser import SpecsParser
9 from pants.core.util_rules.environments import determine_bootstrap_environment
10 from pants.core.util_rules.system_binaries import GitBinary
11 from pants.engine.addresses import AddressInput
12 from pants.engine.environment import EnvironmentName
13 from pants.engine.internals.scheduler import SchedulerSession
14 from pants.engine.internals.selectors import Params
15 from pants.engine.rules import QueryRule
16 from pants.option.options import Options
17 from pants.option.options_bootstrapper import OptionsBootstrapper
18 from pants.util.frozendict import FrozenDict
19 from pants.vcs.changed import ChangedAddresses, ChangedOptions, ChangedRequest
20 from pants.vcs.git import GitWorktreeRequest, MaybeGitWorktree
21
22 logger = logging.getLogger(__name__)
23
24
25 class InvalidSpecConstraint(Exception):
26 """Raised when invalid constraints are given via specs and arguments like --changed*."""
27
28
29 def calculate_specs(
30 options_bootstrapper: OptionsBootstrapper,
31 options: Options,
32 session: SchedulerSession,
33 working_dir: str,
34 ) -> Specs:
35 """Determine the specs for a given Pants run."""
36 global_options = options.for_global_scope()
37 unmatched_cli_globs = global_options.unmatched_cli_globs
38 specs = SpecsParser(working_dir=working_dir).parse_specs(
39 options.specs,
40 description_of_origin="CLI arguments",
41 unmatched_glob_behavior=unmatched_cli_globs,
42 )
43
44 changed_options = ChangedOptions.from_options(options.for_scope("changed"))
45 logger.debug("specs are: %s", specs)
46 logger.debug("changed_options are: %s", changed_options)
47
48 if specs and changed_options.provided:
49 changed_name = "--changed-since" if changed_options.since else "--changed-diffspec"
50 specs_description = specs.arguments_provided_description()
51 assert specs_description is not None
52 raise InvalidSpecConstraint(
53 f"You used `{changed_name}` at the same time as using {specs_description}. You can "
54 f"only use `{changed_name}` or use normal arguments."
55 )
56
57 if not changed_options.provided:
58 return specs
59
60 bootstrap_environment = determine_bootstrap_environment(session)
61
62 (git_binary,) = session.product_request(GitBinary, [Params(bootstrap_environment)])
63 (maybe_git_worktree,) = session.product_request(
64 MaybeGitWorktree, [Params(GitWorktreeRequest(), git_binary, bootstrap_environment)]
65 )
66 if not maybe_git_worktree.git_worktree:
67 raise InvalidSpecConstraint(
68 "The `--changed-*` options are only available if Git is used for the repository."
69 )
70
71 changed_files = tuple(changed_options.changed_files(maybe_git_worktree.git_worktree))
72 file_literal_specs = tuple(FileLiteralSpec(f) for f in changed_files)
73
74 changed_request = ChangedRequest(changed_files, changed_options.dependents)
75 (changed_addresses,) = session.product_request(
76 ChangedAddresses,
77 [Params(changed_request, options_bootstrapper, bootstrap_environment)],
78 )
79 logger.debug("changed addresses: %s", changed_addresses)
80
81 address_literal_specs = []
82 for address in cast(ChangedAddresses, changed_addresses):
83 address_input = AddressInput.parse(address.spec, description_of_origin="`--changed-since`")
84 address_literal_specs.append(
85 AddressLiteralSpec(
86 path_component=address_input.path_component,
87 target_component=address_input.target_component,
88 generated_component=address_input.generated_component,
89 parameters=FrozenDict(address_input.parameters),
90 )
91 )
92
93 return Specs(
94 includes=RawSpecs(
95 # We need both address_literals and file_literals to cover all our edge cases, including
96 # target-aware vs. target-less goals, e.g. `list` vs `count-loc`.
97 address_literals=tuple(address_literal_specs),
98 file_literals=file_literal_specs,
99 unmatched_glob_behavior=unmatched_cli_globs,
100 filter_by_global_options=True,
101 from_change_detection=True,
102 description_of_origin="`--changed-since`",
103 ),
104 ignores=RawSpecs(description_of_origin="`--changed-since`"),
105 )
106
107
108 def rules():
109 return [
110 QueryRule(ChangedAddresses, [ChangedRequest, EnvironmentName]),
111 QueryRule(GitBinary, [EnvironmentName]),
112 QueryRule(MaybeGitWorktree, [GitWorktreeRequest, GitBinary, EnvironmentName]),
113 ]
114
[end of src/python/pants/init/specs_calculator.py]
[start of src/python/pants/vcs/git.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import dataclasses
7 import logging
8 import os
9 from dataclasses import dataclass
10 from os import PathLike
11 from pathlib import Path, PurePath
12 from typing import Any, Iterable
13
14 from pants.core.util_rules.system_binaries import GitBinary, GitBinaryException, MaybeGitBinary
15 from pants.engine.engine_aware import EngineAwareReturnType
16 from pants.engine.rules import collect_rules, rule
17 from pants.util.contextutil import pushd
18
19 logger = logging.getLogger(__name__)
20
21
22 class GitWorktree(EngineAwareReturnType):
23 """Implements a safe wrapper for un-sandboxed access to Git in the user's working copy.
24
25 This type (and any wrappers) should be marked `EngineAwareReturnType.cacheable=False`, because
26 it internally uses un-sandboxed APIs, and `@rules` which produce it should re-run in each
27 session. It additionally implements a default `__eq__` in order to prevent early-cutoff in the
28 graph, and force any consumers of the type to re-run.
29 """
30
31 worktree: PurePath
32 _gitdir: PurePath
33 _git_binary: GitBinary
34
35 def __init__(
36 self,
37 binary: GitBinary,
38 worktree: PathLike[str] | None = None,
39 gitdir: PathLike[str] | None = None,
40 ) -> None:
41 """Creates a git object that assumes the git repository is in the cwd by default.
42
43 binary: The git binary to use.
44 worktree: The path to the git repository working tree directory (typically '.').
45 gitdir: The path to the repository's git metadata directory (typically '.git').
46 """
47 self.worktree = Path(worktree or os.getcwd()).resolve()
48 self._gitdir = Path(gitdir).resolve() if gitdir else (self.worktree / ".git")
49 self._git_binary = binary
50
51 def cacheable(self) -> bool:
52 return False
53
54 @property
55 def current_rev_identifier(self):
56 return "HEAD"
57
58 @property
59 def commit_id(self):
60 return self._git_binary._invoke_unsandboxed(self._create_git_cmdline(["rev-parse", "HEAD"]))
61
62 @property
63 def branch_name(self) -> str | None:
64 branch = self._git_binary._invoke_unsandboxed(
65 self._create_git_cmdline(["rev-parse", "--abbrev-ref", "HEAD"])
66 )
67 return None if branch == "HEAD" else branch
68
69 def _fix_git_relative_path(self, worktree_path: str, relative_to: PurePath | str) -> str:
70 return str((self.worktree / worktree_path).relative_to(relative_to))
71
72 def changed_files(
73 self,
74 from_commit: str | None = None,
75 include_untracked: bool = False,
76 relative_to: PurePath | str | None = None,
77 ) -> set[str]:
78 relative_to = PurePath(relative_to) if relative_to is not None else self.worktree
79 rel_suffix = ["--", str(relative_to)]
80 uncommitted_changes = self._git_binary._invoke_unsandboxed(
81 self._create_git_cmdline(
82 ["diff", "--name-only", "HEAD"] + rel_suffix,
83 )
84 )
85
86 files = set(uncommitted_changes.splitlines())
87 if from_commit:
88 # Grab the diff from the merge-base to HEAD using ... syntax. This ensures we have just
89 # the changes that have occurred on the current branch.
90 committed_cmd = ["diff", "--name-only", from_commit + "...HEAD"] + rel_suffix
91 committed_changes = self._git_binary._invoke_unsandboxed(
92 self._create_git_cmdline(committed_cmd)
93 )
94 files.update(committed_changes.split())
95 if include_untracked:
96 untracked_cmd = [
97 "ls-files",
98 "--other",
99 "--exclude-standard",
100 "--full-name",
101 ] + rel_suffix
102 untracked = self._git_binary._invoke_unsandboxed(
103 self._create_git_cmdline(untracked_cmd)
104 )
105 files.update(untracked.split())
106 # git will report changed files relative to the worktree: re-relativize to relative_to
107 return {self._fix_git_relative_path(f, relative_to) for f in files}
108
109 def changes_in(self, diffspec: str, relative_to: PurePath | str | None = None) -> set[str]:
110 relative_to = PurePath(relative_to) if relative_to is not None else self.worktree
111 cmd = ["diff-tree", "--no-commit-id", "--name-only", "-r", diffspec]
112 files = self._git_binary._invoke_unsandboxed(self._create_git_cmdline(cmd)).split()
113 return {self._fix_git_relative_path(f.strip(), relative_to) for f in files}
114
115 def _create_git_cmdline(self, args: Iterable[str]) -> list[str]:
116 return [f"--git-dir={self._gitdir}", f"--work-tree={self.worktree}", *args]
117
118 def __eq__(self, other: Any) -> bool:
119 # NB: See the class doc regarding equality.
120 return id(self) == id(other)
121
122
123 @dataclass(frozen=True)
124 class MaybeGitWorktree(EngineAwareReturnType):
125 git_worktree: GitWorktree | None = None
126
127 def cacheable(self) -> bool:
128 return False
129
130
131 @dataclasses.dataclass(frozen=True)
132 class GitWorktreeRequest:
133 gitdir: PathLike[str] | None = None
134 subdir: PathLike[str] | None = None
135
136
137 @rule
138 async def get_git_worktree(
139 git_worktree_request: GitWorktreeRequest,
140 maybe_git_binary: MaybeGitBinary,
141 ) -> MaybeGitWorktree:
142 if not maybe_git_binary.git_binary:
143 return MaybeGitWorktree()
144
145 git_binary = maybe_git_binary.git_binary
146 cmd = ["rev-parse", "--show-toplevel"]
147
148 try:
149 if git_worktree_request.subdir:
150 with pushd(str(git_worktree_request.subdir)):
151 output = git_binary._invoke_unsandboxed(cmd)
152 else:
153 output = git_binary._invoke_unsandboxed(cmd)
154 except GitBinaryException as e:
155 logger.info(f"No git repository at {os.getcwd()}: {e!r}")
156 return MaybeGitWorktree()
157
158 git_worktree = GitWorktree(
159 binary=git_binary,
160 gitdir=git_worktree_request.gitdir,
161 worktree=PurePath(output),
162 )
163
164 logger.debug(
165 f"Detected git repository at {git_worktree.worktree} on branch {git_worktree.branch_name}"
166 )
167 return MaybeGitWorktree(git_worktree=git_worktree)
168
169
170 def rules():
171 return [*collect_rules()]
172
[end of src/python/pants/vcs/git.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/init/specs_calculator.py b/src/python/pants/init/specs_calculator.py
--- a/src/python/pants/init/specs_calculator.py
+++ b/src/python/pants/init/specs_calculator.py
@@ -4,6 +4,7 @@
import logging
from typing import cast
+from pants.base.glob_match_error_behavior import GlobMatchErrorBehavior
from pants.base.specs import AddressLiteralSpec, FileLiteralSpec, RawSpecs, Specs
from pants.base.specs_parser import SpecsParser
from pants.core.util_rules.environments import determine_bootstrap_environment
@@ -96,7 +97,11 @@
# target-aware vs. target-less goals, e.g. `list` vs `count-loc`.
address_literals=tuple(address_literal_specs),
file_literals=file_literal_specs,
- unmatched_glob_behavior=unmatched_cli_globs,
+ # The globs here are synthesized from VCS data by the `changed` mechanism.
+ # As such it does not make sense to apply user-facing matching errors to them.
+ # In particular, they can legitimately not match anything, if entire git
+ # subtrees were deleted for example.
+ unmatched_glob_behavior=GlobMatchErrorBehavior.ignore,
filter_by_global_options=True,
from_change_detection=True,
description_of_origin="`--changed-since`",
diff --git a/src/python/pants/vcs/git.py b/src/python/pants/vcs/git.py
--- a/src/python/pants/vcs/git.py
+++ b/src/python/pants/vcs/git.py
@@ -91,7 +91,7 @@
committed_changes = self._git_binary._invoke_unsandboxed(
self._create_git_cmdline(committed_cmd)
)
- files.update(committed_changes.split())
+ files.update(committed_changes.splitlines())
if include_untracked:
untracked_cmd = [
"ls-files",
@@ -102,14 +102,14 @@
untracked = self._git_binary._invoke_unsandboxed(
self._create_git_cmdline(untracked_cmd)
)
- files.update(untracked.split())
+ files.update(untracked.splitlines())
# git will report changed files relative to the worktree: re-relativize to relative_to
return {self._fix_git_relative_path(f, relative_to) for f in files}
def changes_in(self, diffspec: str, relative_to: PurePath | str | None = None) -> set[str]:
relative_to = PurePath(relative_to) if relative_to is not None else self.worktree
cmd = ["diff-tree", "--no-commit-id", "--name-only", "-r", diffspec]
- files = self._git_binary._invoke_unsandboxed(self._create_git_cmdline(cmd)).split()
+ files = self._git_binary._invoke_unsandboxed(self._create_git_cmdline(cmd)).splitlines()
return {self._fix_git_relative_path(f.strip(), relative_to) for f in files}
def _create_git_cmdline(self, args: Iterable[str]) -> list[str]:
|
{"golden_diff": "diff --git a/src/python/pants/init/specs_calculator.py b/src/python/pants/init/specs_calculator.py\n--- a/src/python/pants/init/specs_calculator.py\n+++ b/src/python/pants/init/specs_calculator.py\n@@ -4,6 +4,7 @@\n import logging\n from typing import cast\n \n+from pants.base.glob_match_error_behavior import GlobMatchErrorBehavior\n from pants.base.specs import AddressLiteralSpec, FileLiteralSpec, RawSpecs, Specs\n from pants.base.specs_parser import SpecsParser\n from pants.core.util_rules.environments import determine_bootstrap_environment\n@@ -96,7 +97,11 @@\n # target-aware vs. target-less goals, e.g. `list` vs `count-loc`.\n address_literals=tuple(address_literal_specs),\n file_literals=file_literal_specs,\n- unmatched_glob_behavior=unmatched_cli_globs,\n+ # The globs here are synthesized from VCS data by the `changed` mechanism.\n+ # As such it does not make sense to apply user-facing matching errors to them.\n+ # In particular, they can legitimately not match anything, if entire git\n+ # subtrees were deleted for example.\n+ unmatched_glob_behavior=GlobMatchErrorBehavior.ignore,\n filter_by_global_options=True,\n from_change_detection=True,\n description_of_origin=\"`--changed-since`\",\ndiff --git a/src/python/pants/vcs/git.py b/src/python/pants/vcs/git.py\n--- a/src/python/pants/vcs/git.py\n+++ b/src/python/pants/vcs/git.py\n@@ -91,7 +91,7 @@\n committed_changes = self._git_binary._invoke_unsandboxed(\n self._create_git_cmdline(committed_cmd)\n )\n- files.update(committed_changes.split())\n+ files.update(committed_changes.splitlines())\n if include_untracked:\n untracked_cmd = [\n \"ls-files\",\n@@ -102,14 +102,14 @@\n untracked = self._git_binary._invoke_unsandboxed(\n self._create_git_cmdline(untracked_cmd)\n )\n- files.update(untracked.split())\n+ files.update(untracked.splitlines())\n # git will report changed files relative to the worktree: re-relativize to relative_to\n return {self._fix_git_relative_path(f, relative_to) for f in files}\n \n def changes_in(self, diffspec: str, relative_to: PurePath | str | None = None) -> set[str]:\n relative_to = PurePath(relative_to) if relative_to is not None else self.worktree\n cmd = [\"diff-tree\", \"--no-commit-id\", \"--name-only\", \"-r\", diffspec]\n- files = self._git_binary._invoke_unsandboxed(self._create_git_cmdline(cmd)).split()\n+ files = self._git_binary._invoke_unsandboxed(self._create_git_cmdline(cmd)).splitlines()\n return {self._fix_git_relative_path(f.strip(), relative_to) for f in files}\n \n def _create_git_cmdline(self, args: Iterable[str]) -> list[str]:\n", "issue": "Deleted files cause `pants tailor` with `--changed-since` to fail\n**Describe the bug**\r\n\r\nI use the following command in CI to validate the monorepo, as recommended by the docs:\r\n\r\n```shell\r\n> pants \\\r\n\t --changed-since=origin/main \\\r\n\t tailor --check \\\r\n\t update-build-files --check \\\r\n\t lint\r\n```\r\n\r\nHowever, if I delete a package, including its `BUILD` file in a PR, the `--changed-since` flag causes `tailor` to try to run on those files, which `pants` blows up on:\r\n\r\n```shell\r\n16:40:57.91 [ERROR] 1 Exception encountered:\r\n\r\nEngine traceback:\r\n in `tailor` goal\r\n\r\nIntrinsicError: Unmatched glob from `--changed-since`: \"aws/projects/my_project_name/*\"\r\n\r\nDo the file(s) exist? If so, check if the file(s) are in your `.gitignore` or the global `pants_ignore` option, which may result in Pants not being able to see the file(s) even though they exist on disk. 
Refer to https://www.pantsbuild.org/v2.19/docs/troubleshooting#pants-cannot-find-a-file-in-your-project.\r\n\r\n\r\nExited with code exit status 1\r\n```\r\n\r\nIf files are deleted, yes, they are changed, but they shouldn't throw an error.\r\n\r\n**Pants version**\r\n\r\n2.19.0\r\n\r\n**OS**\r\n\r\nLinux (CircleCI Ubuntu executor)\r\n\r\n**Additional info**\r\n\r\nN/A\r\n\n", "before_files": [{"content": "# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport logging\nfrom typing import cast\n\nfrom pants.base.specs import AddressLiteralSpec, FileLiteralSpec, RawSpecs, Specs\nfrom pants.base.specs_parser import SpecsParser\nfrom pants.core.util_rules.environments import determine_bootstrap_environment\nfrom pants.core.util_rules.system_binaries import GitBinary\nfrom pants.engine.addresses import AddressInput\nfrom pants.engine.environment import EnvironmentName\nfrom pants.engine.internals.scheduler import SchedulerSession\nfrom pants.engine.internals.selectors import Params\nfrom pants.engine.rules import QueryRule\nfrom pants.option.options import Options\nfrom pants.option.options_bootstrapper import OptionsBootstrapper\nfrom pants.util.frozendict import FrozenDict\nfrom pants.vcs.changed import ChangedAddresses, ChangedOptions, ChangedRequest\nfrom pants.vcs.git import GitWorktreeRequest, MaybeGitWorktree\n\nlogger = logging.getLogger(__name__)\n\n\nclass InvalidSpecConstraint(Exception):\n \"\"\"Raised when invalid constraints are given via specs and arguments like --changed*.\"\"\"\n\n\ndef calculate_specs(\n options_bootstrapper: OptionsBootstrapper,\n options: Options,\n session: SchedulerSession,\n working_dir: str,\n) -> Specs:\n \"\"\"Determine the specs for a given Pants run.\"\"\"\n global_options = options.for_global_scope()\n unmatched_cli_globs = global_options.unmatched_cli_globs\n specs = SpecsParser(working_dir=working_dir).parse_specs(\n options.specs,\n description_of_origin=\"CLI arguments\",\n unmatched_glob_behavior=unmatched_cli_globs,\n )\n\n changed_options = ChangedOptions.from_options(options.for_scope(\"changed\"))\n logger.debug(\"specs are: %s\", specs)\n logger.debug(\"changed_options are: %s\", changed_options)\n\n if specs and changed_options.provided:\n changed_name = \"--changed-since\" if changed_options.since else \"--changed-diffspec\"\n specs_description = specs.arguments_provided_description()\n assert specs_description is not None\n raise InvalidSpecConstraint(\n f\"You used `{changed_name}` at the same time as using {specs_description}. 
You can \"\n f\"only use `{changed_name}` or use normal arguments.\"\n )\n\n if not changed_options.provided:\n return specs\n\n bootstrap_environment = determine_bootstrap_environment(session)\n\n (git_binary,) = session.product_request(GitBinary, [Params(bootstrap_environment)])\n (maybe_git_worktree,) = session.product_request(\n MaybeGitWorktree, [Params(GitWorktreeRequest(), git_binary, bootstrap_environment)]\n )\n if not maybe_git_worktree.git_worktree:\n raise InvalidSpecConstraint(\n \"The `--changed-*` options are only available if Git is used for the repository.\"\n )\n\n changed_files = tuple(changed_options.changed_files(maybe_git_worktree.git_worktree))\n file_literal_specs = tuple(FileLiteralSpec(f) for f in changed_files)\n\n changed_request = ChangedRequest(changed_files, changed_options.dependents)\n (changed_addresses,) = session.product_request(\n ChangedAddresses,\n [Params(changed_request, options_bootstrapper, bootstrap_environment)],\n )\n logger.debug(\"changed addresses: %s\", changed_addresses)\n\n address_literal_specs = []\n for address in cast(ChangedAddresses, changed_addresses):\n address_input = AddressInput.parse(address.spec, description_of_origin=\"`--changed-since`\")\n address_literal_specs.append(\n AddressLiteralSpec(\n path_component=address_input.path_component,\n target_component=address_input.target_component,\n generated_component=address_input.generated_component,\n parameters=FrozenDict(address_input.parameters),\n )\n )\n\n return Specs(\n includes=RawSpecs(\n # We need both address_literals and file_literals to cover all our edge cases, including\n # target-aware vs. target-less goals, e.g. `list` vs `count-loc`.\n address_literals=tuple(address_literal_specs),\n file_literals=file_literal_specs,\n unmatched_glob_behavior=unmatched_cli_globs,\n filter_by_global_options=True,\n from_change_detection=True,\n description_of_origin=\"`--changed-since`\",\n ),\n ignores=RawSpecs(description_of_origin=\"`--changed-since`\"),\n )\n\n\ndef rules():\n return [\n QueryRule(ChangedAddresses, [ChangedRequest, EnvironmentName]),\n QueryRule(GitBinary, [EnvironmentName]),\n QueryRule(MaybeGitWorktree, [GitWorktreeRequest, GitBinary, EnvironmentName]),\n ]\n", "path": "src/python/pants/init/specs_calculator.py"}, {"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport dataclasses\nimport logging\nimport os\nfrom dataclasses import dataclass\nfrom os import PathLike\nfrom pathlib import Path, PurePath\nfrom typing import Any, Iterable\n\nfrom pants.core.util_rules.system_binaries import GitBinary, GitBinaryException, MaybeGitBinary\nfrom pants.engine.engine_aware import EngineAwareReturnType\nfrom pants.engine.rules import collect_rules, rule\nfrom pants.util.contextutil import pushd\n\nlogger = logging.getLogger(__name__)\n\n\nclass GitWorktree(EngineAwareReturnType):\n \"\"\"Implements a safe wrapper for un-sandboxed access to Git in the user's working copy.\n\n This type (and any wrappers) should be marked `EngineAwareReturnType.cacheable=False`, because\n it internally uses un-sandboxed APIs, and `@rules` which produce it should re-run in each\n session. 
It additionally implements a default `__eq__` in order to prevent early-cutoff in the\n graph, and force any consumers of the type to re-run.\n \"\"\"\n\n worktree: PurePath\n _gitdir: PurePath\n _git_binary: GitBinary\n\n def __init__(\n self,\n binary: GitBinary,\n worktree: PathLike[str] | None = None,\n gitdir: PathLike[str] | None = None,\n ) -> None:\n \"\"\"Creates a git object that assumes the git repository is in the cwd by default.\n\n binary: The git binary to use.\n worktree: The path to the git repository working tree directory (typically '.').\n gitdir: The path to the repository's git metadata directory (typically '.git').\n \"\"\"\n self.worktree = Path(worktree or os.getcwd()).resolve()\n self._gitdir = Path(gitdir).resolve() if gitdir else (self.worktree / \".git\")\n self._git_binary = binary\n\n def cacheable(self) -> bool:\n return False\n\n @property\n def current_rev_identifier(self):\n return \"HEAD\"\n\n @property\n def commit_id(self):\n return self._git_binary._invoke_unsandboxed(self._create_git_cmdline([\"rev-parse\", \"HEAD\"]))\n\n @property\n def branch_name(self) -> str | None:\n branch = self._git_binary._invoke_unsandboxed(\n self._create_git_cmdline([\"rev-parse\", \"--abbrev-ref\", \"HEAD\"])\n )\n return None if branch == \"HEAD\" else branch\n\n def _fix_git_relative_path(self, worktree_path: str, relative_to: PurePath | str) -> str:\n return str((self.worktree / worktree_path).relative_to(relative_to))\n\n def changed_files(\n self,\n from_commit: str | None = None,\n include_untracked: bool = False,\n relative_to: PurePath | str | None = None,\n ) -> set[str]:\n relative_to = PurePath(relative_to) if relative_to is not None else self.worktree\n rel_suffix = [\"--\", str(relative_to)]\n uncommitted_changes = self._git_binary._invoke_unsandboxed(\n self._create_git_cmdline(\n [\"diff\", \"--name-only\", \"HEAD\"] + rel_suffix,\n )\n )\n\n files = set(uncommitted_changes.splitlines())\n if from_commit:\n # Grab the diff from the merge-base to HEAD using ... syntax. 
This ensures we have just\n # the changes that have occurred on the current branch.\n committed_cmd = [\"diff\", \"--name-only\", from_commit + \"...HEAD\"] + rel_suffix\n committed_changes = self._git_binary._invoke_unsandboxed(\n self._create_git_cmdline(committed_cmd)\n )\n files.update(committed_changes.split())\n if include_untracked:\n untracked_cmd = [\n \"ls-files\",\n \"--other\",\n \"--exclude-standard\",\n \"--full-name\",\n ] + rel_suffix\n untracked = self._git_binary._invoke_unsandboxed(\n self._create_git_cmdline(untracked_cmd)\n )\n files.update(untracked.split())\n # git will report changed files relative to the worktree: re-relativize to relative_to\n return {self._fix_git_relative_path(f, relative_to) for f in files}\n\n def changes_in(self, diffspec: str, relative_to: PurePath | str | None = None) -> set[str]:\n relative_to = PurePath(relative_to) if relative_to is not None else self.worktree\n cmd = [\"diff-tree\", \"--no-commit-id\", \"--name-only\", \"-r\", diffspec]\n files = self._git_binary._invoke_unsandboxed(self._create_git_cmdline(cmd)).split()\n return {self._fix_git_relative_path(f.strip(), relative_to) for f in files}\n\n def _create_git_cmdline(self, args: Iterable[str]) -> list[str]:\n return [f\"--git-dir={self._gitdir}\", f\"--work-tree={self.worktree}\", *args]\n\n def __eq__(self, other: Any) -> bool:\n # NB: See the class doc regarding equality.\n return id(self) == id(other)\n\n\n@dataclass(frozen=True)\nclass MaybeGitWorktree(EngineAwareReturnType):\n git_worktree: GitWorktree | None = None\n\n def cacheable(self) -> bool:\n return False\n\n\[email protected](frozen=True)\nclass GitWorktreeRequest:\n gitdir: PathLike[str] | None = None\n subdir: PathLike[str] | None = None\n\n\n@rule\nasync def get_git_worktree(\n git_worktree_request: GitWorktreeRequest,\n maybe_git_binary: MaybeGitBinary,\n) -> MaybeGitWorktree:\n if not maybe_git_binary.git_binary:\n return MaybeGitWorktree()\n\n git_binary = maybe_git_binary.git_binary\n cmd = [\"rev-parse\", \"--show-toplevel\"]\n\n try:\n if git_worktree_request.subdir:\n with pushd(str(git_worktree_request.subdir)):\n output = git_binary._invoke_unsandboxed(cmd)\n else:\n output = git_binary._invoke_unsandboxed(cmd)\n except GitBinaryException as e:\n logger.info(f\"No git repository at {os.getcwd()}: {e!r}\")\n return MaybeGitWorktree()\n\n git_worktree = GitWorktree(\n binary=git_binary,\n gitdir=git_worktree_request.gitdir,\n worktree=PurePath(output),\n )\n\n logger.debug(\n f\"Detected git repository at {git_worktree.worktree} on branch {git_worktree.branch_name}\"\n )\n return MaybeGitWorktree(git_worktree=git_worktree)\n\n\ndef rules():\n return [*collect_rules()]\n", "path": "src/python/pants/vcs/git.py"}]}
| 3,985 | 667 |
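Two independent changes make up the pants diff in this row: synthesized `--changed-since` globs are now matched with `GlobMatchErrorBehavior.ignore`, so paths deleted from the repo no longer trigger the "Unmatched glob" error, and the git-output parsing switches from `str.split()` to `str.splitlines()`. The second change matters because `split()` with no argument breaks on *any* whitespace, so a tracked path containing a space would be reported as two phantom files. A tiny illustrative check (hypothetical paths, not taken from the pants test suite):

```python
git_output = "aws/projects/app/BUILD\ndocs/release notes.md\n"

print(git_output.split())
# ['aws/projects/app/BUILD', 'docs/release', 'notes.md']   <- path with a space is mangled

print(git_output.splitlines())
# ['aws/projects/app/BUILD', 'docs/release notes.md']      <- one entry per reported file
```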
gh_patches_debug_22089 | rasdani/github-patches | git_diff | pypa__pip-12578 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
-vv is not passed to build env install subprocesses
### Description
While using `-vv` I noticed seeing a big pause between these two output lines:
`
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
`
Clearly a lot of stuff was happening - like wheel building - but there was no output
It turns out that when -vv was introduced in #9450 this higher verbosity level was not passed onto these subprocesses
### Expected behavior
_No response_
### pip version
24.0
### Python version
3.9
### OS
RHEL
### How to Reproduce
Compare the logging output from
```
rm -rf ~/.cache/pip && rm -f *.whl && pip -vv wheel --no-binary :all: hatchling
```
before and after the patch. I'm seeing 1k lines before and 12k lines after.
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
</issue>
<code>
[start of src/pip/_internal/build_env.py]
1 """Build Environment used for isolation during sdist building
2 """
3
4 import logging
5 import os
6 import pathlib
7 import site
8 import sys
9 import textwrap
10 from collections import OrderedDict
11 from types import TracebackType
12 from typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple, Type, Union
13
14 from pip._vendor.certifi import where
15 from pip._vendor.packaging.requirements import Requirement
16 from pip._vendor.packaging.version import Version
17
18 from pip import __file__ as pip_location
19 from pip._internal.cli.spinners import open_spinner
20 from pip._internal.locations import get_platlib, get_purelib, get_scheme
21 from pip._internal.metadata import get_default_environment, get_environment
22 from pip._internal.utils.subprocess import call_subprocess
23 from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds
24
25 if TYPE_CHECKING:
26 from pip._internal.index.package_finder import PackageFinder
27
28 logger = logging.getLogger(__name__)
29
30
31 def _dedup(a: str, b: str) -> Union[Tuple[str], Tuple[str, str]]:
32 return (a, b) if a != b else (a,)
33
34
35 class _Prefix:
36 def __init__(self, path: str) -> None:
37 self.path = path
38 self.setup = False
39 scheme = get_scheme("", prefix=path)
40 self.bin_dir = scheme.scripts
41 self.lib_dirs = _dedup(scheme.purelib, scheme.platlib)
42
43
44 def get_runnable_pip() -> str:
45 """Get a file to pass to a Python executable, to run the currently-running pip.
46
47 This is used to run a pip subprocess, for installing requirements into the build
48 environment.
49 """
50 source = pathlib.Path(pip_location).resolve().parent
51
52 if not source.is_dir():
53 # This would happen if someone is using pip from inside a zip file. In that
54 # case, we can use that directly.
55 return str(source)
56
57 return os.fsdecode(source / "__pip-runner__.py")
58
59
60 def _get_system_sitepackages() -> Set[str]:
61 """Get system site packages
62
63 Usually from site.getsitepackages,
64 but fallback on `get_purelib()/get_platlib()` if unavailable
65 (e.g. in a virtualenv created by virtualenv<20)
66
67 Returns normalized set of strings.
68 """
69 if hasattr(site, "getsitepackages"):
70 system_sites = site.getsitepackages()
71 else:
72 # virtualenv < 20 overwrites site.py without getsitepackages
73 # fallback on get_purelib/get_platlib.
74 # this is known to miss things, but shouldn't in the cases
75 # where getsitepackages() has been removed (inside a virtualenv)
76 system_sites = [get_purelib(), get_platlib()]
77 return {os.path.normcase(path) for path in system_sites}
78
79
80 class BuildEnvironment:
81 """Creates and manages an isolated environment to install build deps"""
82
83 def __init__(self) -> None:
84 temp_dir = TempDirectory(kind=tempdir_kinds.BUILD_ENV, globally_managed=True)
85
86 self._prefixes = OrderedDict(
87 (name, _Prefix(os.path.join(temp_dir.path, name)))
88 for name in ("normal", "overlay")
89 )
90
91 self._bin_dirs: List[str] = []
92 self._lib_dirs: List[str] = []
93 for prefix in reversed(list(self._prefixes.values())):
94 self._bin_dirs.append(prefix.bin_dir)
95 self._lib_dirs.extend(prefix.lib_dirs)
96
97 # Customize site to:
98 # - ensure .pth files are honored
99 # - prevent access to system site packages
100 system_sites = _get_system_sitepackages()
101
102 self._site_dir = os.path.join(temp_dir.path, "site")
103 if not os.path.exists(self._site_dir):
104 os.mkdir(self._site_dir)
105 with open(
106 os.path.join(self._site_dir, "sitecustomize.py"), "w", encoding="utf-8"
107 ) as fp:
108 fp.write(
109 textwrap.dedent(
110 """
111 import os, site, sys
112
113 # First, drop system-sites related paths.
114 original_sys_path = sys.path[:]
115 known_paths = set()
116 for path in {system_sites!r}:
117 site.addsitedir(path, known_paths=known_paths)
118 system_paths = set(
119 os.path.normcase(path)
120 for path in sys.path[len(original_sys_path):]
121 )
122 original_sys_path = [
123 path for path in original_sys_path
124 if os.path.normcase(path) not in system_paths
125 ]
126 sys.path = original_sys_path
127
128 # Second, add lib directories.
129 # ensuring .pth file are processed.
130 for path in {lib_dirs!r}:
131 assert not path in sys.path
132 site.addsitedir(path)
133 """
134 ).format(system_sites=system_sites, lib_dirs=self._lib_dirs)
135 )
136
137 def __enter__(self) -> None:
138 self._save_env = {
139 name: os.environ.get(name, None)
140 for name in ("PATH", "PYTHONNOUSERSITE", "PYTHONPATH")
141 }
142
143 path = self._bin_dirs[:]
144 old_path = self._save_env["PATH"]
145 if old_path:
146 path.extend(old_path.split(os.pathsep))
147
148 pythonpath = [self._site_dir]
149
150 os.environ.update(
151 {
152 "PATH": os.pathsep.join(path),
153 "PYTHONNOUSERSITE": "1",
154 "PYTHONPATH": os.pathsep.join(pythonpath),
155 }
156 )
157
158 def __exit__(
159 self,
160 exc_type: Optional[Type[BaseException]],
161 exc_val: Optional[BaseException],
162 exc_tb: Optional[TracebackType],
163 ) -> None:
164 for varname, old_value in self._save_env.items():
165 if old_value is None:
166 os.environ.pop(varname, None)
167 else:
168 os.environ[varname] = old_value
169
170 def check_requirements(
171 self, reqs: Iterable[str]
172 ) -> Tuple[Set[Tuple[str, str]], Set[str]]:
173 """Return 2 sets:
174 - conflicting requirements: set of (installed, wanted) reqs tuples
175 - missing requirements: set of reqs
176 """
177 missing = set()
178 conflicting = set()
179 if reqs:
180 env = (
181 get_environment(self._lib_dirs)
182 if hasattr(self, "_lib_dirs")
183 else get_default_environment()
184 )
185 for req_str in reqs:
186 req = Requirement(req_str)
187 # We're explicitly evaluating with an empty extra value, since build
188 # environments are not provided any mechanism to select specific extras.
189 if req.marker is not None and not req.marker.evaluate({"extra": ""}):
190 continue
191 dist = env.get_distribution(req.name)
192 if not dist:
193 missing.add(req_str)
194 continue
195 if isinstance(dist.version, Version):
196 installed_req_str = f"{req.name}=={dist.version}"
197 else:
198 installed_req_str = f"{req.name}==={dist.version}"
199 if not req.specifier.contains(dist.version, prereleases=True):
200 conflicting.add((installed_req_str, req_str))
201 # FIXME: Consider direct URL?
202 return conflicting, missing
203
204 def install_requirements(
205 self,
206 finder: "PackageFinder",
207 requirements: Iterable[str],
208 prefix_as_string: str,
209 *,
210 kind: str,
211 ) -> None:
212 prefix = self._prefixes[prefix_as_string]
213 assert not prefix.setup
214 prefix.setup = True
215 if not requirements:
216 return
217 self._install_requirements(
218 get_runnable_pip(),
219 finder,
220 requirements,
221 prefix,
222 kind=kind,
223 )
224
225 @staticmethod
226 def _install_requirements(
227 pip_runnable: str,
228 finder: "PackageFinder",
229 requirements: Iterable[str],
230 prefix: _Prefix,
231 *,
232 kind: str,
233 ) -> None:
234 args: List[str] = [
235 sys.executable,
236 pip_runnable,
237 "install",
238 "--ignore-installed",
239 "--no-user",
240 "--prefix",
241 prefix.path,
242 "--no-warn-script-location",
243 ]
244 if logger.getEffectiveLevel() <= logging.DEBUG:
245 args.append("-v")
246 for format_control in ("no_binary", "only_binary"):
247 formats = getattr(finder.format_control, format_control)
248 args.extend(
249 (
250 "--" + format_control.replace("_", "-"),
251 ",".join(sorted(formats or {":none:"})),
252 )
253 )
254
255 index_urls = finder.index_urls
256 if index_urls:
257 args.extend(["-i", index_urls[0]])
258 for extra_index in index_urls[1:]:
259 args.extend(["--extra-index-url", extra_index])
260 else:
261 args.append("--no-index")
262 for link in finder.find_links:
263 args.extend(["--find-links", link])
264
265 for host in finder.trusted_hosts:
266 args.extend(["--trusted-host", host])
267 if finder.allow_all_prereleases:
268 args.append("--pre")
269 if finder.prefer_binary:
270 args.append("--prefer-binary")
271 args.append("--")
272 args.extend(requirements)
273 extra_environ = {"_PIP_STANDALONE_CERT": where()}
274 with open_spinner(f"Installing {kind}") as spinner:
275 call_subprocess(
276 args,
277 command_desc=f"pip subprocess to install {kind}",
278 spinner=spinner,
279 extra_environ=extra_environ,
280 )
281
282
283 class NoOpBuildEnvironment(BuildEnvironment):
284 """A no-op drop-in replacement for BuildEnvironment"""
285
286 def __init__(self) -> None:
287 pass
288
289 def __enter__(self) -> None:
290 pass
291
292 def __exit__(
293 self,
294 exc_type: Optional[Type[BaseException]],
295 exc_val: Optional[BaseException],
296 exc_tb: Optional[TracebackType],
297 ) -> None:
298 pass
299
300 def cleanup(self) -> None:
301 pass
302
303 def install_requirements(
304 self,
305 finder: "PackageFinder",
306 requirements: Iterable[str],
307 prefix_as_string: str,
308 *,
309 kind: str,
310 ) -> None:
311 raise NotImplementedError()
312
[end of src/pip/_internal/build_env.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/build_env.py b/src/pip/_internal/build_env.py
--- a/src/pip/_internal/build_env.py
+++ b/src/pip/_internal/build_env.py
@@ -19,6 +19,7 @@
from pip._internal.cli.spinners import open_spinner
from pip._internal.locations import get_platlib, get_purelib, get_scheme
from pip._internal.metadata import get_default_environment, get_environment
+from pip._internal.utils.logging import VERBOSE
from pip._internal.utils.subprocess import call_subprocess
from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds
@@ -242,6 +243,8 @@
"--no-warn-script-location",
]
if logger.getEffectiveLevel() <= logging.DEBUG:
+ args.append("-vv")
+ elif logger.getEffectiveLevel() <= VERBOSE:
args.append("-v")
for format_control in ("no_binary", "only_binary"):
formats = getattr(finder.format_control, format_control)
|
{"golden_diff": "diff --git a/src/pip/_internal/build_env.py b/src/pip/_internal/build_env.py\n--- a/src/pip/_internal/build_env.py\n+++ b/src/pip/_internal/build_env.py\n@@ -19,6 +19,7 @@\n from pip._internal.cli.spinners import open_spinner\n from pip._internal.locations import get_platlib, get_purelib, get_scheme\n from pip._internal.metadata import get_default_environment, get_environment\n+from pip._internal.utils.logging import VERBOSE\n from pip._internal.utils.subprocess import call_subprocess\n from pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds\n \n@@ -242,6 +243,8 @@\n \"--no-warn-script-location\",\n ]\n if logger.getEffectiveLevel() <= logging.DEBUG:\n+ args.append(\"-vv\")\n+ elif logger.getEffectiveLevel() <= VERBOSE:\n args.append(\"-v\")\n for format_control in (\"no_binary\", \"only_binary\"):\n formats = getattr(finder.format_control, format_control)\n", "issue": "-vv is not passed to build env install subprocesses\n### Description\n\nWhile using `-vv` I noticed seeing a big pause between these two output lines:\r\n\r\n`\r\n Installing backend dependencies: started\r\n Installing backend dependencies: finished with status 'done'\r\n`\r\n\r\nClearly a lot of stuff was happening - like wheel building - but there was no output\r\n\r\nIt turns out that when -vv was introduced in #9450 this higher verbosity level was not passed onto these subprocesses\n\n### Expected behavior\n\n_No response_\n\n### pip version\n\n24.0\n\n### Python version\n\n3.9\n\n### OS\n\nRHEL\n\n### How to Reproduce\n\nCompare the logging output from\r\n\r\n```\r\nrm -rf ~/.cache/pip && rm -f *.whl && pip -vv wheel --no-binary :all: hatchling\r\n```\r\n\r\nbefore and after the patch. I'm seeing 1k lines before and 12k lines after.\n\n### Output\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).\n", "before_files": [{"content": "\"\"\"Build Environment used for isolation during sdist building\n\"\"\"\n\nimport logging\nimport os\nimport pathlib\nimport site\nimport sys\nimport textwrap\nfrom collections import OrderedDict\nfrom types import TracebackType\nfrom typing import TYPE_CHECKING, Iterable, List, Optional, Set, Tuple, Type, Union\n\nfrom pip._vendor.certifi import where\nfrom pip._vendor.packaging.requirements import Requirement\nfrom pip._vendor.packaging.version import Version\n\nfrom pip import __file__ as pip_location\nfrom pip._internal.cli.spinners import open_spinner\nfrom pip._internal.locations import get_platlib, get_purelib, get_scheme\nfrom pip._internal.metadata import get_default_environment, get_environment\nfrom pip._internal.utils.subprocess import call_subprocess\nfrom pip._internal.utils.temp_dir import TempDirectory, tempdir_kinds\n\nif TYPE_CHECKING:\n from pip._internal.index.package_finder import PackageFinder\n\nlogger = logging.getLogger(__name__)\n\n\ndef _dedup(a: str, b: str) -> Union[Tuple[str], Tuple[str, str]]:\n return (a, b) if a != b else (a,)\n\n\nclass _Prefix:\n def __init__(self, path: str) -> None:\n self.path = path\n self.setup = False\n scheme = get_scheme(\"\", prefix=path)\n self.bin_dir = scheme.scripts\n self.lib_dirs = _dedup(scheme.purelib, scheme.platlib)\n\n\ndef get_runnable_pip() -> str:\n \"\"\"Get a file to pass to a Python executable, to run the currently-running pip.\n\n This is used to run a pip subprocess, for installing requirements into the build\n environment.\n \"\"\"\n source = pathlib.Path(pip_location).resolve().parent\n\n if not 
source.is_dir():\n # This would happen if someone is using pip from inside a zip file. In that\n # case, we can use that directly.\n return str(source)\n\n return os.fsdecode(source / \"__pip-runner__.py\")\n\n\ndef _get_system_sitepackages() -> Set[str]:\n \"\"\"Get system site packages\n\n Usually from site.getsitepackages,\n but fallback on `get_purelib()/get_platlib()` if unavailable\n (e.g. in a virtualenv created by virtualenv<20)\n\n Returns normalized set of strings.\n \"\"\"\n if hasattr(site, \"getsitepackages\"):\n system_sites = site.getsitepackages()\n else:\n # virtualenv < 20 overwrites site.py without getsitepackages\n # fallback on get_purelib/get_platlib.\n # this is known to miss things, but shouldn't in the cases\n # where getsitepackages() has been removed (inside a virtualenv)\n system_sites = [get_purelib(), get_platlib()]\n return {os.path.normcase(path) for path in system_sites}\n\n\nclass BuildEnvironment:\n \"\"\"Creates and manages an isolated environment to install build deps\"\"\"\n\n def __init__(self) -> None:\n temp_dir = TempDirectory(kind=tempdir_kinds.BUILD_ENV, globally_managed=True)\n\n self._prefixes = OrderedDict(\n (name, _Prefix(os.path.join(temp_dir.path, name)))\n for name in (\"normal\", \"overlay\")\n )\n\n self._bin_dirs: List[str] = []\n self._lib_dirs: List[str] = []\n for prefix in reversed(list(self._prefixes.values())):\n self._bin_dirs.append(prefix.bin_dir)\n self._lib_dirs.extend(prefix.lib_dirs)\n\n # Customize site to:\n # - ensure .pth files are honored\n # - prevent access to system site packages\n system_sites = _get_system_sitepackages()\n\n self._site_dir = os.path.join(temp_dir.path, \"site\")\n if not os.path.exists(self._site_dir):\n os.mkdir(self._site_dir)\n with open(\n os.path.join(self._site_dir, \"sitecustomize.py\"), \"w\", encoding=\"utf-8\"\n ) as fp:\n fp.write(\n textwrap.dedent(\n \"\"\"\n import os, site, sys\n\n # First, drop system-sites related paths.\n original_sys_path = sys.path[:]\n known_paths = set()\n for path in {system_sites!r}:\n site.addsitedir(path, known_paths=known_paths)\n system_paths = set(\n os.path.normcase(path)\n for path in sys.path[len(original_sys_path):]\n )\n original_sys_path = [\n path for path in original_sys_path\n if os.path.normcase(path) not in system_paths\n ]\n sys.path = original_sys_path\n\n # Second, add lib directories.\n # ensuring .pth file are processed.\n for path in {lib_dirs!r}:\n assert not path in sys.path\n site.addsitedir(path)\n \"\"\"\n ).format(system_sites=system_sites, lib_dirs=self._lib_dirs)\n )\n\n def __enter__(self) -> None:\n self._save_env = {\n name: os.environ.get(name, None)\n for name in (\"PATH\", \"PYTHONNOUSERSITE\", \"PYTHONPATH\")\n }\n\n path = self._bin_dirs[:]\n old_path = self._save_env[\"PATH\"]\n if old_path:\n path.extend(old_path.split(os.pathsep))\n\n pythonpath = [self._site_dir]\n\n os.environ.update(\n {\n \"PATH\": os.pathsep.join(path),\n \"PYTHONNOUSERSITE\": \"1\",\n \"PYTHONPATH\": os.pathsep.join(pythonpath),\n }\n )\n\n def __exit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n for varname, old_value in self._save_env.items():\n if old_value is None:\n os.environ.pop(varname, None)\n else:\n os.environ[varname] = old_value\n\n def check_requirements(\n self, reqs: Iterable[str]\n ) -> Tuple[Set[Tuple[str, str]], Set[str]]:\n \"\"\"Return 2 sets:\n - conflicting requirements: set of (installed, wanted) reqs tuples\n - missing 
requirements: set of reqs\n \"\"\"\n missing = set()\n conflicting = set()\n if reqs:\n env = (\n get_environment(self._lib_dirs)\n if hasattr(self, \"_lib_dirs\")\n else get_default_environment()\n )\n for req_str in reqs:\n req = Requirement(req_str)\n # We're explicitly evaluating with an empty extra value, since build\n # environments are not provided any mechanism to select specific extras.\n if req.marker is not None and not req.marker.evaluate({\"extra\": \"\"}):\n continue\n dist = env.get_distribution(req.name)\n if not dist:\n missing.add(req_str)\n continue\n if isinstance(dist.version, Version):\n installed_req_str = f\"{req.name}=={dist.version}\"\n else:\n installed_req_str = f\"{req.name}==={dist.version}\"\n if not req.specifier.contains(dist.version, prereleases=True):\n conflicting.add((installed_req_str, req_str))\n # FIXME: Consider direct URL?\n return conflicting, missing\n\n def install_requirements(\n self,\n finder: \"PackageFinder\",\n requirements: Iterable[str],\n prefix_as_string: str,\n *,\n kind: str,\n ) -> None:\n prefix = self._prefixes[prefix_as_string]\n assert not prefix.setup\n prefix.setup = True\n if not requirements:\n return\n self._install_requirements(\n get_runnable_pip(),\n finder,\n requirements,\n prefix,\n kind=kind,\n )\n\n @staticmethod\n def _install_requirements(\n pip_runnable: str,\n finder: \"PackageFinder\",\n requirements: Iterable[str],\n prefix: _Prefix,\n *,\n kind: str,\n ) -> None:\n args: List[str] = [\n sys.executable,\n pip_runnable,\n \"install\",\n \"--ignore-installed\",\n \"--no-user\",\n \"--prefix\",\n prefix.path,\n \"--no-warn-script-location\",\n ]\n if logger.getEffectiveLevel() <= logging.DEBUG:\n args.append(\"-v\")\n for format_control in (\"no_binary\", \"only_binary\"):\n formats = getattr(finder.format_control, format_control)\n args.extend(\n (\n \"--\" + format_control.replace(\"_\", \"-\"),\n \",\".join(sorted(formats or {\":none:\"})),\n )\n )\n\n index_urls = finder.index_urls\n if index_urls:\n args.extend([\"-i\", index_urls[0]])\n for extra_index in index_urls[1:]:\n args.extend([\"--extra-index-url\", extra_index])\n else:\n args.append(\"--no-index\")\n for link in finder.find_links:\n args.extend([\"--find-links\", link])\n\n for host in finder.trusted_hosts:\n args.extend([\"--trusted-host\", host])\n if finder.allow_all_prereleases:\n args.append(\"--pre\")\n if finder.prefer_binary:\n args.append(\"--prefer-binary\")\n args.append(\"--\")\n args.extend(requirements)\n extra_environ = {\"_PIP_STANDALONE_CERT\": where()}\n with open_spinner(f\"Installing {kind}\") as spinner:\n call_subprocess(\n args,\n command_desc=f\"pip subprocess to install {kind}\",\n spinner=spinner,\n extra_environ=extra_environ,\n )\n\n\nclass NoOpBuildEnvironment(BuildEnvironment):\n \"\"\"A no-op drop-in replacement for BuildEnvironment\"\"\"\n\n def __init__(self) -> None:\n pass\n\n def __enter__(self) -> None:\n pass\n\n def __exit__(\n self,\n exc_type: Optional[Type[BaseException]],\n exc_val: Optional[BaseException],\n exc_tb: Optional[TracebackType],\n ) -> None:\n pass\n\n def cleanup(self) -> None:\n pass\n\n def install_requirements(\n self,\n finder: \"PackageFinder\",\n requirements: Iterable[str],\n prefix_as_string: str,\n *,\n kind: str,\n ) -> None:\n raise NotImplementedError()\n", "path": "src/pip/_internal/build_env.py"}]}
| 3,811 | 221 |
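The pip fix in this row boils down to mapping the parent logger's effective level onto the flags handed to the build-environment subprocess: at `DEBUG` or below (the user passed `-vv` or more) it now forwards `-vv`, while pip's intermediate `VERBOSE` level keeps forwarding a single `-v`. A rough standalone sketch of that mapping — the numeric value 15 for the custom level is an assumption of this demo, not quoted from pip:

```python
import logging

VERBOSE = 15  # assumed: a custom level between DEBUG (10) and INFO (20)
logging.addLevelName(VERBOSE, "VERBOSE")
logger = logging.getLogger("demo")


def verbosity_flags():
    level = logger.getEffectiveLevel()
    if level <= logging.DEBUG:
        return ["-vv"]          # caller ran with -vv (or noisier)
    if level <= VERBOSE:
        return ["-v"]           # caller ran with a single -v
    return []                   # default verbosity: no extra flag


for lvl in (logging.DEBUG, VERBOSE, logging.INFO):
    logger.setLevel(lvl)
    print(logging.getLevelName(lvl), verbosity_flags())
# DEBUG ['-vv']
# VERBOSE ['-v']
# INFO []
```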
gh_patches_debug_36522 | rasdani/github-patches | git_diff | TheAlgorithms__Python-9975 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
Traceback (most recent last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
</issue>
<code>
[start of data_structures/binary_tree/segment_tree.py]
1 import math
2
3
4 class SegmentTree:
5 def __init__(self, a):
6 self.N = len(a)
7 self.st = [0] * (
8 4 * self.N
9 ) # approximate the overall size of segment tree with array N
10 if self.N:
11 self.build(1, 0, self.N - 1)
12
13 def left(self, idx):
14 return idx * 2
15
16 def right(self, idx):
17 return idx * 2 + 1
18
19 def build(self, idx, l, r): # noqa: E741
20 if l == r:
21 self.st[idx] = A[l]
22 else:
23 mid = (l + r) // 2
24 self.build(self.left(idx), l, mid)
25 self.build(self.right(idx), mid + 1, r)
26 self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
27
28 def update(self, a, b, val):
29 return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
30
31 def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
32 """
33 update(1, 1, N, a, b, v) for update val v to [a,b]
34 """
35 if r < a or l > b:
36 return True
37 if l == r:
38 self.st[idx] = val
39 return True
40 mid = (l + r) // 2
41 self.update_recursive(self.left(idx), l, mid, a, b, val)
42 self.update_recursive(self.right(idx), mid + 1, r, a, b, val)
43 self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
44 return True
45
46 def query(self, a, b):
47 return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
48
49 def query_recursive(self, idx, l, r, a, b): # noqa: E741
50 """
51 query(1, 1, N, a, b) for query max of [a,b]
52 """
53 if r < a or l > b:
54 return -math.inf
55 if l >= a and r <= b:
56 return self.st[idx]
57 mid = (l + r) // 2
58 q1 = self.query_recursive(self.left(idx), l, mid, a, b)
59 q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)
60 return max(q1, q2)
61
62 def show_data(self):
63 show_list = []
64 for i in range(1, N + 1):
65 show_list += [self.query(i, i)]
66 print(show_list)
67
68
69 if __name__ == "__main__":
70 A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]
71 N = 15
72 segt = SegmentTree(A)
73 print(segt.query(4, 6))
74 print(segt.query(7, 11))
75 print(segt.query(7, 12))
76 segt.update(1, 3, 111)
77 print(segt.query(1, 15))
78 segt.update(7, 8, 235)
79 segt.show_data()
80
[end of data_structures/binary_tree/segment_tree.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/data_structures/binary_tree/segment_tree.py b/data_structures/binary_tree/segment_tree.py
--- a/data_structures/binary_tree/segment_tree.py
+++ b/data_structures/binary_tree/segment_tree.py
@@ -3,7 +3,8 @@
class SegmentTree:
def __init__(self, a):
- self.N = len(a)
+ self.A = a
+ self.N = len(self.A)
self.st = [0] * (
4 * self.N
) # approximate the overall size of segment tree with array N
@@ -11,14 +12,32 @@
self.build(1, 0, self.N - 1)
def left(self, idx):
+ """
+ Returns the left child index for a given index in a binary tree.
+
+ >>> s = SegmentTree([1, 2, 3])
+ >>> s.left(1)
+ 2
+ >>> s.left(2)
+ 4
+ """
return idx * 2
def right(self, idx):
+ """
+ Returns the right child index for a given index in a binary tree.
+
+ >>> s = SegmentTree([1, 2, 3])
+ >>> s.right(1)
+ 3
+ >>> s.right(2)
+ 5
+ """
return idx * 2 + 1
def build(self, idx, l, r): # noqa: E741
if l == r:
- self.st[idx] = A[l]
+ self.st[idx] = self.A[l]
else:
mid = (l + r) // 2
self.build(self.left(idx), l, mid)
@@ -26,6 +45,15 @@
self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])
def update(self, a, b, val):
+ """
+ Update the values in the segment tree in the range [a,b] with the given value.
+
+ >>> s = SegmentTree([1, 2, 3, 4, 5])
+ >>> s.update(2, 4, 10)
+ True
+ >>> s.query(1, 5)
+ 10
+ """
return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)
def update_recursive(self, idx, l, r, a, b, val): # noqa: E741
@@ -44,6 +72,15 @@
return True
def query(self, a, b):
+ """
+ Query the maximum value in the range [a,b].
+
+ >>> s = SegmentTree([1, 2, 3, 4, 5])
+ >>> s.query(1, 3)
+ 3
+ >>> s.query(1, 5)
+ 5
+ """
return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)
def query_recursive(self, idx, l, r, a, b): # noqa: E741
|
{"golden_diff": "diff --git a/data_structures/binary_tree/segment_tree.py b/data_structures/binary_tree/segment_tree.py\n--- a/data_structures/binary_tree/segment_tree.py\n+++ b/data_structures/binary_tree/segment_tree.py\n@@ -3,7 +3,8 @@\n \n class SegmentTree:\n def __init__(self, a):\n- self.N = len(a)\n+ self.A = a\n+ self.N = len(self.A)\n self.st = [0] * (\n 4 * self.N\n ) # approximate the overall size of segment tree with array N\n@@ -11,14 +12,32 @@\n self.build(1, 0, self.N - 1)\n \n def left(self, idx):\n+ \"\"\"\n+ Returns the left child index for a given index in a binary tree.\n+\n+ >>> s = SegmentTree([1, 2, 3])\n+ >>> s.left(1)\n+ 2\n+ >>> s.left(2)\n+ 4\n+ \"\"\"\n return idx * 2\n \n def right(self, idx):\n+ \"\"\"\n+ Returns the right child index for a given index in a binary tree.\n+\n+ >>> s = SegmentTree([1, 2, 3])\n+ >>> s.right(1)\n+ 3\n+ >>> s.right(2)\n+ 5\n+ \"\"\"\n return idx * 2 + 1\n \n def build(self, idx, l, r): # noqa: E741\n if l == r:\n- self.st[idx] = A[l]\n+ self.st[idx] = self.A[l]\n else:\n mid = (l + r) // 2\n self.build(self.left(idx), l, mid)\n@@ -26,6 +45,15 @@\n self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])\n \n def update(self, a, b, val):\n+ \"\"\"\n+ Update the values in the segment tree in the range [a,b] with the given value.\n+\n+ >>> s = SegmentTree([1, 2, 3, 4, 5])\n+ >>> s.update(2, 4, 10)\n+ True\n+ >>> s.query(1, 5)\n+ 10\n+ \"\"\"\n return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)\n \n def update_recursive(self, idx, l, r, a, b, val): # noqa: E741\n@@ -44,6 +72,15 @@\n return True\n \n def query(self, a, b):\n+ \"\"\"\n+ Query the maximum value in the range [a,b].\n+\n+ >>> s = SegmentTree([1, 2, 3, 4, 5])\n+ >>> s.query(1, 3)\n+ 3\n+ >>> s.query(1, 5)\n+ 5\n+ \"\"\"\n return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)\n \n def query_recursive(self, idx, l, r, a, b): # noqa: E741\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. 
Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. 
**Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "import math\n\n\nclass SegmentTree:\n def __init__(self, a):\n self.N = len(a)\n self.st = [0] * (\n 4 * self.N\n ) # approximate the overall size of segment tree with array N\n if self.N:\n self.build(1, 0, self.N - 1)\n\n def left(self, idx):\n return idx * 2\n\n def right(self, idx):\n return idx * 2 + 1\n\n def build(self, idx, l, r): # noqa: E741\n if l == r:\n self.st[idx] = A[l]\n else:\n mid = (l + r) // 2\n self.build(self.left(idx), l, mid)\n self.build(self.right(idx), mid + 1, r)\n self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])\n\n def update(self, a, b, val):\n return self.update_recursive(1, 0, self.N - 1, a - 1, b - 1, val)\n\n def update_recursive(self, idx, l, r, a, b, val): # noqa: E741\n \"\"\"\n update(1, 1, N, a, b, v) for update val v to [a,b]\n \"\"\"\n if r < a or l > b:\n return True\n if l == r:\n self.st[idx] = val\n return True\n mid = (l + r) // 2\n self.update_recursive(self.left(idx), l, mid, a, b, val)\n self.update_recursive(self.right(idx), mid + 1, r, a, b, val)\n self.st[idx] = max(self.st[self.left(idx)], self.st[self.right(idx)])\n return True\n\n def query(self, a, b):\n return self.query_recursive(1, 0, self.N - 1, a - 1, b - 1)\n\n def query_recursive(self, idx, l, r, a, b): # noqa: E741\n \"\"\"\n query(1, 1, N, a, b) for query max of [a,b]\n \"\"\"\n if r < a or l > b:\n return -math.inf\n if l >= a and r <= b:\n return self.st[idx]\n mid = (l + r) // 2\n q1 = self.query_recursive(self.left(idx), l, mid, a, b)\n q2 = self.query_recursive(self.right(idx), mid + 1, r, a, b)\n return max(q1, q2)\n\n def show_data(self):\n show_list = []\n for i in range(1, N + 1):\n show_list += [self.query(i, i)]\n print(show_list)\n\n\nif __name__ == \"__main__\":\n A = [1, 2, -4, 7, 3, -5, 6, 11, -20, 9, 14, 15, 5, 2, -8]\n N = 15\n segt = SegmentTree(A)\n print(segt.query(4, 6))\n print(segt.query(7, 11))\n print(segt.query(7, 12))\n segt.update(1, 3, 111)\n print(segt.query(1, 15))\n segt.update(7, 8, 235)\n segt.show_data()\n", "path": "data_structures/binary_tree/segment_tree.py"}]}
| 2,329 | 735 |
gh_patches_debug_7508
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-2033
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restarting Qtile leaves a traceback
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/libqtile/ipc.py", line 72, in unpack
assert len(data) >= HDRLEN
AssertionError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/qtile-cmd", line 11, in <module>
load_entry_point('qtile==0.14.2', 'console_scripts', 'qtile-cmd')()
File "/usr/lib/python3.7/site-packages/libqtile/scripts/qtile_cmd.py", line 185, in main
ret = run_function(obj, args.function[0], args.args)
File "/usr/lib/python3.7/site-packages/libqtile/scripts/qtile_cmd.py", line 130, in run_function
ret = func(*args)
File "/usr/lib/python3.7/site-packages/libqtile/command.py", line 114, in __call__
return self.call(self.selectors, self.name, *args, **kwargs)
File "/usr/lib/python3.7/site-packages/libqtile/command.py", line 251, in call
state, val = self.client.call((selectors, name, args, kwargs))
File "/usr/lib/python3.7/site-packages/libqtile/ipc.py", line 213, in call
return self.send(data)
File "/usr/lib/python3.7/site-packages/libqtile/ipc.py", line 206, in send
self.loop.run_until_complete(asyncio.wait_for(client_proto.reply, timeout=10))
File "/usr/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/usr/lib/python3.7/asyncio/tasks.py", line 416, in wait_for
return fut.result()
File "/usr/lib/python3.7/site-packages/libqtile/ipc.py", line 152, in eof_received
data, _ = _IPC.unpack(self.recv)
File "/usr/lib/python3.7/site-packages/libqtile/ipc.py", line 78, in unpack
"error reading reply!"
libqtile.ipc.IPCError: error reading reply!
</issue>
<code>
[start of libqtile/ipc.py]
1 # Copyright (c) 2008, Aldo Cortesi. All rights reserved.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20
21 """
22 A simple IPC mechanism for communicating between two local processes. We
23 use marshal to serialize data - this means that both client and server must
24 run the same Python version, and that clients must be trusted (as
25 un-marshalling untrusted data can result in arbitrary code execution).
26 """
27 import asyncio
28 import fcntl
29 import json
30 import marshal
31 import os.path
32 import socket
33 import struct
34 from typing import Any, Optional, Tuple
35
36 from libqtile.log_utils import logger
37 from libqtile.utils import get_cache_dir
38
39 HDRFORMAT = "!L"
40 HDRLEN = struct.calcsize(HDRFORMAT)
41
42 SOCKBASE = "qtilesocket.%s"
43
44
45 def find_sockfile(display: str = None):
46 """Finds the appropriate socket file for the given display"""
47 display = display or os.environ.get("DISPLAY") or ":0.0"
48 if "." not in display:
49 display += ".0"
50 cache_directory = get_cache_dir()
51 return os.path.join(cache_directory, SOCKBASE % display)
52
53
54 class IPCError(Exception):
55 pass
56
57
58 class _IPC:
59 """A helper class to handle properly packing and unpacking messages"""
60
61 @staticmethod
62 def unpack(data: bytes, *, is_json: Optional[bool] = None) -> Tuple[Any, bool]:
63 """Unpack the incoming message
64
65 Parameters
66 ----------
67 data : bytes
68 The incoming message to unpack
69 is_json : Optional[bool]
70 If the message should be unpacked as json. By default, try to
71 unpack json and fallback gracefully to marshalled bytes.
72
73 Returns
74 -------
75 Tuple[Any, bool]
76 A tuple of the unpacked object and a boolean denoting if the
77 message was deserialized using json. If True, the return message
78 should be packed as json.
79 """
80 if is_json is None or is_json:
81 try:
82 return json.loads(data.decode()), True
83 except ValueError as e:
84 if is_json:
85 raise IPCError("Unable to decode json data") from e
86
87 try:
88 assert len(data) >= HDRLEN
89 size = struct.unpack(HDRFORMAT, data[:HDRLEN])[0]
90 assert size >= len(data[HDRLEN:])
91 return marshal.loads(data[HDRLEN:HDRLEN + size]), False
92 except AssertionError as e:
93 raise IPCError(
94 "error reading reply! (probably the socket was disconnected)"
95 ) from e
96
97 @staticmethod
98 def pack(msg: Any, *, is_json: bool = False) -> bytes:
99 """Pack the object into a message to pass"""
100 if is_json:
101 json_obj = json.dumps(msg)
102 return json_obj.encode()
103
104 msg_bytes = marshal.dumps(msg)
105 size = struct.pack(HDRFORMAT, len(msg_bytes))
106 return size + msg_bytes
107
108
109 class Client:
110 def __init__(self, fname: str, is_json=False) -> None:
111 """Create a new IPC client
112
113 Parameters
114 ----------
115 fname : str
116 The file path to the file that is used to open the connection to
117 the running IPC server.
118 is_json : bool
119 Pack and unpack messages as json
120 """
121 self.fname = fname
122 self.loop = asyncio.get_event_loop()
123 self.is_json = is_json
124
125 def call(self, data: Any) -> Any:
126 return self.send(data)
127
128 def send(self, msg: Any) -> Any:
129 """Send the message and return the response from the server
130
131 If any exception is raised by the server, that will propogate out of
132 this call.
133 """
134 return self.loop.run_until_complete(self.async_send(msg))
135
136 async def async_send(self, msg: Any) -> Any:
137 """Send the message to the server
138
139 Connect to the server, then pack and send the message to the server,
140 then wait for and return the response from the server.
141 """
142 try:
143 reader, writer = await asyncio.wait_for(
144 asyncio.open_unix_connection(path=self.fname), timeout=3
145 )
146 except (ConnectionRefusedError, FileNotFoundError):
147 raise IPCError("Could not open {}".format(self.fname))
148
149 try:
150 send_data = _IPC.pack(msg, is_json=self.is_json)
151 writer.write(send_data)
152 writer.write_eof()
153
154 read_data = await asyncio.wait_for(reader.read(), timeout=10)
155 except asyncio.TimeoutError:
156 raise IPCError("Server not responding")
157 finally:
158 # see the note in Server._server_callback()
159 writer.close()
160 await writer.wait_closed()
161
162 data, _ = _IPC.unpack(read_data, is_json=self.is_json)
163
164 return data
165
166
167 class Server:
168 def __init__(self, fname: str, handler) -> None:
169 self.fname = fname
170 self.handler = handler
171 self.server = None # type: Optional[asyncio.AbstractServer]
172
173 if os.path.exists(fname):
174 os.unlink(fname)
175
176 self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)
177 flags = fcntl.fcntl(self.sock.fileno(), fcntl.F_GETFD)
178 fcntl.fcntl(self.sock.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
179 self.sock.bind(self.fname)
180
181 async def _server_callback(
182 self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter
183 ) -> None:
184 """Callback when a connection is made to the server
185
186 Read the data sent from the client, execute the requested command, and
187 send the reply back to the client.
188 """
189 try:
190 logger.debug("Connection made to server")
191 data = await reader.read()
192 logger.debug("EOF received by server")
193
194 req, is_json = _IPC.unpack(data)
195 except IPCError:
196 logger.warn("Invalid data received, closing connection")
197 else:
198 if req[1] == "restart":
199 # if we are going to restart, close the connection first, as we won't be back
200 logger.debug("Closing connection on restart")
201 writer.write_eof()
202
203 rep = self.handler(req)
204
205 result = _IPC.pack(rep, is_json=is_json)
206
207 logger.debug("Sending result on receive EOF")
208 writer.write(result)
209 logger.debug("Closing connection on receive EOF")
210 writer.write_eof()
211 finally:
212 writer.close()
213 await writer.wait_closed()
214
215 async def __aenter__(self) -> "Server":
216 """Start and return the server"""
217 await self.start()
218 return self
219
220 async def __aexit__(self, exc_type, exc_value, tb) -> None:
221 """Close and shutdown the server"""
222 await self.close()
223
224 async def start(self) -> None:
225 """Start the server"""
226 assert self.server is None
227
228 logger.debug("Starting server")
229 server_coroutine = asyncio.start_unix_server(
230 self._server_callback, sock=self.sock
231 )
232 self.server = await server_coroutine
233
234 async def close(self) -> None:
235 """Close and shutdown the server"""
236 assert self.server is not None
237
238 logger.debug("Stopping server on close")
239 self.server.close()
240 await self.server.wait_closed()
241
242 self.server = None
243
[end of libqtile/ipc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/ipc.py b/libqtile/ipc.py
--- a/libqtile/ipc.py
+++ b/libqtile/ipc.py
@@ -195,11 +195,6 @@
except IPCError:
logger.warn("Invalid data received, closing connection")
else:
- if req[1] == "restart":
- # if we are going to restart, close the connection first, as we won't be back
- logger.debug("Closing connection on restart")
- writer.write_eof()
-
rep = self.handler(req)
result = _IPC.pack(rep, is_json=is_json)
|
{"golden_diff": "diff --git a/libqtile/ipc.py b/libqtile/ipc.py\n--- a/libqtile/ipc.py\n+++ b/libqtile/ipc.py\n@@ -195,11 +195,6 @@\n except IPCError:\n logger.warn(\"Invalid data received, closing connection\")\n else:\n- if req[1] == \"restart\":\n- # if we are going to restart, close the connection first, as we won't be back\n- logger.debug(\"Closing connection on restart\")\n- writer.write_eof()\n-\n rep = self.handler(req)\n \n result = _IPC.pack(rep, is_json=is_json)\n", "issue": "Restarting Qtile leaves a traceback\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/libqtile/ipc.py\", line 72, in unpack\r\n assert len(data) >= HDRLEN\r\nAssertionError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/bin/qtile-cmd\", line 11, in <module>\r\n load_entry_point('qtile==0.14.2', 'console_scripts', 'qtile-cmd')()\r\n File \"/usr/lib/python3.7/site-packages/libqtile/scripts/qtile_cmd.py\", line 185, in main\r\n ret = run_function(obj, args.function[0], args.args)\r\n File \"/usr/lib/python3.7/site-packages/libqtile/scripts/qtile_cmd.py\", line 130, in run_function\r\n ret = func(*args)\r\n File \"/usr/lib/python3.7/site-packages/libqtile/command.py\", line 114, in __call__\r\n return self.call(self.selectors, self.name, *args, **kwargs)\r\n File \"/usr/lib/python3.7/site-packages/libqtile/command.py\", line 251, in call\r\n state, val = self.client.call((selectors, name, args, kwargs))\r\n File \"/usr/lib/python3.7/site-packages/libqtile/ipc.py\", line 213, in call\r\n return self.send(data)\r\n File \"/usr/lib/python3.7/site-packages/libqtile/ipc.py\", line 206, in send\r\n self.loop.run_until_complete(asyncio.wait_for(client_proto.reply, timeout=10))\r\n File \"/usr/lib/python3.7/asyncio/base_events.py\", line 584, in run_until_complete\r\n return future.result()\r\n File \"/usr/lib/python3.7/asyncio/tasks.py\", line 416, in wait_for\r\n return fut.result()\r\n File \"/usr/lib/python3.7/site-packages/libqtile/ipc.py\", line 152, in eof_received\r\n data, _ = _IPC.unpack(self.recv)\r\n File \"/usr/lib/python3.7/site-packages/libqtile/ipc.py\", line 78, in unpack\r\n \"error reading reply!\"\r\nlibqtile.ipc.IPCError: error readin\n", "before_files": [{"content": "# Copyright (c) 2008, Aldo Cortesi. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\n\"\"\"\n A simple IPC mechanism for communicating between two local processes. 
We\n use marshal to serialize data - this means that both client and server must\n run the same Python version, and that clients must be trusted (as\n un-marshalling untrusted data can result in arbitrary code execution).\n\"\"\"\nimport asyncio\nimport fcntl\nimport json\nimport marshal\nimport os.path\nimport socket\nimport struct\nfrom typing import Any, Optional, Tuple\n\nfrom libqtile.log_utils import logger\nfrom libqtile.utils import get_cache_dir\n\nHDRFORMAT = \"!L\"\nHDRLEN = struct.calcsize(HDRFORMAT)\n\nSOCKBASE = \"qtilesocket.%s\"\n\n\ndef find_sockfile(display: str = None):\n \"\"\"Finds the appropriate socket file for the given display\"\"\"\n display = display or os.environ.get(\"DISPLAY\") or \":0.0\"\n if \".\" not in display:\n display += \".0\"\n cache_directory = get_cache_dir()\n return os.path.join(cache_directory, SOCKBASE % display)\n\n\nclass IPCError(Exception):\n pass\n\n\nclass _IPC:\n \"\"\"A helper class to handle properly packing and unpacking messages\"\"\"\n\n @staticmethod\n def unpack(data: bytes, *, is_json: Optional[bool] = None) -> Tuple[Any, bool]:\n \"\"\"Unpack the incoming message\n\n Parameters\n ----------\n data : bytes\n The incoming message to unpack\n is_json : Optional[bool]\n If the message should be unpacked as json. By default, try to\n unpack json and fallback gracefully to marshalled bytes.\n\n Returns\n -------\n Tuple[Any, bool]\n A tuple of the unpacked object and a boolean denoting if the\n message was deserialized using json. If True, the return message\n should be packed as json.\n \"\"\"\n if is_json is None or is_json:\n try:\n return json.loads(data.decode()), True\n except ValueError as e:\n if is_json:\n raise IPCError(\"Unable to decode json data\") from e\n\n try:\n assert len(data) >= HDRLEN\n size = struct.unpack(HDRFORMAT, data[:HDRLEN])[0]\n assert size >= len(data[HDRLEN:])\n return marshal.loads(data[HDRLEN:HDRLEN + size]), False\n except AssertionError as e:\n raise IPCError(\n \"error reading reply! 
(probably the socket was disconnected)\"\n ) from e\n\n @staticmethod\n def pack(msg: Any, *, is_json: bool = False) -> bytes:\n \"\"\"Pack the object into a message to pass\"\"\"\n if is_json:\n json_obj = json.dumps(msg)\n return json_obj.encode()\n\n msg_bytes = marshal.dumps(msg)\n size = struct.pack(HDRFORMAT, len(msg_bytes))\n return size + msg_bytes\n\n\nclass Client:\n def __init__(self, fname: str, is_json=False) -> None:\n \"\"\"Create a new IPC client\n\n Parameters\n ----------\n fname : str\n The file path to the file that is used to open the connection to\n the running IPC server.\n is_json : bool\n Pack and unpack messages as json\n \"\"\"\n self.fname = fname\n self.loop = asyncio.get_event_loop()\n self.is_json = is_json\n\n def call(self, data: Any) -> Any:\n return self.send(data)\n\n def send(self, msg: Any) -> Any:\n \"\"\"Send the message and return the response from the server\n\n If any exception is raised by the server, that will propogate out of\n this call.\n \"\"\"\n return self.loop.run_until_complete(self.async_send(msg))\n\n async def async_send(self, msg: Any) -> Any:\n \"\"\"Send the message to the server\n\n Connect to the server, then pack and send the message to the server,\n then wait for and return the response from the server.\n \"\"\"\n try:\n reader, writer = await asyncio.wait_for(\n asyncio.open_unix_connection(path=self.fname), timeout=3\n )\n except (ConnectionRefusedError, FileNotFoundError):\n raise IPCError(\"Could not open {}\".format(self.fname))\n\n try:\n send_data = _IPC.pack(msg, is_json=self.is_json)\n writer.write(send_data)\n writer.write_eof()\n\n read_data = await asyncio.wait_for(reader.read(), timeout=10)\n except asyncio.TimeoutError:\n raise IPCError(\"Server not responding\")\n finally:\n # see the note in Server._server_callback()\n writer.close()\n await writer.wait_closed()\n\n data, _ = _IPC.unpack(read_data, is_json=self.is_json)\n\n return data\n\n\nclass Server:\n def __init__(self, fname: str, handler) -> None:\n self.fname = fname\n self.handler = handler\n self.server = None # type: Optional[asyncio.AbstractServer]\n\n if os.path.exists(fname):\n os.unlink(fname)\n\n self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0)\n flags = fcntl.fcntl(self.sock.fileno(), fcntl.F_GETFD)\n fcntl.fcntl(self.sock.fileno(), fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)\n self.sock.bind(self.fname)\n\n async def _server_callback(\n self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter\n ) -> None:\n \"\"\"Callback when a connection is made to the server\n\n Read the data sent from the client, execute the requested command, and\n send the reply back to the client.\n \"\"\"\n try:\n logger.debug(\"Connection made to server\")\n data = await reader.read()\n logger.debug(\"EOF received by server\")\n\n req, is_json = _IPC.unpack(data)\n except IPCError:\n logger.warn(\"Invalid data received, closing connection\")\n else:\n if req[1] == \"restart\":\n # if we are going to restart, close the connection first, as we won't be back\n logger.debug(\"Closing connection on restart\")\n writer.write_eof()\n\n rep = self.handler(req)\n\n result = _IPC.pack(rep, is_json=is_json)\n\n logger.debug(\"Sending result on receive EOF\")\n writer.write(result)\n logger.debug(\"Closing connection on receive EOF\")\n writer.write_eof()\n finally:\n writer.close()\n await writer.wait_closed()\n\n async def __aenter__(self) -> \"Server\":\n \"\"\"Start and return the server\"\"\"\n await self.start()\n return self\n\n async def 
__aexit__(self, exc_type, exc_value, tb) -> None:\n \"\"\"Close and shutdown the server\"\"\"\n await self.close()\n\n async def start(self) -> None:\n \"\"\"Start the server\"\"\"\n assert self.server is None\n\n logger.debug(\"Starting server\")\n server_coroutine = asyncio.start_unix_server(\n self._server_callback, sock=self.sock\n )\n self.server = await server_coroutine\n\n async def close(self) -> None:\n \"\"\"Close and shutdown the server\"\"\"\n assert self.server is not None\n\n logger.debug(\"Stopping server on close\")\n self.server.close()\n await self.server.wait_closed()\n\n self.server = None\n", "path": "libqtile/ipc.py"}]}
| 3,469 | 141 |
gh_patches_debug_33475
|
rasdani/github-patches
|
git_diff
|
mozmeao__snippets-service-1287
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use OAuth client ids instead of names in Firefox App Targeting.
List of client ids: https://docs.telemetry.mozilla.org/datasets/fxa_metrics/attribution.html#service-attribution
Related bug https://bugzilla.mozilla.org/show_bug.cgi?id=1596514#c5
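
For illustration only (the id below is the Firefox Monitor client id; treat the exact ids as something to verify against the list linked above), the generated targeting JEXL would switch from matching service names to matching client ids.

Before (name-based):

```
("Firefox Monitor" in attachedFxAOAuthClients|mapToProperty("name")) == true
```

After (id-based):

```
("802d56ef2a9af9fa" in attachedFxAOAuthClients|mapToProperty("id")) == true
```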
- [ ] After pushing to prod, generate JEXL for related Targets
 - [ ] ping mindy when it is complete
<!-- probot = {"419912":{"who":"glogiotatidis","what":"","when":"2020-01-08T09:00:00.000Z"}} -->
</issue>
<code>
[start of snippets/base/admin/fields.py]
1 from django.core.exceptions import ValidationError
2 from django.forms import (ChoiceField, ModelChoiceField, ModelMultipleChoiceField,
3 MultiValueField, MultipleChoiceField)
4
5 from snippets.base.models import Addon, TargetedCountry
6
7 from .widgets import JEXLMultiWidget
8
9
10 class MultipleChoiceFieldCSV(MultipleChoiceField):
11 # To be used with in snippets.base.forms.SnippetAdminForm and in
12 # combination with DynamicField. We don't directly save() this field in the
13 # database so get_prep_value has not been implemented.
14
15 def prepare_value(self, value):
16 value = super(MultipleChoiceFieldCSV, self).prepare_value(value)
17 if not isinstance(value, list):
18 value = value.split(';')
19 return value
20
21 def clean(self, value):
22 value = super(MultipleChoiceFieldCSV, self).clean(value)
23 return ';'.join(value)
24
25
26 class JEXLBaseField():
27 def to_jexl(self, value):
28 if value:
29 return self.jexl.format(attr_name=self.attr_name, value=value)
30
31 return None
32
33
34 class JEXLChoiceField(JEXLBaseField, ChoiceField):
35 def __init__(self, attr_name, *args, **kwargs):
36 self.attr_name = attr_name
37 self.jexl = '{attr_name} == {value}'
38 self.jexl = kwargs.pop('jexl', self.jexl)
39 return super().__init__(*args, **kwargs)
40
41 def to_jexl(self, value):
42 if value:
43 return self.jexl.format(attr_name=self.attr_name, value=value)
44
45
46 class JEXLModelMultipleChoiceField(JEXLBaseField, ModelMultipleChoiceField):
47 def __init__(self, attr_name, *args, **kwargs):
48 self.attr_name = attr_name
49 self.jexl = '{attr_name} in {value}'
50 self.jexl = kwargs.pop('jexl', self.jexl)
51 return super().__init__(*args, **kwargs)
52
53 def prepare_value(self, value):
54 if isinstance(value, str):
55 value = value.split(';')
56 return super().prepare_value(value)
57
58 def clean(self, value):
59 value = super().clean(value)
60 return ';'.join([str(x.id) for x in value])
61
62
63 class JEXLCountryField(JEXLModelMultipleChoiceField):
64 def to_jexl(self, value):
65 if value:
66 values = TargetedCountry.objects.filter(id__in=value.split(";"))
67 return f'region in {[x.code for x in values]}'
68 return None
69
70
71 class JEXLRangeField(JEXLBaseField, MultiValueField):
72 def __init__(self, attr_name, choices, **kwargs):
73 self.attr_name = attr_name
74 self.jexl = {
75 'minimum': '{value} <= {attr_name}',
76 'maximum': '{attr_name} < {value}'
77 }
78 self.jexl = kwargs.pop('jexl', self.jexl)
79 fields = (
80 ChoiceField(choices=choices),
81 ChoiceField(choices=choices),
82 )
83 super().__init__(fields, **kwargs)
84 self.widget = JEXLMultiWidget(widgets=[f.widget for f in self.fields],
85 template_name='widgets/jexlrange.html')
86
87 def compress(self, data_list):
88 return ','.join(data_list)
89
90 def to_jexl(self, value):
91 final_jexl = []
92 if value:
93 minimum, maximum = value.split(',')
94 if minimum:
95 final_jexl.append(
96 self.jexl['minimum'].format(attr_name=self.attr_name, value=minimum)
97 )
98 if maximum:
99 final_jexl.append(
100 self.jexl['maximum'].format(attr_name=self.attr_name, value=maximum)
101 )
102 return ' && '.join(final_jexl)
103
104 def validate(self, value):
105 minimum, maximum = value.split(',')
106 self.fields[0].validate(minimum)
107 self.fields[1].validate(maximum)
108
109 if minimum and maximum and int(minimum) > int(maximum):
110 raise ValidationError('Minimum value must be lower or equal to maximum value.')
111 return value
112
113
114 class JEXLFirefoxRangeField(JEXLRangeField):
115 def __init__(self, **kwargs):
116 # Include only versions greater than 63, where ASRSnippets exist.
117 min_version = 64
118 # Need to be able to dynamically change this, probably using
119 # product_details. Issue #855
120 max_version = 84
121
122 choices = (
123 [(None, 'No limit')] +
124 [(x, x) for x in reversed(range(min_version, max_version + 1))]
125 )
126 super().__init__('firefoxVersion', choices, **kwargs)
127
128 def validate(self, value):
129 minimum, maximum = value.split(',')
130 self.fields[0].validate(minimum)
131 self.fields[1].validate(maximum)
132
133 if minimum and maximum and minimum > maximum:
134 raise ValidationError('Minimum value must be lower or equal to maximum value.')
135 return value
136
137
138 class JEXLAddonField(MultiValueField):
139 def __init__(self, **kwargs):
140 choices = (
141 (None, "I don't care"),
142 ('not_installed', 'Not Installed'),
143 ('installed', 'Installed'),
144 )
145 fields = (
146 ChoiceField(choices=choices),
147 ModelChoiceField(queryset=Addon.objects.all(), required=False),
148 )
149 super().__init__(fields, **kwargs)
150 self.widget = JEXLMultiWidget(widgets=[f.widget for f in self.fields])
151
152 def compress(self, data_list):
153 if data_list:
154 return '{},{}'.format(data_list[0], getattr(data_list[1], 'id', ''))
155 return ''
156
157 def to_jexl(self, value):
158 check, addon_id = value.split(',')
159 if not check or not addon_id:
160 return ''
161
162 addon = Addon.objects.get(id=addon_id)
163 if check == 'not_installed':
164 jexl = '("{}" in addonsInfo.addons|keys) == false'.format(addon.guid)
165 elif check == 'installed':
166 jexl = '("{}" in addonsInfo.addons|keys) == true'.format(addon.guid)
167
168 return jexl
169
170 def validate(self, value):
171 check, addon_id = value.split(',')
172
173 self.fields[0].validate(check)
174 self.fields[1].validate(addon_id)
175
176 if check and not addon_id:
177 raise ValidationError('You must select an add-on')
178
179 if not check and addon_id:
180 raise ValidationError('You must select a check')
181 return value
182
183
184 class JEXLFirefoxServicesField(MultiValueField):
185 def __init__(self, **kwargs):
186 check_choices = (
187 (None, "I don't care"),
188 ('no_account', "User hasn't signed up for"),
189 ('has_account', 'User has signed up for'),
190 )
191 service_choices = (
192 (None, '---------'),
193 ('Firefox Lockwise', 'Firefox Lockwise'),
194 ('Firefox Monitor', 'Firefox Monitor'),
195 ('Firefox Send', 'Firefox Send'),
196 ('Firefox Private Network', 'Firefox Private Network'),
197 ('Notes', 'Notes'),
198 ('Pocket', 'Pocket'),
199 )
200 fields = (
201 ChoiceField(choices=check_choices),
202 ChoiceField(choices=service_choices),
203 )
204 super().__init__(fields, **kwargs)
205 self.widget = JEXLMultiWidget(widgets=[f.widget for f in self.fields])
206
207 def compress(self, data_list):
208 if data_list:
209 return f'{data_list[0]},{data_list[1]}'
210 return ''
211
212 def to_jexl(self, value):
213 check, service_name = value.split(',')
214 if not check or not service_name:
215 return ''
216
217 if check == 'no_account':
218 jexl = f'("{service_name}" in attachedFxAOAuthClients|mapToProperty("name")) == false'
219 elif check == 'has_account':
220 jexl = f'("{service_name}" in attachedFxAOAuthClients|mapToProperty("name")) == true'
221
222 return jexl
223
224 def validate(self, value):
225 check, service_name = value.split(',')
226
227 self.fields[0].validate(check)
228 self.fields[1].validate(service_name)
229
230 if check and not service_name:
231 raise ValidationError('You must select an Service.')
232
233 if not check and service_name:
234 raise ValidationError('You must select a check.')
235 return value
236
[end of snippets/base/admin/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snippets/base/admin/fields.py b/snippets/base/admin/fields.py
--- a/snippets/base/admin/fields.py
+++ b/snippets/base/admin/fields.py
@@ -188,14 +188,19 @@
('no_account', "User hasn't signed up for"),
('has_account', 'User has signed up for'),
)
+ # Verify IDs using
+ # curl -s https://oauth.stage.mozaws.net/v1/client/<ID> | jq .
+ # Incomplete list of IDs
+ # https://docs.telemetry.mozilla.org/datasets/fxa_metrics/attribution.html#service-attribution # noqa
service_choices = (
(None, '---------'),
- ('Firefox Lockwise', 'Firefox Lockwise'),
- ('Firefox Monitor', 'Firefox Monitor'),
- ('Firefox Send', 'Firefox Send'),
- ('Firefox Private Network', 'Firefox Private Network'),
- ('Notes', 'Notes'),
- ('Pocket', 'Pocket'),
+ ('e7ce535d93522896|98adfa37698f255b', 'Firefox Lockwise'),
+ ('802d56ef2a9af9fa', 'Firefox Monitor'),
+ ('1f30e32975ae5112|20f7931c9054d833', 'Firefox Send'),
+ ('a8c528140153d1c6|565585c1745a144d|e6eb0d1e856335fc', 'Firefox Private Network'),
+ ('7ad9917f6c55fb77', 'Firefox Reality'),
+ ('7377719276ad44ee|749818d3f2e7857f', 'Pocket'),
+
)
fields = (
ChoiceField(choices=check_choices),
@@ -210,14 +215,21 @@
return ''
def to_jexl(self, value):
- check, service_name = value.split(',')
- if not check or not service_name:
+ check, ids = value.split(',')
+ ids = ids.split('|') if ids else ''
+
+ if not check or not ids:
return ''
+ jexl = '('
+ for id in ids:
+ jexl += f'("{id}" in attachedFxAOAuthClients|mapToProperty("id")) || '
+ jexl = jexl[:-4]
+
if check == 'no_account':
- jexl = f'("{service_name}" in attachedFxAOAuthClients|mapToProperty("name")) == false'
+ jexl += ') == false'
elif check == 'has_account':
- jexl = f'("{service_name}" in attachedFxAOAuthClients|mapToProperty("name")) == true'
+ jexl += ') == true'
return jexl
|
{"golden_diff": "diff --git a/snippets/base/admin/fields.py b/snippets/base/admin/fields.py\n--- a/snippets/base/admin/fields.py\n+++ b/snippets/base/admin/fields.py\n@@ -188,14 +188,19 @@\n ('no_account', \"User hasn't signed up for\"),\n ('has_account', 'User has signed up for'),\n )\n+ # Verify IDs using\n+ # curl -s https://oauth.stage.mozaws.net/v1/client/<ID> | jq .\n+ # Incomplete list of IDs\n+ # https://docs.telemetry.mozilla.org/datasets/fxa_metrics/attribution.html#service-attribution # noqa\n service_choices = (\n (None, '---------'),\n- ('Firefox Lockwise', 'Firefox Lockwise'),\n- ('Firefox Monitor', 'Firefox Monitor'),\n- ('Firefox Send', 'Firefox Send'),\n- ('Firefox Private Network', 'Firefox Private Network'),\n- ('Notes', 'Notes'),\n- ('Pocket', 'Pocket'),\n+ ('e7ce535d93522896|98adfa37698f255b', 'Firefox Lockwise'),\n+ ('802d56ef2a9af9fa', 'Firefox Monitor'),\n+ ('1f30e32975ae5112|20f7931c9054d833', 'Firefox Send'),\n+ ('a8c528140153d1c6|565585c1745a144d|e6eb0d1e856335fc', 'Firefox Private Network'),\n+ ('7ad9917f6c55fb77', 'Firefox Reality'),\n+ ('7377719276ad44ee|749818d3f2e7857f', 'Pocket'),\n+\n )\n fields = (\n ChoiceField(choices=check_choices),\n@@ -210,14 +215,21 @@\n return ''\n \n def to_jexl(self, value):\n- check, service_name = value.split(',')\n- if not check or not service_name:\n+ check, ids = value.split(',')\n+ ids = ids.split('|') if ids else ''\n+\n+ if not check or not ids:\n return ''\n \n+ jexl = '('\n+ for id in ids:\n+ jexl += f'(\"{id}\" in attachedFxAOAuthClients|mapToProperty(\"id\")) || '\n+ jexl = jexl[:-4]\n+\n if check == 'no_account':\n- jexl = f'(\"{service_name}\" in attachedFxAOAuthClients|mapToProperty(\"name\")) == false'\n+ jexl += ') == false'\n elif check == 'has_account':\n- jexl = f'(\"{service_name}\" in attachedFxAOAuthClients|mapToProperty(\"name\")) == true'\n+ jexl += ') == true'\n \n return jexl\n", "issue": "Use OAuth client ids instead of names in Firefox App Targeting.\nList of client ids: https://docs.telemetry.mozilla.org/datasets/fxa_metrics/attribution.html#service-attribution\r\n\r\nRelated bug https://bugzilla.mozilla.org/show_bug.cgi?id=1596514#c5\r\n\r\n - [ ] After pushing to prod, generate JEXL for related Targets\r\n - [ ] ping mindy when is complete\r\n\r\n<!-- probot = {\"419912\":{\"who\":\"glogiotatidis\",\"what\":\"\",\"when\":\"2020-01-08T09:00:00.000Z\"}} -->\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.forms import (ChoiceField, ModelChoiceField, ModelMultipleChoiceField,\n MultiValueField, MultipleChoiceField)\n\nfrom snippets.base.models import Addon, TargetedCountry\n\nfrom .widgets import JEXLMultiWidget\n\n\nclass MultipleChoiceFieldCSV(MultipleChoiceField):\n # To be used with in snippets.base.forms.SnippetAdminForm and in\n # combination with DynamicField. 
We don't directly save() this field in the\n # database so get_prep_value has not been implemented.\n\n def prepare_value(self, value):\n value = super(MultipleChoiceFieldCSV, self).prepare_value(value)\n if not isinstance(value, list):\n value = value.split(';')\n return value\n\n def clean(self, value):\n value = super(MultipleChoiceFieldCSV, self).clean(value)\n return ';'.join(value)\n\n\nclass JEXLBaseField():\n def to_jexl(self, value):\n if value:\n return self.jexl.format(attr_name=self.attr_name, value=value)\n\n return None\n\n\nclass JEXLChoiceField(JEXLBaseField, ChoiceField):\n def __init__(self, attr_name, *args, **kwargs):\n self.attr_name = attr_name\n self.jexl = '{attr_name} == {value}'\n self.jexl = kwargs.pop('jexl', self.jexl)\n return super().__init__(*args, **kwargs)\n\n def to_jexl(self, value):\n if value:\n return self.jexl.format(attr_name=self.attr_name, value=value)\n\n\nclass JEXLModelMultipleChoiceField(JEXLBaseField, ModelMultipleChoiceField):\n def __init__(self, attr_name, *args, **kwargs):\n self.attr_name = attr_name\n self.jexl = '{attr_name} in {value}'\n self.jexl = kwargs.pop('jexl', self.jexl)\n return super().__init__(*args, **kwargs)\n\n def prepare_value(self, value):\n if isinstance(value, str):\n value = value.split(';')\n return super().prepare_value(value)\n\n def clean(self, value):\n value = super().clean(value)\n return ';'.join([str(x.id) for x in value])\n\n\nclass JEXLCountryField(JEXLModelMultipleChoiceField):\n def to_jexl(self, value):\n if value:\n values = TargetedCountry.objects.filter(id__in=value.split(\";\"))\n return f'region in {[x.code for x in values]}'\n return None\n\n\nclass JEXLRangeField(JEXLBaseField, MultiValueField):\n def __init__(self, attr_name, choices, **kwargs):\n self.attr_name = attr_name\n self.jexl = {\n 'minimum': '{value} <= {attr_name}',\n 'maximum': '{attr_name} < {value}'\n }\n self.jexl = kwargs.pop('jexl', self.jexl)\n fields = (\n ChoiceField(choices=choices),\n ChoiceField(choices=choices),\n )\n super().__init__(fields, **kwargs)\n self.widget = JEXLMultiWidget(widgets=[f.widget for f in self.fields],\n template_name='widgets/jexlrange.html')\n\n def compress(self, data_list):\n return ','.join(data_list)\n\n def to_jexl(self, value):\n final_jexl = []\n if value:\n minimum, maximum = value.split(',')\n if minimum:\n final_jexl.append(\n self.jexl['minimum'].format(attr_name=self.attr_name, value=minimum)\n )\n if maximum:\n final_jexl.append(\n self.jexl['maximum'].format(attr_name=self.attr_name, value=maximum)\n )\n return ' && '.join(final_jexl)\n\n def validate(self, value):\n minimum, maximum = value.split(',')\n self.fields[0].validate(minimum)\n self.fields[1].validate(maximum)\n\n if minimum and maximum and int(minimum) > int(maximum):\n raise ValidationError('Minimum value must be lower or equal to maximum value.')\n return value\n\n\nclass JEXLFirefoxRangeField(JEXLRangeField):\n def __init__(self, **kwargs):\n # Include only versions greater than 63, where ASRSnippets exist.\n min_version = 64\n # Need to be able to dynamically change this, probably using\n # product_details. 
Issue #855\n max_version = 84\n\n choices = (\n [(None, 'No limit')] +\n [(x, x) for x in reversed(range(min_version, max_version + 1))]\n )\n super().__init__('firefoxVersion', choices, **kwargs)\n\n def validate(self, value):\n minimum, maximum = value.split(',')\n self.fields[0].validate(minimum)\n self.fields[1].validate(maximum)\n\n if minimum and maximum and minimum > maximum:\n raise ValidationError('Minimum value must be lower or equal to maximum value.')\n return value\n\n\nclass JEXLAddonField(MultiValueField):\n def __init__(self, **kwargs):\n choices = (\n (None, \"I don't care\"),\n ('not_installed', 'Not Installed'),\n ('installed', 'Installed'),\n )\n fields = (\n ChoiceField(choices=choices),\n ModelChoiceField(queryset=Addon.objects.all(), required=False),\n )\n super().__init__(fields, **kwargs)\n self.widget = JEXLMultiWidget(widgets=[f.widget for f in self.fields])\n\n def compress(self, data_list):\n if data_list:\n return '{},{}'.format(data_list[0], getattr(data_list[1], 'id', ''))\n return ''\n\n def to_jexl(self, value):\n check, addon_id = value.split(',')\n if not check or not addon_id:\n return ''\n\n addon = Addon.objects.get(id=addon_id)\n if check == 'not_installed':\n jexl = '(\"{}\" in addonsInfo.addons|keys) == false'.format(addon.guid)\n elif check == 'installed':\n jexl = '(\"{}\" in addonsInfo.addons|keys) == true'.format(addon.guid)\n\n return jexl\n\n def validate(self, value):\n check, addon_id = value.split(',')\n\n self.fields[0].validate(check)\n self.fields[1].validate(addon_id)\n\n if check and not addon_id:\n raise ValidationError('You must select an add-on')\n\n if not check and addon_id:\n raise ValidationError('You must select a check')\n return value\n\n\nclass JEXLFirefoxServicesField(MultiValueField):\n def __init__(self, **kwargs):\n check_choices = (\n (None, \"I don't care\"),\n ('no_account', \"User hasn't signed up for\"),\n ('has_account', 'User has signed up for'),\n )\n service_choices = (\n (None, '---------'),\n ('Firefox Lockwise', 'Firefox Lockwise'),\n ('Firefox Monitor', 'Firefox Monitor'),\n ('Firefox Send', 'Firefox Send'),\n ('Firefox Private Network', 'Firefox Private Network'),\n ('Notes', 'Notes'),\n ('Pocket', 'Pocket'),\n )\n fields = (\n ChoiceField(choices=check_choices),\n ChoiceField(choices=service_choices),\n )\n super().__init__(fields, **kwargs)\n self.widget = JEXLMultiWidget(widgets=[f.widget for f in self.fields])\n\n def compress(self, data_list):\n if data_list:\n return f'{data_list[0]},{data_list[1]}'\n return ''\n\n def to_jexl(self, value):\n check, service_name = value.split(',')\n if not check or not service_name:\n return ''\n\n if check == 'no_account':\n jexl = f'(\"{service_name}\" in attachedFxAOAuthClients|mapToProperty(\"name\")) == false'\n elif check == 'has_account':\n jexl = f'(\"{service_name}\" in attachedFxAOAuthClients|mapToProperty(\"name\")) == true'\n\n return jexl\n\n def validate(self, value):\n check, service_name = value.split(',')\n\n self.fields[0].validate(check)\n self.fields[1].validate(service_name)\n\n if check and not service_name:\n raise ValidationError('You must select an Service.')\n\n if not check and service_name:\n raise ValidationError('You must select a check.')\n return value\n", "path": "snippets/base/admin/fields.py"}]}
| 3,137 | 690 |
gh_patches_debug_29509
|
rasdani/github-patches
|
git_diff
|
Textualize__textual-3012
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ListView method to append many
There is a method on ListView to append a single item. We could also use a method to append many items.
If we're following the same conventions as a builtin list, this would be called `ListView.extend` and would accept an Iterable of ListItems.
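
A minimal sketch of what this could look like (purely illustrative: the index handling and `AwaitMount` return just mirror the existing `append`, and `Iterable` comes from `typing` — not a final API):

```py
# on ListView:
def extend(self, items: Iterable[ListItem]) -> AwaitMount:
    """Append multiple new ListItems to the end of the ListView (sketch)."""
    items = list(items)               # accept any iterable, including generators
    await_mount = self.mount(*items)  # mount all new children in one call
    if self.index is None and items:  # the list was empty before: highlight the first item
        self.index = 0
    return await_mount
```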
</issue>
<code>
[start of src/textual/widgets/_list_view.py]
1 from __future__ import annotations
2
3 from typing import ClassVar, Optional
4
5 from textual.await_remove import AwaitRemove
6 from textual.binding import Binding, BindingType
7 from textual.containers import VerticalScroll
8 from textual.events import Mount
9 from textual.geometry import clamp
10 from textual.message import Message
11 from textual.reactive import reactive
12 from textual.widget import AwaitMount, Widget
13 from textual.widgets._list_item import ListItem
14
15
16 class ListView(VerticalScroll, can_focus=True, can_focus_children=False):
17 """A vertical list view widget.
18
19 Displays a vertical list of `ListItem`s which can be highlighted and
20 selected using the mouse or keyboard.
21
22 Attributes:
23 index: The index in the list that's currently highlighted.
24 """
25
26 BINDINGS: ClassVar[list[BindingType]] = [
27 Binding("enter", "select_cursor", "Select", show=False),
28 Binding("up", "cursor_up", "Cursor Up", show=False),
29 Binding("down", "cursor_down", "Cursor Down", show=False),
30 ]
31 """
32 | Key(s) | Description |
33 | :- | :- |
34 | enter | Select the current item. |
35 | up | Move the cursor up. |
36 | down | Move the cursor down. |
37 """
38
39 index = reactive[Optional[int]](0, always_update=True)
40
41 class Highlighted(Message, bubble=True):
42 """Posted when the highlighted item changes.
43
44 Highlighted item is controlled using up/down keys.
45 Can be handled using `on_list_view_highlighted` in a subclass of `ListView`
46 or in a parent widget in the DOM.
47 """
48
49 ALLOW_SELECTOR_MATCH = {"item"}
50 """Additional message attributes that can be used with the [`on` decorator][textual.on]."""
51
52 def __init__(self, list_view: ListView, item: ListItem | None) -> None:
53 super().__init__()
54 self.list_view: ListView = list_view
55 """The view that contains the item highlighted."""
56 self.item: ListItem | None = item
57 """The highlighted item, if there is one highlighted."""
58
59 @property
60 def control(self) -> ListView:
61 """The view that contains the item highlighted.
62
63 This is an alias for [`Highlighted.list_view`][textual.widgets.ListView.Highlighted.list_view]
64 and is used by the [`on`][textual.on] decorator.
65 """
66 return self.list_view
67
68 class Selected(Message, bubble=True):
69 """Posted when a list item is selected, e.g. when you press the enter key on it.
70
71 Can be handled using `on_list_view_selected` in a subclass of `ListView` or in
72 a parent widget in the DOM.
73 """
74
75 ALLOW_SELECTOR_MATCH = {"item"}
76 """Additional message attributes that can be used with the [`on` decorator][textual.on]."""
77
78 def __init__(self, list_view: ListView, item: ListItem) -> None:
79 super().__init__()
80 self.list_view: ListView = list_view
81 """The view that contains the item selected."""
82 self.item: ListItem = item
83 """The selected item."""
84
85 @property
86 def control(self) -> ListView:
87 """The view that contains the item selected.
88
89 This is an alias for [`Selected.list_view`][textual.widgets.ListView.Selected.list_view]
90 and is used by the [`on`][textual.on] decorator.
91 """
92 return self.list_view
93
94 def __init__(
95 self,
96 *children: ListItem,
97 initial_index: int | None = 0,
98 name: str | None = None,
99 id: str | None = None,
100 classes: str | None = None,
101 disabled: bool = False,
102 ) -> None:
103 """
104 Initialize a ListView.
105
106 Args:
107 *children: The ListItems to display in the list.
108 initial_index: The index that should be highlighted when the list is first mounted.
109 name: The name of the widget.
110 id: The unique ID of the widget used in CSS/query selection.
111 classes: The CSS classes of the widget.
112 disabled: Whether the ListView is disabled or not.
113 """
114 super().__init__(
115 *children, name=name, id=id, classes=classes, disabled=disabled
116 )
117 self._index = initial_index
118
119 def _on_mount(self, _: Mount) -> None:
120 """Ensure the ListView is fully-settled after mounting."""
121 self.index = self._index
122
123 @property
124 def highlighted_child(self) -> ListItem | None:
125 """The currently highlighted ListItem, or None if nothing is highlighted."""
126 if self.index is not None and 0 <= self.index < len(self._nodes):
127 list_item = self._nodes[self.index]
128 assert isinstance(list_item, ListItem)
129 return list_item
130 else:
131 return None
132
133 def validate_index(self, index: int | None) -> int | None:
134 """Clamp the index to the valid range, or set to None if there's nothing to highlight.
135
136 Args:
137 index: The index to clamp.
138
139 Returns:
140 The clamped index.
141 """
142 if not self._nodes or index is None:
143 return None
144 return self._clamp_index(index)
145
146 def _clamp_index(self, index: int) -> int:
147 """Clamp the index to a valid value given the current list of children"""
148 last_index = max(len(self._nodes) - 1, 0)
149 return clamp(index, 0, last_index)
150
151 def _is_valid_index(self, index: int | None) -> bool:
152 """Return True if the current index is valid given the current list of children"""
153 if index is None:
154 return False
155 return 0 <= index < len(self._nodes)
156
157 def watch_index(self, old_index: int, new_index: int) -> None:
158 """Updates the highlighting when the index changes."""
159 if self._is_valid_index(old_index):
160 old_child = self._nodes[old_index]
161 assert isinstance(old_child, ListItem)
162 old_child.highlighted = False
163
164 new_child: Widget | None
165 if self._is_valid_index(new_index):
166 new_child = self._nodes[new_index]
167 assert isinstance(new_child, ListItem)
168 new_child.highlighted = True
169 else:
170 new_child = None
171
172 self._scroll_highlighted_region()
173 self.post_message(self.Highlighted(self, new_child))
174
175 def append(self, item: ListItem) -> AwaitMount:
176 """Append a new ListItem to the end of the ListView.
177
178 Args:
179 item: The ListItem to append.
180
181 Returns:
182 An awaitable that yields control to the event loop
183 until the DOM has been updated with the new child item.
184 """
185 await_mount = self.mount(item)
186 if len(self) == 1:
187 self.index = 0
188 return await_mount
189
190 def clear(self) -> AwaitRemove:
191 """Clear all items from the ListView.
192
193 Returns:
194 An awaitable that yields control to the event loop until
195 the DOM has been updated to reflect all children being removed.
196 """
197 await_remove = self.query("ListView > ListItem").remove()
198 self.index = None
199 return await_remove
200
201 def action_select_cursor(self) -> None:
202 """Select the current item in the list."""
203 selected_child = self.highlighted_child
204 if selected_child is None:
205 return
206 self.post_message(self.Selected(self, selected_child))
207
208 def action_cursor_down(self) -> None:
209 """Highlight the next item in the list."""
210 if self.index is None:
211 self.index = 0
212 return
213 self.index += 1
214
215 def action_cursor_up(self) -> None:
216 """Highlight the previous item in the list."""
217 if self.index is None:
218 self.index = 0
219 return
220 self.index -= 1
221
222 def _on_list_item__child_clicked(self, event: ListItem._ChildClicked) -> None:
223 self.focus()
224 self.index = self._nodes.index(event.item)
225 self.post_message(self.Selected(self, event.item))
226
227 def _scroll_highlighted_region(self) -> None:
228 """Used to keep the highlighted index within vision"""
229 if self.highlighted_child is not None:
230 self.scroll_to_widget(self.highlighted_child, animate=False)
231
232 def __len__(self):
233 return len(self._nodes)
234
[end of src/textual/widgets/_list_view.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/textual/widgets/_list_view.py b/src/textual/widgets/_list_view.py
--- a/src/textual/widgets/_list_view.py
+++ b/src/textual/widgets/_list_view.py
@@ -1,6 +1,6 @@
from __future__ import annotations
-from typing import ClassVar, Optional
+from typing import ClassVar, Iterable, Optional
from textual.await_remove import AwaitRemove
from textual.binding import Binding, BindingType
@@ -172,6 +172,21 @@
self._scroll_highlighted_region()
self.post_message(self.Highlighted(self, new_child))
+ def extend(self, items: Iterable[ListItem]) -> AwaitMount:
+ """Append multiple new ListItems to the end of the ListView.
+
+ Args:
+ items: The ListItems to append.
+
+ Returns:
+ An awaitable that yields control to the event loop
+ until the DOM has been updated with the new child items.
+ """
+ await_mount = self.mount(*items)
+ if len(self) == 1:
+ self.index = 0
+ return await_mount
+
def append(self, item: ListItem) -> AwaitMount:
"""Append a new ListItem to the end of the ListView.
@@ -182,10 +197,7 @@
An awaitable that yields control to the event loop
until the DOM has been updated with the new child item.
"""
- await_mount = self.mount(item)
- if len(self) == 1:
- self.index = 0
- return await_mount
+ return self.extend([item])
def clear(self) -> AwaitRemove:
"""Clear all items from the ListView.
|
{"golden_diff": "diff --git a/src/textual/widgets/_list_view.py b/src/textual/widgets/_list_view.py\n--- a/src/textual/widgets/_list_view.py\n+++ b/src/textual/widgets/_list_view.py\n@@ -1,6 +1,6 @@\n from __future__ import annotations\n \n-from typing import ClassVar, Optional\n+from typing import ClassVar, Iterable, Optional\n \n from textual.await_remove import AwaitRemove\n from textual.binding import Binding, BindingType\n@@ -172,6 +172,21 @@\n self._scroll_highlighted_region()\n self.post_message(self.Highlighted(self, new_child))\n \n+ def extend(self, items: Iterable[ListItem]) -> AwaitMount:\n+ \"\"\"Append multiple new ListItems to the end of the ListView.\n+\n+ Args:\n+ items: The ListItems to append.\n+\n+ Returns:\n+ An awaitable that yields control to the event loop\n+ until the DOM has been updated with the new child items.\n+ \"\"\"\n+ await_mount = self.mount(*items)\n+ if len(self) == 1:\n+ self.index = 0\n+ return await_mount\n+\n def append(self, item: ListItem) -> AwaitMount:\n \"\"\"Append a new ListItem to the end of the ListView.\n \n@@ -182,10 +197,7 @@\n An awaitable that yields control to the event loop\n until the DOM has been updated with the new child item.\n \"\"\"\n- await_mount = self.mount(item)\n- if len(self) == 1:\n- self.index = 0\n- return await_mount\n+ return self.extend([item])\n \n def clear(self) -> AwaitRemove:\n \"\"\"Clear all items from the ListView.\n", "issue": "ListView method to append many\nThere is a method on ListView to append a single item. We could also use a method to append many items.\n\nIf we're following the same conventions as a builtin list, this would be called `ListView.extend` and would accept an Iterable of ListItems.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import ClassVar, Optional\n\nfrom textual.await_remove import AwaitRemove\nfrom textual.binding import Binding, BindingType\nfrom textual.containers import VerticalScroll\nfrom textual.events import Mount\nfrom textual.geometry import clamp\nfrom textual.message import Message\nfrom textual.reactive import reactive\nfrom textual.widget import AwaitMount, Widget\nfrom textual.widgets._list_item import ListItem\n\n\nclass ListView(VerticalScroll, can_focus=True, can_focus_children=False):\n \"\"\"A vertical list view widget.\n\n Displays a vertical list of `ListItem`s which can be highlighted and\n selected using the mouse or keyboard.\n\n Attributes:\n index: The index in the list that's currently highlighted.\n \"\"\"\n\n BINDINGS: ClassVar[list[BindingType]] = [\n Binding(\"enter\", \"select_cursor\", \"Select\", show=False),\n Binding(\"up\", \"cursor_up\", \"Cursor Up\", show=False),\n Binding(\"down\", \"cursor_down\", \"Cursor Down\", show=False),\n ]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter | Select the current item. |\n | up | Move the cursor up. |\n | down | Move the cursor down. 
|\n \"\"\"\n\n index = reactive[Optional[int]](0, always_update=True)\n\n class Highlighted(Message, bubble=True):\n \"\"\"Posted when the highlighted item changes.\n\n Highlighted item is controlled using up/down keys.\n Can be handled using `on_list_view_highlighted` in a subclass of `ListView`\n or in a parent widget in the DOM.\n \"\"\"\n\n ALLOW_SELECTOR_MATCH = {\"item\"}\n \"\"\"Additional message attributes that can be used with the [`on` decorator][textual.on].\"\"\"\n\n def __init__(self, list_view: ListView, item: ListItem | None) -> None:\n super().__init__()\n self.list_view: ListView = list_view\n \"\"\"The view that contains the item highlighted.\"\"\"\n self.item: ListItem | None = item\n \"\"\"The highlighted item, if there is one highlighted.\"\"\"\n\n @property\n def control(self) -> ListView:\n \"\"\"The view that contains the item highlighted.\n\n This is an alias for [`Highlighted.list_view`][textual.widgets.ListView.Highlighted.list_view]\n and is used by the [`on`][textual.on] decorator.\n \"\"\"\n return self.list_view\n\n class Selected(Message, bubble=True):\n \"\"\"Posted when a list item is selected, e.g. when you press the enter key on it.\n\n Can be handled using `on_list_view_selected` in a subclass of `ListView` or in\n a parent widget in the DOM.\n \"\"\"\n\n ALLOW_SELECTOR_MATCH = {\"item\"}\n \"\"\"Additional message attributes that can be used with the [`on` decorator][textual.on].\"\"\"\n\n def __init__(self, list_view: ListView, item: ListItem) -> None:\n super().__init__()\n self.list_view: ListView = list_view\n \"\"\"The view that contains the item selected.\"\"\"\n self.item: ListItem = item\n \"\"\"The selected item.\"\"\"\n\n @property\n def control(self) -> ListView:\n \"\"\"The view that contains the item selected.\n\n This is an alias for [`Selected.list_view`][textual.widgets.ListView.Selected.list_view]\n and is used by the [`on`][textual.on] decorator.\n \"\"\"\n return self.list_view\n\n def __init__(\n self,\n *children: ListItem,\n initial_index: int | None = 0,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ) -> None:\n \"\"\"\n Initialize a ListView.\n\n Args:\n *children: The ListItems to display in the list.\n initial_index: The index that should be highlighted when the list is first mounted.\n name: The name of the widget.\n id: The unique ID of the widget used in CSS/query selection.\n classes: The CSS classes of the widget.\n disabled: Whether the ListView is disabled or not.\n \"\"\"\n super().__init__(\n *children, name=name, id=id, classes=classes, disabled=disabled\n )\n self._index = initial_index\n\n def _on_mount(self, _: Mount) -> None:\n \"\"\"Ensure the ListView is fully-settled after mounting.\"\"\"\n self.index = self._index\n\n @property\n def highlighted_child(self) -> ListItem | None:\n \"\"\"The currently highlighted ListItem, or None if nothing is highlighted.\"\"\"\n if self.index is not None and 0 <= self.index < len(self._nodes):\n list_item = self._nodes[self.index]\n assert isinstance(list_item, ListItem)\n return list_item\n else:\n return None\n\n def validate_index(self, index: int | None) -> int | None:\n \"\"\"Clamp the index to the valid range, or set to None if there's nothing to highlight.\n\n Args:\n index: The index to clamp.\n\n Returns:\n The clamped index.\n \"\"\"\n if not self._nodes or index is None:\n return None\n return self._clamp_index(index)\n\n def _clamp_index(self, index: int) -> int:\n \"\"\"Clamp the index to a valid 
value given the current list of children\"\"\"\n last_index = max(len(self._nodes) - 1, 0)\n return clamp(index, 0, last_index)\n\n def _is_valid_index(self, index: int | None) -> bool:\n \"\"\"Return True if the current index is valid given the current list of children\"\"\"\n if index is None:\n return False\n return 0 <= index < len(self._nodes)\n\n def watch_index(self, old_index: int, new_index: int) -> None:\n \"\"\"Updates the highlighting when the index changes.\"\"\"\n if self._is_valid_index(old_index):\n old_child = self._nodes[old_index]\n assert isinstance(old_child, ListItem)\n old_child.highlighted = False\n\n new_child: Widget | None\n if self._is_valid_index(new_index):\n new_child = self._nodes[new_index]\n assert isinstance(new_child, ListItem)\n new_child.highlighted = True\n else:\n new_child = None\n\n self._scroll_highlighted_region()\n self.post_message(self.Highlighted(self, new_child))\n\n def append(self, item: ListItem) -> AwaitMount:\n \"\"\"Append a new ListItem to the end of the ListView.\n\n Args:\n item: The ListItem to append.\n\n Returns:\n An awaitable that yields control to the event loop\n until the DOM has been updated with the new child item.\n \"\"\"\n await_mount = self.mount(item)\n if len(self) == 1:\n self.index = 0\n return await_mount\n\n def clear(self) -> AwaitRemove:\n \"\"\"Clear all items from the ListView.\n\n Returns:\n An awaitable that yields control to the event loop until\n the DOM has been updated to reflect all children being removed.\n \"\"\"\n await_remove = self.query(\"ListView > ListItem\").remove()\n self.index = None\n return await_remove\n\n def action_select_cursor(self) -> None:\n \"\"\"Select the current item in the list.\"\"\"\n selected_child = self.highlighted_child\n if selected_child is None:\n return\n self.post_message(self.Selected(self, selected_child))\n\n def action_cursor_down(self) -> None:\n \"\"\"Highlight the next item in the list.\"\"\"\n if self.index is None:\n self.index = 0\n return\n self.index += 1\n\n def action_cursor_up(self) -> None:\n \"\"\"Highlight the previous item in the list.\"\"\"\n if self.index is None:\n self.index = 0\n return\n self.index -= 1\n\n def _on_list_item__child_clicked(self, event: ListItem._ChildClicked) -> None:\n self.focus()\n self.index = self._nodes.index(event.item)\n self.post_message(self.Selected(self, event.item))\n\n def _scroll_highlighted_region(self) -> None:\n \"\"\"Used to keep the highlighted index within vision\"\"\"\n if self.highlighted_child is not None:\n self.scroll_to_widget(self.highlighted_child, animate=False)\n\n def __len__(self):\n return len(self._nodes)\n", "path": "src/textual/widgets/_list_view.py"}]}
| 2,994 | 378 |
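Editor's note, not part of the dataset row above: the golden diff adds a batched `ListView.extend()` and reroutes `append()` through it. Below is a minimal usage sketch, assuming the usual Textual `App`, `ListItem`, and `Label` APIs; the app class and the fruit labels are invented for illustration.

```python
# Hypothetical demo app; only ListView.extend() comes from the patch above.
from textual.app import App, ComposeResult
from textual.widgets import Label, ListItem, ListView


class FruitsApp(App):
    def compose(self) -> ComposeResult:
        yield ListView(ListItem(Label("apple")))

    async def on_mount(self) -> None:
        list_view = self.query_one(ListView)
        # One awaitable for the whole batch instead of awaiting append() per item.
        await list_view.extend([ListItem(Label(name)) for name in ("banana", "cherry")])


if __name__ == "__main__":
    FruitsApp().run()
```

The return value mirrors `append()`: an `AwaitMount` the caller can await so the DOM is settled before querying the newly added items.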
gh_patches_debug_2785 | rasdani/github-patches | git_diff | dynaconf__dynaconf-769 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[RFC] Resolve deprecation warning for deprecated property kv
**Is your feature request related to a problem? Please describe.**
Yes. Currently we are hitting the deprecation warning in hvac 0.11, since the kv property is deprecated and we are advised to use it from `Client.secrets`.
Clear Warning:
DeprecationWarning: Call to deprecated property 'kv'. This property will be removed in version '0.9.0' Please use the 'kv' property on the 'Client.secrets' attribute moving forward
**Describe the solution you'd like**
Remove the usage of the kv property directly in dynaconf and use it from `Client.secrets`.
**Describe alternatives you've considered**
The alternative is not required.
[RFC] Resolve deprecation warning for deprecated property kv
**Is your feature request related to a problem? Please describe.**
Yes. Currently we are hitting the deprecation warning in hvac 0.11, since the kv property is deprecated and we are advised to use it from `Client.secrets`.
Clear Warning:
DeprecationWarning: Call to deprecated property 'kv'. This property will be removed in version '0.9.0' Please use the 'kv' property on the 'Client.secrets' attribute moving forward
**Describe the solution you'd like**
Remove the usage of the kv property directly in dynaconf and use it from `Client.secrets`.
**Describe alternatives you've considered**
The alternative is not required.
</issue>
<code>
[start of dynaconf/loaders/vault_loader.py]
1 # docker run -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -p 8200:8200 vault
2 # pip install hvac
3 from __future__ import annotations
4
5 from dynaconf.utils import build_env_list
6 from dynaconf.utils.parse_conf import parse_conf_data
7
8 try:
9 import boto3
10 except ImportError:
11 boto3 = None
12
13 try:
14 from hvac import Client
15 from hvac.exceptions import InvalidPath
16 except ImportError:
17 raise ImportError(
18 "vault package is not installed in your environment. "
19 "`pip install dynaconf[vault]` or disable the vault loader with "
20 "export VAULT_ENABLED_FOR_DYNACONF=false"
21 )
22
23
24 IDENTIFIER = "vault"
25
26
27 # backwards compatibility
28 _get_env_list = build_env_list
29
30
31 def get_client(obj):
32 client = Client(
33 **{k: v for k, v in obj.VAULT_FOR_DYNACONF.items() if v is not None}
34 )
35 if obj.VAULT_ROLE_ID_FOR_DYNACONF is not None:
36 client.auth_approle(
37 role_id=obj.VAULT_ROLE_ID_FOR_DYNACONF,
38 secret_id=obj.get("VAULT_SECRET_ID_FOR_DYNACONF"),
39 )
40 elif obj.VAULT_ROOT_TOKEN_FOR_DYNACONF is not None:
41 client.token = obj.VAULT_ROOT_TOKEN_FOR_DYNACONF
42 elif obj.VAULT_AUTH_WITH_IAM_FOR_DYNACONF:
43 if boto3 is None:
44 raise ImportError(
45 "boto3 package is not installed in your environment. "
46 "`pip install boto3` or disable the VAULT_AUTH_WITH_IAM"
47 )
48
49 session = boto3.Session()
50 credentials = session.get_credentials()
51 client.auth.aws.iam_login(
52 credentials.access_key,
53 credentials.secret_key,
54 credentials.token,
55 role=obj.VAULT_AUTH_ROLE_FOR_DYNACONF,
56 )
57 assert client.is_authenticated(), (
58 "Vault authentication error: is VAULT_TOKEN_FOR_DYNACONF or "
59 "VAULT_ROLE_ID_FOR_DYNACONF defined?"
60 )
61 client.kv.default_kv_version = obj.VAULT_KV_VERSION_FOR_DYNACONF
62 return client
63
64
65 def load(obj, env=None, silent=None, key=None):
66 """Reads and loads in to "settings" a single key or all keys from vault
67
68 :param obj: the settings instance
69 :param env: settings env default='DYNACONF'
70 :param silent: if errors should raise
71 :param key: if defined load a single key, else load all in env
72 :return: None
73 """
74 client = get_client(obj)
75 try:
76 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:
77 dirs = client.secrets.kv.v2.list_secrets(
78 path=obj.VAULT_PATH_FOR_DYNACONF,
79 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
80 )["data"]["keys"]
81 else:
82 dirs = client.secrets.kv.v1.list_secrets(
83 path=obj.VAULT_PATH_FOR_DYNACONF,
84 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
85 )["data"]["keys"]
86 except InvalidPath:
87 # The given path is not a directory
88 dirs = []
89 # First look for secrets into environments less store
90 if not obj.ENVIRONMENTS_FOR_DYNACONF:
91 # By adding '', dynaconf will now read secrets from environments-less
92 # store which are not written by `dynaconf write` to Vault store
93 env_list = [obj.MAIN_ENV_FOR_DYNACONF.lower(), ""]
94 # Finally, look for secret into all the environments
95 else:
96 env_list = dirs + build_env_list(obj, env)
97 for env in env_list:
98 path = "/".join([obj.VAULT_PATH_FOR_DYNACONF, env])
99 try:
100 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:
101 data = client.secrets.kv.v2.read_secret_version(
102 path, mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF
103 )
104 else:
105 data = client.secrets.kv.read_secret(
106 "data/" + path,
107 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
108 )
109 except InvalidPath:
110 # If the path doesn't exist, ignore it and set data to None
111 data = None
112 if data:
113 # There seems to be a data dict within a data dict,
114 # extract the inner data
115 data = data.get("data", {}).get("data", {})
116 try:
117 if (
118 obj.VAULT_KV_VERSION_FOR_DYNACONF == 2
119 and obj.ENVIRONMENTS_FOR_DYNACONF
120 and data
121 ):
122 data = data.get("data", {})
123 if data and key:
124 value = parse_conf_data(
125 data.get(key), tomlfy=True, box_settings=obj
126 )
127 if value:
128 obj.set(key, value)
129 elif data:
130 obj.update(data, loader_identifier=IDENTIFIER, tomlfy=True)
131 except Exception:
132 if silent:
133 return False
134 raise
135
136
137 def write(obj, data=None, **kwargs):
138 """Write a value in to loader source
139
140 :param obj: settings object
141 :param data: vars to be stored
142 :param kwargs: vars to be stored
143 :return:
144 """
145 if obj.VAULT_ENABLED_FOR_DYNACONF is False:
146 raise RuntimeError(
147 "Vault is not configured \n"
148 "export VAULT_ENABLED_FOR_DYNACONF=true\n"
149 "and configure the VAULT_FOR_DYNACONF_* variables"
150 )
151 data = data or {}
152 data.update(kwargs)
153 if not data:
154 raise AttributeError("Data must be provided")
155 data = {"data": data}
156 client = get_client(obj)
157 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 1:
158 mount_point = obj.VAULT_MOUNT_POINT_FOR_DYNACONF + "/data"
159 else:
160 mount_point = obj.VAULT_MOUNT_POINT_FOR_DYNACONF
161 path = "/".join([obj.VAULT_PATH_FOR_DYNACONF, obj.current_env.lower()])
162 client.secrets.kv.create_or_update_secret(
163 path, secret=data, mount_point=mount_point
164 )
165 load(obj)
166
167
168 def list_envs(obj, path=""):
169 """
170 This function is a helper to get a list of all the existing envs in
171 the source of data, the use case is:
172 existing_envs = vault_loader.list_envs(settings)
173 for env in exiting_envs:
174 with settings.using_env(env): # switch to the env
175 # do something with a key of that env
176
177 :param obj: settings object
178 :param path: path to the vault secrets
179 :return: list containing all the keys at the given path
180 """
181 client = get_client(obj)
182 path = path or obj.get("VAULT_PATH_FOR_DYNACONF")
183 try:
184 return client.list(f"/secret/metadata/{path}")["data"]["keys"]
185 except TypeError:
186 return []
187
[end of dynaconf/loaders/vault_loader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dynaconf/loaders/vault_loader.py b/dynaconf/loaders/vault_loader.py
--- a/dynaconf/loaders/vault_loader.py
+++ b/dynaconf/loaders/vault_loader.py
@@ -58,7 +58,7 @@
"Vault authentication error: is VAULT_TOKEN_FOR_DYNACONF or "
"VAULT_ROLE_ID_FOR_DYNACONF defined?"
)
- client.kv.default_kv_version = obj.VAULT_KV_VERSION_FOR_DYNACONF
+ client.secrets.kv.default_kv_version = obj.VAULT_KV_VERSION_FOR_DYNACONF
return client
|
{"golden_diff": "diff --git a/dynaconf/loaders/vault_loader.py b/dynaconf/loaders/vault_loader.py\n--- a/dynaconf/loaders/vault_loader.py\n+++ b/dynaconf/loaders/vault_loader.py\n@@ -58,7 +58,7 @@\n \"Vault authentication error: is VAULT_TOKEN_FOR_DYNACONF or \"\n \"VAULT_ROLE_ID_FOR_DYNACONF defined?\"\n )\n- client.kv.default_kv_version = obj.VAULT_KV_VERSION_FOR_DYNACONF\n+ client.secrets.kv.default_kv_version = obj.VAULT_KV_VERSION_FOR_DYNACONF\n return client\n", "issue": "[RFC] Resolve depreciation warning for depreciated property kv\n**Is your feature request related to a problem? Please describe.**\r\nYes, Currently we are hitting the depreciation warning in hvac 0.11 since the kv property is depreciated and adviced to use from `Client.secrets`\r\n\r\nClear Warning:\r\nDeprecationWarning: Call to deprecated property 'kv'. This property will be removed in version '0.9.0' Please use the 'kv' property on the 'Client.secrets' attribute moving forward\r\n\r\n**Describe the solution you'd like**\r\nRemove the usage of kv property directly in dynaconf and use if from `Client.secrets`\r\n\r\n**Describe alternatives you've considered**\r\nThe alternative is not required.\r\n\r\n\n[RFC] Resolve depreciation warning for depreciated property kv\n**Is your feature request related to a problem? Please describe.**\r\nYes, Currently we are hitting the depreciation warning in hvac 0.11 since the kv property is depreciated and adviced to use from `Client.secrets`\r\n\r\nClear Warning:\r\nDeprecationWarning: Call to deprecated property 'kv'. This property will be removed in version '0.9.0' Please use the 'kv' property on the 'Client.secrets' attribute moving forward\r\n\r\n**Describe the solution you'd like**\r\nRemove the usage of kv property directly in dynaconf and use if from `Client.secrets`\r\n\r\n**Describe alternatives you've considered**\r\nThe alternative is not required.\r\n\r\n\n", "before_files": [{"content": "# docker run -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -p 8200:8200 vault\n# pip install hvac\nfrom __future__ import annotations\n\nfrom dynaconf.utils import build_env_list\nfrom dynaconf.utils.parse_conf import parse_conf_data\n\ntry:\n import boto3\nexcept ImportError:\n boto3 = None\n\ntry:\n from hvac import Client\n from hvac.exceptions import InvalidPath\nexcept ImportError:\n raise ImportError(\n \"vault package is not installed in your environment. \"\n \"`pip install dynaconf[vault]` or disable the vault loader with \"\n \"export VAULT_ENABLED_FOR_DYNACONF=false\"\n )\n\n\nIDENTIFIER = \"vault\"\n\n\n# backwards compatibility\n_get_env_list = build_env_list\n\n\ndef get_client(obj):\n client = Client(\n **{k: v for k, v in obj.VAULT_FOR_DYNACONF.items() if v is not None}\n )\n if obj.VAULT_ROLE_ID_FOR_DYNACONF is not None:\n client.auth_approle(\n role_id=obj.VAULT_ROLE_ID_FOR_DYNACONF,\n secret_id=obj.get(\"VAULT_SECRET_ID_FOR_DYNACONF\"),\n )\n elif obj.VAULT_ROOT_TOKEN_FOR_DYNACONF is not None:\n client.token = obj.VAULT_ROOT_TOKEN_FOR_DYNACONF\n elif obj.VAULT_AUTH_WITH_IAM_FOR_DYNACONF:\n if boto3 is None:\n raise ImportError(\n \"boto3 package is not installed in your environment. 
\"\n \"`pip install boto3` or disable the VAULT_AUTH_WITH_IAM\"\n )\n\n session = boto3.Session()\n credentials = session.get_credentials()\n client.auth.aws.iam_login(\n credentials.access_key,\n credentials.secret_key,\n credentials.token,\n role=obj.VAULT_AUTH_ROLE_FOR_DYNACONF,\n )\n assert client.is_authenticated(), (\n \"Vault authentication error: is VAULT_TOKEN_FOR_DYNACONF or \"\n \"VAULT_ROLE_ID_FOR_DYNACONF defined?\"\n )\n client.kv.default_kv_version = obj.VAULT_KV_VERSION_FOR_DYNACONF\n return client\n\n\ndef load(obj, env=None, silent=None, key=None):\n \"\"\"Reads and loads in to \"settings\" a single key or all keys from vault\n\n :param obj: the settings instance\n :param env: settings env default='DYNACONF'\n :param silent: if errors should raise\n :param key: if defined load a single key, else load all in env\n :return: None\n \"\"\"\n client = get_client(obj)\n try:\n if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:\n dirs = client.secrets.kv.v2.list_secrets(\n path=obj.VAULT_PATH_FOR_DYNACONF,\n mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\n )[\"data\"][\"keys\"]\n else:\n dirs = client.secrets.kv.v1.list_secrets(\n path=obj.VAULT_PATH_FOR_DYNACONF,\n mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\n )[\"data\"][\"keys\"]\n except InvalidPath:\n # The given path is not a directory\n dirs = []\n # First look for secrets into environments less store\n if not obj.ENVIRONMENTS_FOR_DYNACONF:\n # By adding '', dynaconf will now read secrets from environments-less\n # store which are not written by `dynaconf write` to Vault store\n env_list = [obj.MAIN_ENV_FOR_DYNACONF.lower(), \"\"]\n # Finally, look for secret into all the environments\n else:\n env_list = dirs + build_env_list(obj, env)\n for env in env_list:\n path = \"/\".join([obj.VAULT_PATH_FOR_DYNACONF, env])\n try:\n if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:\n data = client.secrets.kv.v2.read_secret_version(\n path, mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF\n )\n else:\n data = client.secrets.kv.read_secret(\n \"data/\" + path,\n mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,\n )\n except InvalidPath:\n # If the path doesn't exist, ignore it and set data to None\n data = None\n if data:\n # There seems to be a data dict within a data dict,\n # extract the inner data\n data = data.get(\"data\", {}).get(\"data\", {})\n try:\n if (\n obj.VAULT_KV_VERSION_FOR_DYNACONF == 2\n and obj.ENVIRONMENTS_FOR_DYNACONF\n and data\n ):\n data = data.get(\"data\", {})\n if data and key:\n value = parse_conf_data(\n data.get(key), tomlfy=True, box_settings=obj\n )\n if value:\n obj.set(key, value)\n elif data:\n obj.update(data, loader_identifier=IDENTIFIER, tomlfy=True)\n except Exception:\n if silent:\n return False\n raise\n\n\ndef write(obj, data=None, **kwargs):\n \"\"\"Write a value in to loader source\n\n :param obj: settings object\n :param data: vars to be stored\n :param kwargs: vars to be stored\n :return:\n \"\"\"\n if obj.VAULT_ENABLED_FOR_DYNACONF is False:\n raise RuntimeError(\n \"Vault is not configured \\n\"\n \"export VAULT_ENABLED_FOR_DYNACONF=true\\n\"\n \"and configure the VAULT_FOR_DYNACONF_* variables\"\n )\n data = data or {}\n data.update(kwargs)\n if not data:\n raise AttributeError(\"Data must be provided\")\n data = {\"data\": data}\n client = get_client(obj)\n if obj.VAULT_KV_VERSION_FOR_DYNACONF == 1:\n mount_point = obj.VAULT_MOUNT_POINT_FOR_DYNACONF + \"/data\"\n else:\n mount_point = obj.VAULT_MOUNT_POINT_FOR_DYNACONF\n path = \"/\".join([obj.VAULT_PATH_FOR_DYNACONF, obj.current_env.lower()])\n 
client.secrets.kv.create_or_update_secret(\n path, secret=data, mount_point=mount_point\n )\n load(obj)\n\n\ndef list_envs(obj, path=\"\"):\n \"\"\"\n This function is a helper to get a list of all the existing envs in\n the source of data, the use case is:\n existing_envs = vault_loader.list_envs(settings)\n for env in exiting_envs:\n with settings.using_env(env): # switch to the env\n # do something with a key of that env\n\n :param obj: settings object\n :param path: path to the vault secrets\n :return: list containing all the keys at the given path\n \"\"\"\n client = get_client(obj)\n path = path or obj.get(\"VAULT_PATH_FOR_DYNACONF\")\n try:\n return client.list(f\"/secret/metadata/{path}\")[\"data\"][\"keys\"]\n except TypeError:\n return []\n", "path": "dynaconf/loaders/vault_loader.py"}]}
| 2,916 | 146 |
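Editor's note, not part of the dataset row above: the one-line fix moves the kv-version setting onto `Client.secrets.kv`, the non-deprecated accessor. A sketch of that access pattern follows; the URL, token, secret path, and mount point are placeholders.

```python
import hvac

client = hvac.Client(url="http://127.0.0.1:8200", token="myroot")  # placeholder dev server
client.secrets.kv.default_kv_version = 2  # what the patch sets (was client.kv.default_kv_version)

# Reading a secret through the same non-deprecated accessor:
response = client.secrets.kv.v2.read_secret_version(path="myapp", mount_point="secret")
print(response["data"]["data"])
```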
gh_patches_debug_9131 | rasdani/github-patches | git_diff | dask__dask-7623 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Note on `Add x.str.cat (#3028)` (lines 125-126)
It's not clear to me why `String.str.cat` isn't supported in Dask when `others == None`. Not having the ability to concat a single series into a string is a significant cost, I think. Unless there's a reason for this condition, I recommend deleting lines 126-126.
</issue>
<code>
[start of dask/dataframe/accessor.py]
1 from functools import partial
2
3 import numpy as np
4 import pandas as pd
5
6 from ..utils import derived_from
7
8
9 def maybe_wrap_pandas(obj, x):
10 if isinstance(x, np.ndarray):
11 if isinstance(obj, pd.Series):
12 return pd.Series(x, index=obj.index, dtype=x.dtype)
13 return pd.Index(x)
14 return x
15
16
17 class Accessor:
18 """
19 Base class for pandas Accessor objects cat, dt, and str.
20
21 Notes
22 -----
23 Subclasses should define ``_accessor_name``
24 """
25
26 _not_implemented = set()
27
28 def __init__(self, series):
29 from .core import Series
30
31 if not isinstance(series, Series):
32 raise ValueError("Accessor cannot be initialized")
33
34 series_meta = series._meta
35 if hasattr(series_meta, "to_series"): # is index-like
36 series_meta = series_meta.to_series()
37 meta = getattr(series_meta, self._accessor_name)
38
39 self._meta = meta
40 self._series = series
41
42 @staticmethod
43 def _delegate_property(obj, accessor, attr):
44 out = getattr(getattr(obj, accessor, obj), attr)
45 return maybe_wrap_pandas(obj, out)
46
47 @staticmethod
48 def _delegate_method(obj, accessor, attr, args, kwargs):
49 out = getattr(getattr(obj, accessor, obj), attr)(*args, **kwargs)
50 return maybe_wrap_pandas(obj, out)
51
52 def _property_map(self, attr):
53 meta = self._delegate_property(self._series._meta, self._accessor_name, attr)
54 token = "%s-%s" % (self._accessor_name, attr)
55 return self._series.map_partitions(
56 self._delegate_property, self._accessor_name, attr, token=token, meta=meta
57 )
58
59 def _function_map(self, attr, *args, **kwargs):
60 if "meta" in kwargs:
61 meta = kwargs.pop("meta")
62 else:
63 meta = self._delegate_method(
64 self._series._meta_nonempty, self._accessor_name, attr, args, kwargs
65 )
66 token = "%s-%s" % (self._accessor_name, attr)
67 return self._series.map_partitions(
68 self._delegate_method,
69 self._accessor_name,
70 attr,
71 args,
72 kwargs,
73 meta=meta,
74 token=token,
75 )
76
77 @property
78 def _delegates(self):
79 return set(dir(self._meta)) - self._not_implemented
80
81 def __dir__(self):
82 o = self._delegates
83 o.update(self.__dict__)
84 o.update(dir(type(self)))
85 return list(o)
86
87 def __getattr__(self, key):
88 if key in self._delegates:
89 if callable(getattr(self._meta, key)):
90 return partial(self._function_map, key)
91 else:
92 return self._property_map(key)
93 else:
94 raise AttributeError(key)
95
96
97 class DatetimeAccessor(Accessor):
98 """Accessor object for datetimelike properties of the Series values.
99
100 Examples
101 --------
102
103 >>> s.dt.microsecond # doctest: +SKIP
104 """
105
106 _accessor_name = "dt"
107
108
109 class StringAccessor(Accessor):
110 """Accessor object for string properties of the Series values.
111
112 Examples
113 --------
114
115 >>> s.str.lower() # doctest: +SKIP
116 """
117
118 _accessor_name = "str"
119 _not_implemented = {"get_dummies"}
120
121 @derived_from(pd.core.strings.StringMethods)
122 def split(self, pat=None, n=-1, expand=False):
123 if expand:
124 if n == -1:
125 raise NotImplementedError(
126 "To use the expand parameter you must specify the number of "
127 "expected splits with the n= parameter. Usually n splits result in n+1 output columns."
128 )
129 else:
130 delimiter = " " if pat is None else pat
131 meta = self._series._meta._constructor(
132 [delimiter.join(["a"] * (n + 1))],
133 index=self._series._meta_nonempty[:1].index,
134 )
135 meta = meta.str.split(n=n, expand=expand, pat=pat)
136 else:
137 meta = (self._series.name, object)
138 return self._function_map("split", pat=pat, n=n, expand=expand, meta=meta)
139
140 @derived_from(pd.core.strings.StringMethods)
141 def cat(self, others=None, sep=None, na_rep=None):
142 from .core import Index, Series
143
144 if others is None:
145 raise NotImplementedError("x.str.cat() with `others == None`")
146
147 valid_types = (Series, Index, pd.Series, pd.Index)
148 if isinstance(others, valid_types):
149 others = [others]
150 elif not all(isinstance(a, valid_types) for a in others):
151 raise TypeError("others must be Series/Index")
152
153 return self._series.map_partitions(
154 str_cat, *others, sep=sep, na_rep=na_rep, meta=self._series._meta
155 )
156
157 @derived_from(pd.core.strings.StringMethods)
158 def extractall(self, pat, flags=0):
159 return self._series.map_partitions(
160 str_extractall, pat, flags, token="str-extractall"
161 )
162
163 def __getitem__(self, index):
164 return self._series.map_partitions(str_get, index, meta=self._series._meta)
165
166
167 def str_extractall(series, pat, flags):
168 return series.str.extractall(pat, flags=flags)
169
170
171 def str_get(series, index):
172 """ Implements series.str[index] """
173 return series.str[index]
174
175
176 def str_cat(self, *others, **kwargs):
177 return self.str.cat(others=others, **kwargs)
178
[end of dask/dataframe/accessor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dask/dataframe/accessor.py b/dask/dataframe/accessor.py
--- a/dask/dataframe/accessor.py
+++ b/dask/dataframe/accessor.py
@@ -142,7 +142,15 @@
from .core import Index, Series
if others is None:
- raise NotImplementedError("x.str.cat() with `others == None`")
+
+ def str_cat_none(x):
+
+ if isinstance(x, (Series, Index)):
+ x = x.compute()
+
+ return x.str.cat(sep=sep, na_rep=na_rep)
+
+ return self._series.reduction(chunk=str_cat_none, aggregate=str_cat_none)
valid_types = (Series, Index, pd.Series, pd.Index)
if isinstance(others, valid_types):
|
{"golden_diff": "diff --git a/dask/dataframe/accessor.py b/dask/dataframe/accessor.py\n--- a/dask/dataframe/accessor.py\n+++ b/dask/dataframe/accessor.py\n@@ -142,7 +142,15 @@\n from .core import Index, Series\n \n if others is None:\n- raise NotImplementedError(\"x.str.cat() with `others == None`\")\n+\n+ def str_cat_none(x):\n+\n+ if isinstance(x, (Series, Index)):\n+ x = x.compute()\n+\n+ return x.str.cat(sep=sep, na_rep=na_rep)\n+\n+ return self._series.reduction(chunk=str_cat_none, aggregate=str_cat_none)\n \n valid_types = (Series, Index, pd.Series, pd.Index)\n if isinstance(others, valid_types):\n", "issue": "Note on `Add x.str.cat (#3028)` (lines 125-126)\nIt's not clear to me why `String.str.cat` isn't supported in Dask when `others == None`. Not having the ability to concat a single series into a string is a significant cost, I think. Unless there's a reason for this condition, I recommend deleting lines 126-126.\r\n\n", "before_files": [{"content": "from functools import partial\n\nimport numpy as np\nimport pandas as pd\n\nfrom ..utils import derived_from\n\n\ndef maybe_wrap_pandas(obj, x):\n if isinstance(x, np.ndarray):\n if isinstance(obj, pd.Series):\n return pd.Series(x, index=obj.index, dtype=x.dtype)\n return pd.Index(x)\n return x\n\n\nclass Accessor:\n \"\"\"\n Base class for pandas Accessor objects cat, dt, and str.\n\n Notes\n -----\n Subclasses should define ``_accessor_name``\n \"\"\"\n\n _not_implemented = set()\n\n def __init__(self, series):\n from .core import Series\n\n if not isinstance(series, Series):\n raise ValueError(\"Accessor cannot be initialized\")\n\n series_meta = series._meta\n if hasattr(series_meta, \"to_series\"): # is index-like\n series_meta = series_meta.to_series()\n meta = getattr(series_meta, self._accessor_name)\n\n self._meta = meta\n self._series = series\n\n @staticmethod\n def _delegate_property(obj, accessor, attr):\n out = getattr(getattr(obj, accessor, obj), attr)\n return maybe_wrap_pandas(obj, out)\n\n @staticmethod\n def _delegate_method(obj, accessor, attr, args, kwargs):\n out = getattr(getattr(obj, accessor, obj), attr)(*args, **kwargs)\n return maybe_wrap_pandas(obj, out)\n\n def _property_map(self, attr):\n meta = self._delegate_property(self._series._meta, self._accessor_name, attr)\n token = \"%s-%s\" % (self._accessor_name, attr)\n return self._series.map_partitions(\n self._delegate_property, self._accessor_name, attr, token=token, meta=meta\n )\n\n def _function_map(self, attr, *args, **kwargs):\n if \"meta\" in kwargs:\n meta = kwargs.pop(\"meta\")\n else:\n meta = self._delegate_method(\n self._series._meta_nonempty, self._accessor_name, attr, args, kwargs\n )\n token = \"%s-%s\" % (self._accessor_name, attr)\n return self._series.map_partitions(\n self._delegate_method,\n self._accessor_name,\n attr,\n args,\n kwargs,\n meta=meta,\n token=token,\n )\n\n @property\n def _delegates(self):\n return set(dir(self._meta)) - self._not_implemented\n\n def __dir__(self):\n o = self._delegates\n o.update(self.__dict__)\n o.update(dir(type(self)))\n return list(o)\n\n def __getattr__(self, key):\n if key in self._delegates:\n if callable(getattr(self._meta, key)):\n return partial(self._function_map, key)\n else:\n return self._property_map(key)\n else:\n raise AttributeError(key)\n\n\nclass DatetimeAccessor(Accessor):\n \"\"\"Accessor object for datetimelike properties of the Series values.\n\n Examples\n --------\n\n >>> s.dt.microsecond # doctest: +SKIP\n \"\"\"\n\n _accessor_name = \"dt\"\n\n\nclass 
StringAccessor(Accessor):\n \"\"\"Accessor object for string properties of the Series values.\n\n Examples\n --------\n\n >>> s.str.lower() # doctest: +SKIP\n \"\"\"\n\n _accessor_name = \"str\"\n _not_implemented = {\"get_dummies\"}\n\n @derived_from(pd.core.strings.StringMethods)\n def split(self, pat=None, n=-1, expand=False):\n if expand:\n if n == -1:\n raise NotImplementedError(\n \"To use the expand parameter you must specify the number of \"\n \"expected splits with the n= parameter. Usually n splits result in n+1 output columns.\"\n )\n else:\n delimiter = \" \" if pat is None else pat\n meta = self._series._meta._constructor(\n [delimiter.join([\"a\"] * (n + 1))],\n index=self._series._meta_nonempty[:1].index,\n )\n meta = meta.str.split(n=n, expand=expand, pat=pat)\n else:\n meta = (self._series.name, object)\n return self._function_map(\"split\", pat=pat, n=n, expand=expand, meta=meta)\n\n @derived_from(pd.core.strings.StringMethods)\n def cat(self, others=None, sep=None, na_rep=None):\n from .core import Index, Series\n\n if others is None:\n raise NotImplementedError(\"x.str.cat() with `others == None`\")\n\n valid_types = (Series, Index, pd.Series, pd.Index)\n if isinstance(others, valid_types):\n others = [others]\n elif not all(isinstance(a, valid_types) for a in others):\n raise TypeError(\"others must be Series/Index\")\n\n return self._series.map_partitions(\n str_cat, *others, sep=sep, na_rep=na_rep, meta=self._series._meta\n )\n\n @derived_from(pd.core.strings.StringMethods)\n def extractall(self, pat, flags=0):\n return self._series.map_partitions(\n str_extractall, pat, flags, token=\"str-extractall\"\n )\n\n def __getitem__(self, index):\n return self._series.map_partitions(str_get, index, meta=self._series._meta)\n\n\ndef str_extractall(series, pat, flags):\n return series.str.extractall(pat, flags=flags)\n\n\ndef str_get(series, index):\n \"\"\" Implements series.str[index] \"\"\"\n return series.str[index]\n\n\ndef str_cat(self, *others, **kwargs):\n return self.str.cat(others=others, **kwargs)\n", "path": "dask/dataframe/accessor.py"}]}
| 2,310 | 176 |
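Editor's note, not part of the dataset row above: with the patch, `str.cat()` without `others` becomes a reduction to a single Python string, matching pandas. A small sketch with made-up data:

```python
import dask.dataframe as dd
import pandas as pd

s = pd.Series(["a", "b", "c", "d"])
ds = dd.from_pandas(s, npartitions=2)

joined = ds.str.cat(sep="-")   # lazy scalar after the fix (previously NotImplementedError)
print(joined.compute())        # expected: "a-b-c-d"
```

Each partition is concatenated first and the per-partition results are then joined with the same separator, which is why the patch passes the same callable as both `chunk` and `aggregate`.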
gh_patches_debug_43 | rasdani/github-patches | git_diff | python-discord__site-268 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ugly prefix on all ID links.
Currently, all the headers that are created by the wiki will have IDs that are prefixed with `wiki-toc`. As such, when you want to link a header, the link will look something like https://pythondiscord.com/pages/contributing/site/#wiki-toc-development-environment.
It would be better if this simply said `#development-environment`, so let's change that.
</issue>
<code>
[start of pydis_site/__init__.py]
[end of pydis_site/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pydis_site/__init__.py b/pydis_site/__init__.py
--- a/pydis_site/__init__.py
+++ b/pydis_site/__init__.py
@@ -0,0 +1,4 @@
+from wiki.plugins.macros.mdx import toc
+
+# Remove the toc header prefix. There's no option for this, so we gotta monkey patch it.
+toc.HEADER_ID_PREFIX = ''
|
{"golden_diff": "diff --git a/pydis_site/__init__.py b/pydis_site/__init__.py\n--- a/pydis_site/__init__.py\n+++ b/pydis_site/__init__.py\n@@ -0,0 +1,4 @@\n+from wiki.plugins.macros.mdx import toc\n+\n+# Remove the toc header prefix. There's no option for this, so we gotta monkey patch it.\n+toc.HEADER_ID_PREFIX = ''\n", "issue": "Ugly prefix on all ID links.\nCurrently, all the headers that are created by the wiki will have id's that are prefixed with `wiki-toc`. As such, when you want to link a header, the link will look something like https://pythondiscord.com/pages/contributing/site/#wiki-toc-development-environment.\r\n\r\nIt would be better if this simply said `#development-environment`, so let's change that.\n", "before_files": [{"content": "", "path": "pydis_site/__init__.py"}]}
| 628 | 91 |
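Editor's note, not part of the dataset row above: the whole fix is the four-line monkey patch added to `pydis_site/__init__.py`. A sketch of the intended effect; the page slug in the comments is only the example from the issue.

```python
# pydis_site/__init__.py (as added by the patch)
from wiki.plugins.macros.mdx import toc

toc.HEADER_ID_PREFIX = ""  # runs at package import, before django-wiki renders any page

# Resulting anchors (illustrative):
#   before: .../pages/contributing/site/#wiki-toc-development-environment
#   after:  .../pages/contributing/site/#development-environment
```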
gh_patches_debug_36586 | rasdani/github-patches | git_diff | ESMCI__cime-1055 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support some interface to the shell in our XML
Users like @amametjanov are asking for access to the shell within our XML files. Something like:
```
<env name="NETCDF_PATH">$SHELL{which nc-config | xargs dirname | xargs dirname}</env>
```
</issue>
<code>
[start of utils/python/CIME/XML/generic_xml.py]
1 """
2 Common interface to XML files, this is an abstract class and is expected to
3 be used by other XML interface modules and not directly.
4 """
5 from CIME.XML.standard_module_setup import *
6 from distutils.spawn import find_executable
7 from xml.dom import minidom
8 from CIME.utils import expect, get_cime_root
9
10 logger = logging.getLogger(__name__)
11
12 class GenericXML(object):
13
14 def __init__(self, infile=None, schema=None):
15 """
16 Initialize an object
17 """
18
19 logger.debug("Initializing %s" , infile)
20 self.tree = None
21
22 if infile == None:
23 # if file is not defined just return
24 self.filename = None
25 return
26
27 if os.path.isfile(infile) and os.access(infile, os.R_OK):
28 # If file is defined and exists, read it
29 self.filename = infile
30 self.read(infile, schema)
31 else:
32 # if file does not exist create a root xml element
33 # and set it's id to file
34
35 logger.debug("File %s does not exists." , infile)
36 expect("$" not in infile,"File path not fully resolved %s"%infile)
37
38 self.filename = infile
39 root = ET.Element("xml")
40 self.root = ET.SubElement(root, "file")
41 self.root.set("id", os.path.basename(infile))
42 self.tree = ET.ElementTree(root)
43
44 def read(self, infile, schema=None):
45 """
46 Read and parse an xml file into the object
47 """
48 logger.debug("read: " + infile)
49 if self.tree:
50 self.root.append(ET.parse(infile).getroot())
51 else:
52 self.tree = ET.parse(infile)
53 self.root = self.tree.getroot()
54
55 if schema is not None and self.get_version() > 1.0:
56 self.validate_xml_file(infile, schema)
57
58 logger.debug("File version is %s"%str(self.get_version()))
59
60 def get_version(self):
61 version = self.root.get("version")
62 version = 1.0 if version is None else float(version)
63 return version
64
65 def write(self, outfile=None):
66 """
67 Write an xml file from data in self
68 """
69 if outfile is None:
70 outfile = self.filename
71
72 logger.debug("write: " + outfile)
73
74 xmlstr = self.get_raw_record()
75
76 # xmllint provides a better format option for the output file
77 xmllint = find_executable("xmllint")
78 if xmllint is not None:
79 run_cmd_no_fail("%s --format --output %s -"%(xmllint,outfile), input_str=xmlstr)
80 else:
81 doc = minidom.parseString(xmlstr)
82 with open(outfile,'w') as xmlout:
83 doc.writexml(xmlout,addindent=' ')
84
85 def get_node(self, nodename, attributes=None, root=None, xpath=None):
86 """
87 Get an xml element matching nodename with optional attributes.
88
89 Error unless exactly one match.
90 """
91
92 nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)
93
94 expect(len(nodes) == 1, "Incorrect number of matches, %d, for nodename '%s' and attrs '%s' in file '%s'" %
95 (len(nodes), nodename, attributes, self.filename))
96 return nodes[0]
97
98 def get_optional_node(self, nodename, attributes=None, root=None, xpath=None):
99 """
100 Get an xml element matching nodename with optional attributes.
101
102 Return None if no match.
103 """
104 nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)
105
106 expect(len(nodes) <= 1, "Multiple matches for nodename '%s' and attrs '%s' in file '%s'" %
107 (nodename, attributes, self.filename))
108 return nodes[0] if nodes else None
109
110 def get_nodes(self, nodename, attributes=None, root=None, xpath=None):
111
112 logger.debug("(get_nodes) Input values: %s , %s , %s , %s , %s" , self.__class__.__name__ , nodename , attributes , root , xpath)
113
114 if root is None:
115 root = self.root
116 nodes = []
117
118 expect(attributes is None or xpath is None,
119 " Arguments attributes and xpath are exclusive")
120 if xpath is None:
121 xpath = ".//"+nodename
122
123 if attributes:
124 # xml.etree has limited support for xpath and does not allow more than
125 # one attribute in an xpath query so we query seperately for each attribute
126 # and create a result with the intersection of those lists
127
128 for key, value in attributes.iteritems():
129 if value is not None:
130 expect(isinstance(value, basestring),
131 " Bad value passed for key %s"%key)
132 xpath = ".//%s[@%s=\'%s\']" % (nodename, key, value)
133 logger.debug("xpath is %s"%xpath)
134
135 try:
136 newnodes = root.findall(xpath)
137 except Exception as e:
138 expect(False, "Bad xpath search term '%s', error: %s" % (xpath, e))
139
140 if not nodes:
141 nodes = newnodes
142 else:
143 for node in nodes[:]:
144 if node not in newnodes:
145 nodes.remove(node)
146 if not nodes:
147 return []
148
149 else:
150 logger.debug("xpath: %s" , xpath)
151 nodes = root.findall(xpath)
152
153 logger.debug("Returning %s nodes (%s)" , len(nodes), nodes)
154
155 return nodes
156
157 def add_child(self, node, root=None):
158 """
159 Add element node to self at root
160 """
161 if root is None:
162 root = self.root
163 self.root.append(node)
164
165 def get_value(self, item, attribute=None, resolved=True, subgroup=None): # pylint: disable=unused-argument
166 """
167 get_value is expected to be defined by the derived classes, if you get here
168 the value was not found in the class.
169 """
170 logger.debug("Get Value for " + item)
171 return None
172
173 def get_values(self, vid, attribute=None, resolved=True, subgroup=None):# pylint: disable=unused-argument
174 logger.debug("Get Values for " + vid)
175 return []
176
177 def set_value(self, vid, value, subgroup=None, ignore_type=True): # pylint: disable=unused-argument
178 """
179 ignore_type is not used in this flavor
180 """
181 valnodes = self.get_nodes(vid)
182 if valnodes:
183 for node in valnodes:
184 node.text = value
185
186 def get_resolved_value(self, raw_value):
187 """
188 A value in the xml file may contain references to other xml
189 variables or to environment variables. These are refered to in
190 the perl style with $name and $ENV{name}.
191
192 >>> obj = GenericXML()
193 >>> os.environ["FOO"] = "BAR"
194 >>> os.environ["BAZ"] = "BARF"
195 >>> obj.get_resolved_value("one $ENV{FOO} two $ENV{BAZ} three")
196 'one BAR two BARF three'
197 >>> obj.get_resolved_value("2 + 3 - 1")
198 '4'
199 >>> obj.get_resolved_value("0001-01-01")
200 '0001-01-01'
201 """
202 logger.debug("raw_value %s" % raw_value)
203 reference_re = re.compile(r'\${?(\w+)}?')
204 env_ref_re = re.compile(r'\$ENV\{(\w+)\}')
205 math_re = re.compile(r'\s[+-/*]\s')
206 item_data = raw_value
207
208 if item_data is None:
209 return None
210
211 if type(item_data) is not str:
212 return item_data
213
214 for m in env_ref_re.finditer(item_data):
215 logger.debug("look for %s in env" % item_data)
216 env_var = m.groups()[0]
217 expect(env_var in os.environ, "Undefined env var '%s'" % env_var)
218 item_data = item_data.replace(m.group(), os.environ[env_var])
219
220 for m in reference_re.finditer(item_data):
221 var = m.groups()[0]
222 logger.debug("find: %s" % var)
223 ref = self.get_value(var)
224 if ref is not None:
225 logger.debug("resolve: " + str(ref))
226 item_data = item_data.replace(m.group(), self.get_resolved_value(str(ref)))
227 elif var == "CIMEROOT":
228 cimeroot = get_cime_root()
229 item_data = item_data.replace(m.group(), cimeroot)
230 elif var == "SRCROOT":
231 srcroot = os.path.join(get_cime_root(),"..")
232 item_data = item_data.replace(m.group(), srcroot)
233 elif var in os.environ:
234 # this is a list of suppressed warnings (things normally expected to be resolved in env)
235 if var not in ("USER",):
236 logging.debug("Resolved from env: " + var)
237 item_data = item_data.replace(m.group(), os.environ[var])
238 if math_re.search(item_data):
239 try:
240 tmp = eval(item_data)
241 except:
242 tmp = item_data
243 item_data = str(tmp)
244
245 return item_data
246
247 def add_sub_node(self, node, subnode_name, subnode_text):
248 expect(node is not None," Bad value passed")
249 subnode = ET.Element(subnode_name)
250 subnode.text = subnode_text
251 node.append(subnode)
252 return node
253
254 def validate_xml_file(self, filename, schema):
255 """
256 validate an XML file against a provided schema file using pylint
257 """
258 expect(os.path.isfile(filename),"xml file not found %s"%filename)
259 expect(os.path.isfile(schema),"schema file not found %s"%schema)
260 xmllint = find_executable("xmllint")
261 if xmllint is not None:
262 logger.debug("Checking file %s against schema %s"%(filename, schema))
263 run_cmd_no_fail("%s --noout --schema %s %s"%(xmllint, schema, filename))
264 else:
265 logger.warn("xmllint not found, could not validate file %s"%filename)
266
267 def get_element_text(self, element_name, attributes=None, root=None, xpath=None):
268 element_node = self.get_optional_node(element_name, attributes, root, xpath)
269 if element_node is not None:
270 return element_node.text
271 return None
272
273 def set_element_text(self, element_name, new_text, attributes=None, root=None, xpath=None):
274 element_node = self.get_optional_node(element_name, attributes, root, xpath)
275 if element_node is not None:
276 element_node.text = new_text
277 return new_text
278 return None
279
280 def get_raw_record(self, root=None):
281 if root is None:
282 root = self.root
283 try:
284 xmlstr = ET.tostring(root)
285 except ET.ParseError as e:
286 ET.dump(root)
287 expect(False, "Could not write file %s, xml formatting error '%s'" % (self.filename, e))
288 return xmlstr
289
290
291
[end of utils/python/CIME/XML/generic_xml.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/utils/python/CIME/XML/generic_xml.py b/utils/python/CIME/XML/generic_xml.py
--- a/utils/python/CIME/XML/generic_xml.py
+++ b/utils/python/CIME/XML/generic_xml.py
@@ -7,6 +7,8 @@
from xml.dom import minidom
from CIME.utils import expect, get_cime_root
+import getpass
+
logger = logging.getLogger(__name__)
class GenericXML(object):
@@ -198,10 +200,13 @@
'4'
>>> obj.get_resolved_value("0001-01-01")
'0001-01-01'
+ >>> obj.get_resolved_value("$SHELL{echo hi}")
+ 'hi'
"""
logger.debug("raw_value %s" % raw_value)
reference_re = re.compile(r'\${?(\w+)}?')
env_ref_re = re.compile(r'\$ENV\{(\w+)\}')
+ shell_ref_re = re.compile(r'\$SHELL\{([^}]+)\}')
math_re = re.compile(r'\s[+-/*]\s')
item_data = raw_value
@@ -217,6 +222,11 @@
expect(env_var in os.environ, "Undefined env var '%s'" % env_var)
item_data = item_data.replace(m.group(), os.environ[env_var])
+ for s in shell_ref_re.finditer(item_data):
+ logger.debug("execute %s in shell" % item_data)
+ shell_cmd = s.groups()[0]
+ item_data = item_data.replace(s.group(), run_cmd_no_fail(shell_cmd))
+
for m in reference_re.finditer(item_data):
var = m.groups()[0]
logger.debug("find: %s" % var)
@@ -230,11 +240,9 @@
elif var == "SRCROOT":
srcroot = os.path.join(get_cime_root(),"..")
item_data = item_data.replace(m.group(), srcroot)
- elif var in os.environ:
- # this is a list of suppressed warnings (things normally expected to be resolved in env)
- if var not in ("USER",):
- logging.debug("Resolved from env: " + var)
- item_data = item_data.replace(m.group(), os.environ[var])
+ elif var == "USER":
+ item_data = item_data.replace(m.group(), getpass.getuser())
+
if math_re.search(item_data):
try:
tmp = eval(item_data)
|
{"golden_diff": "diff --git a/utils/python/CIME/XML/generic_xml.py b/utils/python/CIME/XML/generic_xml.py\n--- a/utils/python/CIME/XML/generic_xml.py\n+++ b/utils/python/CIME/XML/generic_xml.py\n@@ -7,6 +7,8 @@\n from xml.dom import minidom\n from CIME.utils import expect, get_cime_root\n \n+import getpass\n+\n logger = logging.getLogger(__name__)\n \n class GenericXML(object):\n@@ -198,10 +200,13 @@\n '4'\n >>> obj.get_resolved_value(\"0001-01-01\")\n '0001-01-01'\n+ >>> obj.get_resolved_value(\"$SHELL{echo hi}\")\n+ 'hi'\n \"\"\"\n logger.debug(\"raw_value %s\" % raw_value)\n reference_re = re.compile(r'\\${?(\\w+)}?')\n env_ref_re = re.compile(r'\\$ENV\\{(\\w+)\\}')\n+ shell_ref_re = re.compile(r'\\$SHELL\\{([^}]+)\\}')\n math_re = re.compile(r'\\s[+-/*]\\s')\n item_data = raw_value\n \n@@ -217,6 +222,11 @@\n expect(env_var in os.environ, \"Undefined env var '%s'\" % env_var)\n item_data = item_data.replace(m.group(), os.environ[env_var])\n \n+ for s in shell_ref_re.finditer(item_data):\n+ logger.debug(\"execute %s in shell\" % item_data)\n+ shell_cmd = s.groups()[0]\n+ item_data = item_data.replace(s.group(), run_cmd_no_fail(shell_cmd))\n+\n for m in reference_re.finditer(item_data):\n var = m.groups()[0]\n logger.debug(\"find: %s\" % var)\n@@ -230,11 +240,9 @@\n elif var == \"SRCROOT\":\n srcroot = os.path.join(get_cime_root(),\"..\")\n item_data = item_data.replace(m.group(), srcroot)\n- elif var in os.environ:\n- # this is a list of suppressed warnings (things normally expected to be resolved in env)\n- if var not in (\"USER\",):\n- logging.debug(\"Resolved from env: \" + var)\n- item_data = item_data.replace(m.group(), os.environ[var])\n+ elif var == \"USER\":\n+ item_data = item_data.replace(m.group(), getpass.getuser())\n+\n if math_re.search(item_data):\n try:\n tmp = eval(item_data)\n", "issue": "Support some interface to the shell in our XML\nUsers like @amametjanov are asking for access to the shell within our XML files. 
Something like:\r\n\r\n```\r\n<env name=\"NETCDF_PATH\">$SHELL{which nc-config | xargs dirname | xargs dirname}</env>\r\n```\n", "before_files": [{"content": "\"\"\"\nCommon interface to XML files, this is an abstract class and is expected to\nbe used by other XML interface modules and not directly.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom distutils.spawn import find_executable\nfrom xml.dom import minidom\nfrom CIME.utils import expect, get_cime_root\n\nlogger = logging.getLogger(__name__)\n\nclass GenericXML(object):\n\n def __init__(self, infile=None, schema=None):\n \"\"\"\n Initialize an object\n \"\"\"\n\n logger.debug(\"Initializing %s\" , infile)\n self.tree = None\n\n if infile == None:\n # if file is not defined just return\n self.filename = None\n return\n\n if os.path.isfile(infile) and os.access(infile, os.R_OK):\n # If file is defined and exists, read it\n self.filename = infile\n self.read(infile, schema)\n else:\n # if file does not exist create a root xml element\n # and set it's id to file\n\n logger.debug(\"File %s does not exists.\" , infile)\n expect(\"$\" not in infile,\"File path not fully resolved %s\"%infile)\n\n self.filename = infile\n root = ET.Element(\"xml\")\n self.root = ET.SubElement(root, \"file\")\n self.root.set(\"id\", os.path.basename(infile))\n self.tree = ET.ElementTree(root)\n\n def read(self, infile, schema=None):\n \"\"\"\n Read and parse an xml file into the object\n \"\"\"\n logger.debug(\"read: \" + infile)\n if self.tree:\n self.root.append(ET.parse(infile).getroot())\n else:\n self.tree = ET.parse(infile)\n self.root = self.tree.getroot()\n\n if schema is not None and self.get_version() > 1.0:\n self.validate_xml_file(infile, schema)\n\n logger.debug(\"File version is %s\"%str(self.get_version()))\n\n def get_version(self):\n version = self.root.get(\"version\")\n version = 1.0 if version is None else float(version)\n return version\n\n def write(self, outfile=None):\n \"\"\"\n Write an xml file from data in self\n \"\"\"\n if outfile is None:\n outfile = self.filename\n\n logger.debug(\"write: \" + outfile)\n\n xmlstr = self.get_raw_record()\n\n # xmllint provides a better format option for the output file\n xmllint = find_executable(\"xmllint\")\n if xmllint is not None:\n run_cmd_no_fail(\"%s --format --output %s -\"%(xmllint,outfile), input_str=xmlstr)\n else:\n doc = minidom.parseString(xmlstr)\n with open(outfile,'w') as xmlout:\n doc.writexml(xmlout,addindent=' ')\n\n def get_node(self, nodename, attributes=None, root=None, xpath=None):\n \"\"\"\n Get an xml element matching nodename with optional attributes.\n\n Error unless exactly one match.\n \"\"\"\n\n nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)\n\n expect(len(nodes) == 1, \"Incorrect number of matches, %d, for nodename '%s' and attrs '%s' in file '%s'\" %\n (len(nodes), nodename, attributes, self.filename))\n return nodes[0]\n\n def get_optional_node(self, nodename, attributes=None, root=None, xpath=None):\n \"\"\"\n Get an xml element matching nodename with optional attributes.\n\n Return None if no match.\n \"\"\"\n nodes = self.get_nodes(nodename, attributes=attributes, root=root, xpath=xpath)\n\n expect(len(nodes) <= 1, \"Multiple matches for nodename '%s' and attrs '%s' in file '%s'\" %\n (nodename, attributes, self.filename))\n return nodes[0] if nodes else None\n\n def get_nodes(self, nodename, attributes=None, root=None, xpath=None):\n\n logger.debug(\"(get_nodes) Input values: %s , %s , %s , %s , %s\" , 
self.__class__.__name__ , nodename , attributes , root , xpath)\n\n if root is None:\n root = self.root\n nodes = []\n\n expect(attributes is None or xpath is None,\n \" Arguments attributes and xpath are exclusive\")\n if xpath is None:\n xpath = \".//\"+nodename\n\n if attributes:\n # xml.etree has limited support for xpath and does not allow more than\n # one attribute in an xpath query so we query seperately for each attribute\n # and create a result with the intersection of those lists\n\n for key, value in attributes.iteritems():\n if value is not None:\n expect(isinstance(value, basestring),\n \" Bad value passed for key %s\"%key)\n xpath = \".//%s[@%s=\\'%s\\']\" % (nodename, key, value)\n logger.debug(\"xpath is %s\"%xpath)\n\n try:\n newnodes = root.findall(xpath)\n except Exception as e:\n expect(False, \"Bad xpath search term '%s', error: %s\" % (xpath, e))\n\n if not nodes:\n nodes = newnodes\n else:\n for node in nodes[:]:\n if node not in newnodes:\n nodes.remove(node)\n if not nodes:\n return []\n\n else:\n logger.debug(\"xpath: %s\" , xpath)\n nodes = root.findall(xpath)\n\n logger.debug(\"Returning %s nodes (%s)\" , len(nodes), nodes)\n\n return nodes\n\n def add_child(self, node, root=None):\n \"\"\"\n Add element node to self at root\n \"\"\"\n if root is None:\n root = self.root\n self.root.append(node)\n\n def get_value(self, item, attribute=None, resolved=True, subgroup=None): # pylint: disable=unused-argument\n \"\"\"\n get_value is expected to be defined by the derived classes, if you get here\n the value was not found in the class.\n \"\"\"\n logger.debug(\"Get Value for \" + item)\n return None\n\n def get_values(self, vid, attribute=None, resolved=True, subgroup=None):# pylint: disable=unused-argument\n logger.debug(\"Get Values for \" + vid)\n return []\n\n def set_value(self, vid, value, subgroup=None, ignore_type=True): # pylint: disable=unused-argument\n \"\"\"\n ignore_type is not used in this flavor\n \"\"\"\n valnodes = self.get_nodes(vid)\n if valnodes:\n for node in valnodes:\n node.text = value\n\n def get_resolved_value(self, raw_value):\n \"\"\"\n A value in the xml file may contain references to other xml\n variables or to environment variables. 
These are refered to in\n the perl style with $name and $ENV{name}.\n\n >>> obj = GenericXML()\n >>> os.environ[\"FOO\"] = \"BAR\"\n >>> os.environ[\"BAZ\"] = \"BARF\"\n >>> obj.get_resolved_value(\"one $ENV{FOO} two $ENV{BAZ} three\")\n 'one BAR two BARF three'\n >>> obj.get_resolved_value(\"2 + 3 - 1\")\n '4'\n >>> obj.get_resolved_value(\"0001-01-01\")\n '0001-01-01'\n \"\"\"\n logger.debug(\"raw_value %s\" % raw_value)\n reference_re = re.compile(r'\\${?(\\w+)}?')\n env_ref_re = re.compile(r'\\$ENV\\{(\\w+)\\}')\n math_re = re.compile(r'\\s[+-/*]\\s')\n item_data = raw_value\n\n if item_data is None:\n return None\n\n if type(item_data) is not str:\n return item_data\n\n for m in env_ref_re.finditer(item_data):\n logger.debug(\"look for %s in env\" % item_data)\n env_var = m.groups()[0]\n expect(env_var in os.environ, \"Undefined env var '%s'\" % env_var)\n item_data = item_data.replace(m.group(), os.environ[env_var])\n\n for m in reference_re.finditer(item_data):\n var = m.groups()[0]\n logger.debug(\"find: %s\" % var)\n ref = self.get_value(var)\n if ref is not None:\n logger.debug(\"resolve: \" + str(ref))\n item_data = item_data.replace(m.group(), self.get_resolved_value(str(ref)))\n elif var == \"CIMEROOT\":\n cimeroot = get_cime_root()\n item_data = item_data.replace(m.group(), cimeroot)\n elif var == \"SRCROOT\":\n srcroot = os.path.join(get_cime_root(),\"..\")\n item_data = item_data.replace(m.group(), srcroot)\n elif var in os.environ:\n # this is a list of suppressed warnings (things normally expected to be resolved in env)\n if var not in (\"USER\",):\n logging.debug(\"Resolved from env: \" + var)\n item_data = item_data.replace(m.group(), os.environ[var])\n if math_re.search(item_data):\n try:\n tmp = eval(item_data)\n except:\n tmp = item_data\n item_data = str(tmp)\n\n return item_data\n\n def add_sub_node(self, node, subnode_name, subnode_text):\n expect(node is not None,\" Bad value passed\")\n subnode = ET.Element(subnode_name)\n subnode.text = subnode_text\n node.append(subnode)\n return node\n\n def validate_xml_file(self, filename, schema):\n \"\"\"\n validate an XML file against a provided schema file using pylint\n \"\"\"\n expect(os.path.isfile(filename),\"xml file not found %s\"%filename)\n expect(os.path.isfile(schema),\"schema file not found %s\"%schema)\n xmllint = find_executable(\"xmllint\")\n if xmllint is not None:\n logger.debug(\"Checking file %s against schema %s\"%(filename, schema))\n run_cmd_no_fail(\"%s --noout --schema %s %s\"%(xmllint, schema, filename))\n else:\n logger.warn(\"xmllint not found, could not validate file %s\"%filename)\n\n def get_element_text(self, element_name, attributes=None, root=None, xpath=None):\n element_node = self.get_optional_node(element_name, attributes, root, xpath)\n if element_node is not None:\n return element_node.text\n return None\n\n def set_element_text(self, element_name, new_text, attributes=None, root=None, xpath=None):\n element_node = self.get_optional_node(element_name, attributes, root, xpath)\n if element_node is not None:\n element_node.text = new_text\n return new_text\n return None\n\n def get_raw_record(self, root=None):\n if root is None:\n root = self.root\n try:\n xmlstr = ET.tostring(root)\n except ET.ParseError as e:\n ET.dump(root)\n expect(False, \"Could not write file %s, xml formatting error '%s'\" % (self.filename, e))\n return xmlstr\n\n\n", "path": "utils/python/CIME/XML/generic_xml.py"}]}
| 3,802 | 558 |
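For context on the patch above, the `$SHELL{...}` expansion it adds can be tried outside CIME with a minimal sketch like the following (it substitutes `subprocess.check_output` for CIME's `run_cmd_no_fail`; the function name is invented for illustration):

```python
import re
import subprocess

shell_ref_re = re.compile(r"\$SHELL\{([^}]+)\}")

def expand_shell_refs(text):
    # replace each $SHELL{cmd} with the stripped stdout of cmd
    for m in shell_ref_re.finditer(text):
        out = subprocess.check_output(m.group(1), shell=True, text=True).strip()
        text = text.replace(m.group(), out)
    return text

print(expand_shell_refs("$SHELL{echo hi}"))  # prints: hi
```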
gh_patches_debug_38107
|
rasdani/github-patches
|
git_diff
|
pytorch__text-139
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unintuitive behavior of Iterator when sort is False
Currently, the following line is executed regardless of `sort` value.
https://github.com/pytorch/text/blob/2980f1bc39ba6af332c5c2783da8bee109796d4c/torchtext/data/iterator.py#L162
It could result in counter-intuitive behavior when `sort` is `False`, since one would probably expect that the order of the data is kept intact when `sort` is `False`.
I think this should be executed only when `sort` is `True`.
Is it by design, or a bug?
</issue>
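A tiny self-contained sketch of the surprise described above (plain Python lists standing in for examples; no torchtext import): even with `sort=False` and `shuffle=False`, every minibatch comes back reversed.

```python
def batches(examples, batch_size):
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]

examples = list(range(6))      # dataset already in the desired order
for minibatch in batches(examples, 3):
    minibatch.reverse()        # what Iterator.__iter__ currently does unconditionally
    print(minibatch)           # [2, 1, 0] then [5, 4, 3]
```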
<code>
[start of torchtext/data/iterator.py]
1 from __future__ import division
2
3 import math
4 import random
5 from contextlib import contextmanager
6 from copy import deepcopy
7
8 from .batch import Batch
9 from .dataset import Dataset
10
11
12 class RandomShuffler(object):
13 """Use random functions while keeping track of the random state to make it
14 reproducible and deterministic."""
15
16 def __init__(self, random_state=None):
17 self._random_state = random_state
18 if self._random_state is None:
19 self._random_state = random.getstate()
20
21 @contextmanager
22 def use_internal_state(self):
23 """Use a specific RNG state."""
24 old_state = random.getstate()
25 random.setstate(self._random_state)
26 yield
27 self._random_state = random.getstate()
28 random.setstate(old_state)
29
30 @property
31 def random_state(self):
32 return deepcopy(self._random_state)
33
34 @random_state.setter
35 def random_state(self, s):
36 self._random_state = s
37
38 def __call__(self, data):
39 """Shuffle and return a new list."""
40 with self.use_internal_state():
41 return random.sample(data, len(data))
42
43
44 class Iterator(object):
45 """Defines an iterator that loads batches of data from a Dataset.
46
47 Attributes:
48 dataset: The Dataset object to load Examples from.
49 batch_size: Batch size.
50 batch_size_fn: Function of three arguments (new example to add, current
51 count of examples in the batch, and current effective batch size)
52 that returns the new effective batch size resulting from adding
53 that example to a batch. This is useful for dynamic batching, where
54 this function would add to the current effective batch size the
55 number of tokens in the new example.
56 sort_key: A key to use for sorting examples in order to batch together
57 examples with similar lengths and minimize padding. The sort_key
58 provided to the Iterator constructor overrides the sort_key
59 attribute of the Dataset, or defers to it if None.
60 train: Whether the iterator represents a train set.
61 repeat: Whether to repeat the iterator for multiple epochs.
62 shuffle: Whether to shuffle examples between epochs.
63 sort: Whether to sort examples according to self.sort_key.
64 Note that repeat, shuffle, and sort default to train, train, and
65 (not train).
66 device: Device to create batches on. Use -1 for CPU and None for the
67 currently active GPU device.
68 """
69
70 def __init__(self, dataset, batch_size, sort_key=None, device=None,
71 batch_size_fn=lambda new, count, sofar: count, train=True,
72 repeat=None, shuffle=None, sort=None):
73 self.batch_size, self.train, self.dataset = batch_size, train, dataset
74 self.batch_size_fn = batch_size_fn
75 self.iterations = 0
76 self.repeat = train if repeat is None else repeat
77 self.shuffle = train if shuffle is None else shuffle
78 self.sort = not train if sort is None else sort
79 if sort_key is None:
80 self.sort_key = dataset.sort_key
81 else:
82 self.sort_key = sort_key
83 self.device = device
84
85 self.random_shuffler = RandomShuffler()
86
87 # For state loading/saving only
88 self._iterations_this_epoch = 0
89 self._random_state_this_epoch = None
90 self._restored_from_state = False
91
92 @classmethod
93 def splits(cls, datasets, batch_sizes=None, **kwargs):
94 """Create Iterator objects for multiple splits of a dataset.
95
96 Arguments:
97 datasets: Tuple of Dataset objects corresponding to the splits. The
98 first such object should be the train set.
99 batch_sizes: Tuple of batch sizes to use for the different splits,
100 or None to use the same batch_size for all splits.
101 Remaining keyword arguments: Passed to the constructor of the
102 iterator class being used.
103 """
104 if batch_sizes is None:
105 batch_sizes = [kwargs.pop('batch_size')] * len(datasets)
106 ret = []
107 for i in range(len(datasets)):
108 train = i == 0
109 ret.append(cls(
110 datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))
111 return tuple(ret)
112
113 def data(self):
114 """Return the examples in the dataset in order, sorted, or shuffled."""
115 if self.sort:
116 xs = sorted(self.dataset, key=self.sort_key)
117 elif self.shuffle:
118 xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]
119 else:
120 xs = self.dataset
121 return xs
122
123 def init_epoch(self):
124 """Set up the batch generator for a new epoch."""
125
126 if self._restored_from_state:
127 self.random_shuffler.random_state = self._random_state_this_epoch
128 else:
129 self._random_state_this_epoch = self.random_shuffler.random_state
130
131 self.create_batches()
132
133 if self._restored_from_state:
134 self._restored_from_state = False
135 else:
136 self._iterations_this_epoch = 0
137
138 if not self.repeat:
139 self.iterations = 0
140
141 def create_batches(self):
142 self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)
143
144 @property
145 def epoch(self):
146 return self.iterations / len(self)
147
148 def __len__(self):
149 return math.ceil(len(self.dataset) / self.batch_size)
150
151 def __iter__(self):
152 while True:
153 self.init_epoch()
154 for idx, minibatch in enumerate(self.batches):
155 # fast-forward if loaded from state
156 if self._iterations_this_epoch > idx:
157 continue
158 self.iterations += 1
159 self._iterations_this_epoch += 1
160 # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by
161 # decreasing order, which requires reversing relative to typical sort keys
162 minibatch.reverse()
163 yield Batch(minibatch, self.dataset, self.device,
164 self.train)
165 if not self.repeat:
166 raise StopIteration
167
168 def state_dict(self):
169 return {
170 "iterations": self.iterations,
171 "iterations_this_epoch": self._iterations_this_epoch,
172 "random_state_this_epoch": self._random_state_this_epoch}
173
174 def load_state_dict(self, state_dict):
175 self.iterations = state_dict["iterations"]
176 self._iterations_this_epoch = state_dict["iterations_this_epoch"]
177 self._random_state_this_epoch = state_dict["random_state_this_epoch"]
178 self._restored_from_state = True
179
180
181 class BPTTIterator(Iterator):
182 """Defines an iterator for language modeling tasks that use BPTT.
183
184 Provides contiguous streams of examples together with targets that are
185 one timestep further forward, for language modeling training with
186 backpropagation through time (BPTT). Expects a Dataset with a single
187 example and a single field called 'text' and produces Batches with text and
188 target attributes.
189
190 Attributes:
191 dataset: The Dataset object to load Examples from.
192 batch_size: Batch size.
193 bptt_len: Length of sequences for backpropagation through time.
194 sort_key: A key to use for sorting examples in order to batch together
195 examples with similar lengths and minimize padding. The sort_key
196 provided to the Iterator constructor overrides the sort_key
197 attribute of the Dataset, or defers to it if None.
198 train: Whether the iterator represents a train set.
199 repeat: Whether to repeat the iterator for multiple epochs.
200 shuffle: Whether to shuffle examples between epochs.
201 sort: Whether to sort examples according to self.sort_key.
202 Note that repeat, shuffle, and sort default to train, train, and
203 (not train).
204 device: Device to create batches on. Use -1 for CPU and None for the
205 currently active GPU device.
206 """
207
208 def __init__(self, dataset, batch_size, bptt_len, **kwargs):
209 self.bptt_len = bptt_len
210 super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)
211
212 def __len__(self):
213 return math.ceil(len(self.dataset[0].text) /
214 (self.batch_size * self.bptt_len))
215
216 def __iter__(self):
217 text = self.dataset[0].text
218 TEXT = self.dataset.fields['text']
219 TEXT.eos_token = None
220 text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *
221 self.batch_size - len(text)))
222 data = TEXT.numericalize(
223 [text], device=self.device, train=self.train)
224 data = data.view(self.batch_size, -1).t().contiguous()
225 dataset = Dataset(examples=self.dataset.examples, fields=[
226 ('text', TEXT), ('target', TEXT)])
227 while True:
228 for i in range(0, len(self) * self.bptt_len, self.bptt_len):
229 seq_len = min(self.bptt_len, len(data) - 1 - i)
230 yield Batch.fromvars(
231 dataset, self.batch_size, train=self.train,
232 text=data[i:i + seq_len],
233 target=data[i + 1:i + 1 + seq_len])
234 if not self.repeat:
235 raise StopIteration
236
237
238 class BucketIterator(Iterator):
239 """Defines an iterator that batches examples of similar lengths together.
240
241 Minimizes amount of padding needed while producing freshly shuffled
242 batches for each new epoch. See pool for the bucketing procedure used.
243 """
244
245 def create_batches(self):
246 if self.sort:
247 self.batches = batch(self.data(), self.batch_size,
248 self.batch_size_fn)
249 else:
250 self.batches = pool(self.data(), self.batch_size,
251 self.sort_key, self.batch_size_fn,
252 random_shuffler=self.random_shuffler)
253
254
255 def batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):
256 """Yield elements from data in chunks of batch_size."""
257 minibatch, size_so_far = [], 0
258 for ex in data:
259 minibatch.append(ex)
260 size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)
261 if size_so_far == batch_size:
262 yield minibatch
263 minibatch, size_so_far = [], 0
264 elif size_so_far > batch_size:
265 yield minibatch[:-1]
266 minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)
267 if minibatch:
268 yield minibatch
269
270
271 def pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,
272 random_shuffler=None):
273 """Sort within buckets, then batch, then shuffle batches.
274
275 Partitions data into chunks of size 100*batch_size, sorts examples within
276 each chunk using sort_key, then batch these examples and shuffle the
277 batches.
278 """
279 if random_shuffler is None:
280 random_shuffler = random.shuffle
281 for p in batch(data, batch_size * 100, batch_size_fn):
282 p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)
283 for b in random_shuffler(list(p_batch)):
284 yield b
285
[end of torchtext/data/iterator.py]
</code>
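If this torchtext version is installed, the module-level helpers above can be exercised directly. A quick sketch; note that `pool` is given an explicit list-returning shuffler here, since its fallback `random.shuffle` returns `None`:

```python
import random
from torchtext.data.iterator import batch, pool  # helpers defined above

print(list(batch(list(range(7)), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]

data = ["bb", "a", "dddd", "ccc", "eeeee", "f"]
shuffler = lambda xs: random.sample(xs, len(xs))  # must return the shuffled list
for b in pool(data, 2, key=len, random_shuffler=shuffler):
    print(b)  # batches of similar-length strings, in shuffled batch order
```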
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -63,19 +63,28 @@
sort: Whether to sort examples according to self.sort_key.
Note that repeat, shuffle, and sort default to train, train, and
(not train).
+ sort_within_batch: Whether to sort (in descending order according to
+ self.sort_key) within each batch. If None, defaults to self.sort.
+ If self.sort is True and this is False, the batch is left in the
+ original (ascending) sorted order.
device: Device to create batches on. Use -1 for CPU and None for the
currently active GPU device.
"""
def __init__(self, dataset, batch_size, sort_key=None, device=None,
batch_size_fn=lambda new, count, sofar: count, train=True,
- repeat=None, shuffle=None, sort=None):
+ repeat=None, shuffle=None, sort=None,
+ sort_within_batch=None):
self.batch_size, self.train, self.dataset = batch_size, train, dataset
self.batch_size_fn = batch_size_fn
self.iterations = 0
self.repeat = train if repeat is None else repeat
self.shuffle = train if shuffle is None else shuffle
self.sort = not train if sort is None else sort
+ if sort_within_batch is None:
+ self.sort_within_batch = self.sort
+ else:
+ self.sort_within_batch = sort_within_batch
if sort_key is None:
self.sort_key = dataset.sort_key
else:
@@ -157,9 +166,14 @@
continue
self.iterations += 1
self._iterations_this_epoch += 1
- # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by
- # decreasing order, which requires reversing relative to typical sort keys
- minibatch.reverse()
+ if self.sort_within_batch:
+ # NOTE: `rnn.pack_padded_sequence` requires that a minibatch
+ # be sorted by decreasing order, which requires reversing
+ # relative to typical sort keys
+ if self.sort:
+ minibatch.reverse()
+ else:
+ minibatch.sort(key=self.sort_key, reverse=True)
yield Batch(minibatch, self.dataset, self.device,
self.train)
if not self.repeat:
|
{"golden_diff": "diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py\n--- a/torchtext/data/iterator.py\n+++ b/torchtext/data/iterator.py\n@@ -63,19 +63,28 @@\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n+ sort_within_batch: Whether to sort (in descending order according to\n+ self.sort_key) within each batch. If None, defaults to self.sort.\n+ If self.sort is True and this is False, the batch is left in the\n+ original (ascending) sorted order.\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n \n def __init__(self, dataset, batch_size, sort_key=None, device=None,\n batch_size_fn=lambda new, count, sofar: count, train=True,\n- repeat=None, shuffle=None, sort=None):\n+ repeat=None, shuffle=None, sort=None,\n+ sort_within_batch=None):\n self.batch_size, self.train, self.dataset = batch_size, train, dataset\n self.batch_size_fn = batch_size_fn\n self.iterations = 0\n self.repeat = train if repeat is None else repeat\n self.shuffle = train if shuffle is None else shuffle\n self.sort = not train if sort is None else sort\n+ if sort_within_batch is None:\n+ self.sort_within_batch = self.sort\n+ else:\n+ self.sort_within_batch = sort_within_batch\n if sort_key is None:\n self.sort_key = dataset.sort_key\n else:\n@@ -157,9 +166,14 @@\n continue\n self.iterations += 1\n self._iterations_this_epoch += 1\n- # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by\n- # decreasing order, which requires reversing relative to typical sort keys\n- minibatch.reverse()\n+ if self.sort_within_batch:\n+ # NOTE: `rnn.pack_padded_sequence` requires that a minibatch\n+ # be sorted by decreasing order, which requires reversing\n+ # relative to typical sort keys\n+ if self.sort:\n+ minibatch.reverse()\n+ else:\n+ minibatch.sort(key=self.sort_key, reverse=True)\n yield Batch(minibatch, self.dataset, self.device,\n self.train)\n if not self.repeat:\n", "issue": "Unintuitive behavior of Iterator when sort is False\nCurrently, the following line is executed regardless of `sort` value.\r\nhttps://github.com/pytorch/text/blob/2980f1bc39ba6af332c5c2783da8bee109796d4c/torchtext/data/iterator.py#L162\r\n\r\nIt could result in a counter-intuitive behavior when `sort is False`, since one would probably expect that the order of data is kept intact when `sort` is `False`.\r\n\r\nI think this should be executed only when `sort` is `True`.\r\nIs it by design, or a bug?\n", "before_files": [{"content": "from __future__ import division\n\nimport math\nimport random\nfrom contextlib import contextmanager\nfrom copy import deepcopy\n\nfrom .batch import Batch\nfrom .dataset import Dataset\n\n\nclass RandomShuffler(object):\n \"\"\"Use random functions while keeping track of the random state to make it\n reproducible and deterministic.\"\"\"\n\n def __init__(self, random_state=None):\n self._random_state = random_state\n if self._random_state is None:\n self._random_state = random.getstate()\n\n @contextmanager\n def use_internal_state(self):\n \"\"\"Use a specific RNG state.\"\"\"\n old_state = random.getstate()\n random.setstate(self._random_state)\n yield\n self._random_state = random.getstate()\n random.setstate(old_state)\n\n @property\n def random_state(self):\n return deepcopy(self._random_state)\n\n @random_state.setter\n def random_state(self, s):\n self._random_state = s\n\n def __call__(self, data):\n \"\"\"Shuffle and return a new 
list.\"\"\"\n with self.use_internal_state():\n return random.sample(data, len(data))\n\n\nclass Iterator(object):\n \"\"\"Defines an iterator that loads batches of data from a Dataset.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n batch_size_fn: Function of three arguments (new example to add, current\n count of examples in the batch, and current effective batch size)\n that returns the new effective batch size resulting from adding\n that example to a batch. This is useful for dynamic batching, where\n this function would add to the current effective batch size the\n number of tokens in the new example.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, sort_key=None, device=None,\n batch_size_fn=lambda new, count, sofar: count, train=True,\n repeat=None, shuffle=None, sort=None):\n self.batch_size, self.train, self.dataset = batch_size, train, dataset\n self.batch_size_fn = batch_size_fn\n self.iterations = 0\n self.repeat = train if repeat is None else repeat\n self.shuffle = train if shuffle is None else shuffle\n self.sort = not train if sort is None else sort\n if sort_key is None:\n self.sort_key = dataset.sort_key\n else:\n self.sort_key = sort_key\n self.device = device\n\n self.random_shuffler = RandomShuffler()\n\n # For state loading/saving only\n self._iterations_this_epoch = 0\n self._random_state_this_epoch = None\n self._restored_from_state = False\n\n @classmethod\n def splits(cls, datasets, batch_sizes=None, **kwargs):\n \"\"\"Create Iterator objects for multiple splits of a dataset.\n\n Arguments:\n datasets: Tuple of Dataset objects corresponding to the splits. 
The\n first such object should be the train set.\n batch_sizes: Tuple of batch sizes to use for the different splits,\n or None to use the same batch_size for all splits.\n Remaining keyword arguments: Passed to the constructor of the\n iterator class being used.\n \"\"\"\n if batch_sizes is None:\n batch_sizes = [kwargs.pop('batch_size')] * len(datasets)\n ret = []\n for i in range(len(datasets)):\n train = i == 0\n ret.append(cls(\n datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))\n return tuple(ret)\n\n def data(self):\n \"\"\"Return the examples in the dataset in order, sorted, or shuffled.\"\"\"\n if self.sort:\n xs = sorted(self.dataset, key=self.sort_key)\n elif self.shuffle:\n xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]\n else:\n xs = self.dataset\n return xs\n\n def init_epoch(self):\n \"\"\"Set up the batch generator for a new epoch.\"\"\"\n\n if self._restored_from_state:\n self.random_shuffler.random_state = self._random_state_this_epoch\n else:\n self._random_state_this_epoch = self.random_shuffler.random_state\n\n self.create_batches()\n\n if self._restored_from_state:\n self._restored_from_state = False\n else:\n self._iterations_this_epoch = 0\n\n if not self.repeat:\n self.iterations = 0\n\n def create_batches(self):\n self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)\n\n @property\n def epoch(self):\n return self.iterations / len(self)\n\n def __len__(self):\n return math.ceil(len(self.dataset) / self.batch_size)\n\n def __iter__(self):\n while True:\n self.init_epoch()\n for idx, minibatch in enumerate(self.batches):\n # fast-forward if loaded from state\n if self._iterations_this_epoch > idx:\n continue\n self.iterations += 1\n self._iterations_this_epoch += 1\n # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by\n # decreasing order, which requires reversing relative to typical sort keys\n minibatch.reverse()\n yield Batch(minibatch, self.dataset, self.device,\n self.train)\n if not self.repeat:\n raise StopIteration\n\n def state_dict(self):\n return {\n \"iterations\": self.iterations,\n \"iterations_this_epoch\": self._iterations_this_epoch,\n \"random_state_this_epoch\": self._random_state_this_epoch}\n\n def load_state_dict(self, state_dict):\n self.iterations = state_dict[\"iterations\"]\n self._iterations_this_epoch = state_dict[\"iterations_this_epoch\"]\n self._random_state_this_epoch = state_dict[\"random_state_this_epoch\"]\n self._restored_from_state = True\n\n\nclass BPTTIterator(Iterator):\n \"\"\"Defines an iterator for language modeling tasks that use BPTT.\n\n Provides contiguous streams of examples together with targets that are\n one timestep further forward, for language modeling training with\n backpropagation through time (BPTT). Expects a Dataset with a single\n example and a single field called 'text' and produces Batches with text and\n target attributes.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n bptt_len: Length of sequences for backpropagation through time.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. 
The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, bptt_len, **kwargs):\n self.bptt_len = bptt_len\n super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)\n\n def __len__(self):\n return math.ceil(len(self.dataset[0].text) /\n (self.batch_size * self.bptt_len))\n\n def __iter__(self):\n text = self.dataset[0].text\n TEXT = self.dataset.fields['text']\n TEXT.eos_token = None\n text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *\n self.batch_size - len(text)))\n data = TEXT.numericalize(\n [text], device=self.device, train=self.train)\n data = data.view(self.batch_size, -1).t().contiguous()\n dataset = Dataset(examples=self.dataset.examples, fields=[\n ('text', TEXT), ('target', TEXT)])\n while True:\n for i in range(0, len(self) * self.bptt_len, self.bptt_len):\n seq_len = min(self.bptt_len, len(data) - 1 - i)\n yield Batch.fromvars(\n dataset, self.batch_size, train=self.train,\n text=data[i:i + seq_len],\n target=data[i + 1:i + 1 + seq_len])\n if not self.repeat:\n raise StopIteration\n\n\nclass BucketIterator(Iterator):\n \"\"\"Defines an iterator that batches examples of similar lengths together.\n\n Minimizes amount of padding needed while producing freshly shuffled\n batches for each new epoch. See pool for the bucketing procedure used.\n \"\"\"\n\n def create_batches(self):\n if self.sort:\n self.batches = batch(self.data(), self.batch_size,\n self.batch_size_fn)\n else:\n self.batches = pool(self.data(), self.batch_size,\n self.sort_key, self.batch_size_fn,\n random_shuffler=self.random_shuffler)\n\n\ndef batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):\n \"\"\"Yield elements from data in chunks of batch_size.\"\"\"\n minibatch, size_so_far = [], 0\n for ex in data:\n minibatch.append(ex)\n size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)\n if size_so_far == batch_size:\n yield minibatch\n minibatch, size_so_far = [], 0\n elif size_so_far > batch_size:\n yield minibatch[:-1]\n minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)\n if minibatch:\n yield minibatch\n\n\ndef pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,\n random_shuffler=None):\n \"\"\"Sort within buckets, then batch, then shuffle batches.\n\n Partitions data into chunks of size 100*batch_size, sorts examples within\n each chunk using sort_key, then batch these examples and shuffle the\n batches.\n \"\"\"\n if random_shuffler is None:\n random_shuffler = random.shuffle\n for p in batch(data, batch_size * 100, batch_size_fn):\n p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)\n for b in random_shuffler(list(p_batch)):\n yield b\n", "path": "torchtext/data/iterator.py"}]}
| 3,855 | 550 |
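In isolation, the per-batch ordering rule that the patch above introduces looks like this (a sketch only; the real change lives inside `Iterator.__iter__`):

```python
def order_minibatch(minibatch, sort_key, sort, sort_within_batch):
    """Return the minibatch in the order the patched Iterator would yield it."""
    if sort_within_batch:
        if sort:
            # data was already globally sorted ascending, so just flip it
            return list(reversed(minibatch))
        return sorted(minibatch, key=sort_key, reverse=True)
    return minibatch

print(order_minibatch(["aa", "b", "cccc"], len, sort=False, sort_within_batch=True))
# ['cccc', 'aa', 'b']
```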
gh_patches_debug_17717
|
rasdani/github-patches
|
git_diff
|
nautobot__nautobot-763
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Concurrency issues(?) (with tasks workers?)
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.9
* Nautobot version: 1.1.0
* Redis: redis:6.2.5-alpine
* PSQL: postgres:13.3-alpine
Docker-compose extract:
```
services:
nautobot: &nautobot
image: networktocode/nautobot:1.1.0-py3.9
depends_on:
- postgres
- redis
volumes:
- ./volumes/media:/opt/nautobot/media:z,rw
- ./volumes/git:/opt/nautobot/git:z,rw
- ./volumes/jobs:/opt/nautobot/jobs:z,rw
environment:
NAUTOBOT_DB_HOST: postgres
NAUTOBOT_DB_USER: nautobot
NAUTOBOT_DB_PASSWORD: nautobot
NAUTOBOT_DB_NAME: nautobot
NAUTOBOT_REDIS_HOST: redis
NAUTOBOT_SECRET_KEY: "*****"
NAUTOBOT_MAX_PAGE_SIZE: "50000"
NAUTOBOT_CHANGELOG_RETENTION: "366"
NAUTOBOT_METRICS_ENABLED: "true"
#NAUTOBOT_CACHEOPS_ENABLED: "false"
celery_worker:
<<: *nautobot
entrypoint: "nautobot-server celery worker -B -l INFO"
networks:
- default
labels: []
depends_on:
- nautobot
healthcheck:
interval: 5s
timeout: 5s
start_period: 5s
retries: 3
test: ["CMD", "nautobot-server", "health_check"]
rq_worker:
<<: *nautobot
entrypoint: "nautobot-server rqworker"
networks:
- default
labels: []
depends_on:
- nautobot
healthcheck:
interval: 5s
timeout: 5s
start_period: 5s
retries: 3
test: ["CMD", "nautobot-server", "health_check"]
# postgres - https://hub.docker.com/_/postgres
postgres:
image: postgres:13.3-alpine
volumes:
- ./volumes/pgsql:/var/lib/postgresql/data
environment:
POSTGRES_USER: nautobot
POSTGRES_PASSWORD: nautobot
POSTGRES_DB: nautobot
# redis - https://hub.docker.com/_/redis
redis:
image: redis:6.2.5-alpine
```
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Stop the Celery worker
2. Create jobs (doing many queries?):
```py
from nautobot.extras.jobs import Job
from nautobot.dcim.models import Device, Interface
import time
class TestA(Job):
devices = Device.objects
class Meta:
read_only = True
def test_a(self):
j = 0
for i, device in enumerate(self.devices.all()):
for j, interface in enumerate(device.interfaces.all(), start=j):
if j > 50:
break
self.log_info(obj=interface, message=f'Iteration {i}/{j}, name={device.name}/{interface.name}, {interface.connected_endpoint}')
time.sleep(0.1)
class TestB(Job):
devices = Device.objects
class Meta:
read_only = True
def test_b(self):
j = 0
for i, device in enumerate(self.devices.all()):
for j, interface in enumerate(device.interfaces.all(), start=j):
if j > 50:
break
self.log_info(obj=interface, message=f'Iteration {i}/{j}, name={device.name}/{interface.name}, {interface.connected_endpoint}')
time.sleep(0.1)
```
2. Start multiple instances of each (using the API if not allowed in the UI)
3. Start the worker
<!-- What did you expect to happen? -->
### Expected Behavior
Jobs are fully ran in the start order
<!-- What happened instead? -->
### Observed Behavior
Some jobs are stuck in either the pending or the running state, with errors in the worker logs
### Additional informations
This is the best repro I could find in many hours of really weird and random errors. I noticed setting `NAUTOBOT_CACHEOPS_ENABLED: 'false'` could help getting errors more often.
And in that case (no cacheops), right after an instance is started, listing the job results (which seems to imply `page_size` times git refresh ?) + loading another page (like job list or git list/detail) is also a good way to crash the web container with weird errors too, like:
* `<class 'AttributeError'>, 'datetime.date' object has no attribute 'encode'`
* `<class 'RuntimeError'>, generator raised StopIteration`
* `<class 'AttributeError'>, 'NoneType' object has no attribute 'DoesNotExist'`
And in the logs about the git refresh:
```
nautobot_1 | 18:42:23.628 INFO nautobot.jobs :
nautobot_1 | Repository successfully refreshed
nautobot_1 | 18:42:23.655 INFO nautobot.jobs :
nautobot_1 | Repository successfully refreshed
nautobot_1 | 18:42:23.677 INFO nautobot.jobs :
nautobot_1 | Repository successfully refreshed
[..]
```
So I'm not sure the issue is restricted to the jobs part.
Some more random errors observed:
* `<class 'django.contrib.contenttypes.models.ContentType.DoesNotExist'> ContentType matching query does not exist.`
* `<class 'django.db.utils.DatabaseError'> error with status PGRES_TUPLES_OK and no message from the libpq`
* `<class 'ValueError'> Field 'id' expected a number but got 'ryws2lq3ihs******md9auxf1ua3'.`
For the job part, a possible workaround seems to be to set the pool implementation to solo (https://docs.celeryproject.org/en/stable/reference/cli.html#cmdoption-celery-worker-P):
```yaml
entrypoint: "nautobot-server celery worker -B -l INFO-P solo"
```
This really feels like I'm doing something wrong (especially since nobody else seems to be complaining?). I really hope not, but if that's the case I can't point out what it is.
Best, Alexandre
</issue>
<code>
[start of nautobot/core/celery.py]
1 import json
2 import logging
3
4 import nautobot
5
6 from celery import Celery, shared_task
7 from django.core.serializers.json import DjangoJSONEncoder
8 from django.utils.module_loading import import_string
9 from kombu.serialization import register
10
11 logger = logging.getLogger(__name__)
12
13 # The Celery documentation tells us to call setup on the app to initialize
14 # settings, but we will NOT be doing that because of a chicken-and-egg problem
15 # when bootstrapping the Django settings with `nautobot-server`.
16 #
17 # Note this would normally set the `DJANGO_SETTINGS_MODULE` environment variable
 18 # which Celery and its workers need under the hood. The Celery docs and examples
19 # normally have you set it here, but because of our custom settings bootstrapping
 20 # it is handled in the `nautobot.setup()` call, and we have implemented a
21 # `nautobot-server celery` command to provide the correct context so this does
22 # NOT need to be called here.
23 # nautobot.setup()
24
25 app = Celery("nautobot")
26
27 # Using a string here means the worker doesn't have to serialize
28 # the configuration object to child processes. Again, this is possible
29 # only after calling `nautobot.setup()` which sets `DJANGO_SETTINGS_MODULE`.
30 # - namespace='CELERY' means all celery-related configuration keys
31 # should have a `CELERY_` prefix.
32 app.config_from_object("django.conf:settings", namespace="CELERY")
33
34 # Load task modules from all registered Django apps.
35 app.autodiscover_tasks()
36
37
38 class NautobotKombuJSONEncoder(DjangoJSONEncoder):
39 """
40 Custom json encoder based on DjangoJSONEncoder that serializes objects that implement
41 the `nautobot_serialize()` method via the `__nautobot_type__` interface. This is useful
42 in passing special objects to and from Celery tasks.
43
44 This pattern should generally be avoided by passing pointers to persisted objects to the
45 Celery tasks and retrieving them from within the task execution. While this is always possible
46 for model instances (which covers 99% of use cases), for rare instances where it does not,
47 and the actual object must be passed, this pattern allows for encoding and decoding
48 of such objects.
49
50 It requires a conforming class to implement the instance method `nautobot_serialize()` which
51 returns a json serializable dictionary of the object representation. The class must also implement
52 the `nautobot_deserialize()` class method which takes the dictionary representation and returns
53 an actual instance of the class.
54 """
55
56 def default(self, obj):
57 if hasattr(obj, "nautobot_serialize"):
58 cls = obj.__class__
59 module = cls.__module__
60 qual_name = ".".join([module, cls.__qualname__]) # fully qualified dotted import path
61 logger.debug("Performing nautobot serialization on %s for type %s", obj, qual_name)
62 data = {"__nautobot_type__": qual_name}
63 data.update(obj.nautobot_serialize())
64 return data
65
66 elif isinstance(obj, set):
67 # Convert a set to a list for passing to and from a task
68 return list(obj)
69
70 else:
71 return DjangoJSONEncoder.default(self, obj)
72
73
74 def nautobot_kombu_json_loads_hook(data):
75 """
76 In concert with the NautobotKombuJSONEncoder json encoder, this object hook method decodes
77 objects that implement the `__nautobot_type__` interface via the `nautobot_deserialize()` class method.
78 """
79 if "__nautobot_type__" in data:
80 qual_name = data.pop("__nautobot_type__")
81 logger.debug("Performing nautobot deserialization for type %s", qual_name)
82 cls = import_string(qual_name) # fully qualified dotted import path
83 if cls:
84 return cls.nautobot_deserialize(data)
85 else:
86 raise TypeError(f"Unable to import {qual_name} during nautobot deserialization")
87 else:
88 return data
89
90
91 # Encoder function
92 def _dumps(obj):
93 return json.dumps(obj, cls=NautobotKombuJSONEncoder)
94
95
96 # Decoder function
97 def _loads(obj):
98 return json.loads(obj, object_hook=nautobot_kombu_json_loads_hook)
99
100
101 # Register the custom serialization type
102 register("nautobot_json", _dumps, _loads, content_type="application/x-nautobot-json", content_encoding="utf-8")
103
104
105 #
106 # nautobot_task
107 #
108 # By exposing `shared_task` within our own namespace, we leave the door open to
109 # extending and expanding the usage and meaning of shared_task without having
110 # to undergo further refactoring of task's decorators. We could also transparently
111 # swap out shared_task to a custom base task.
112 #
113
114 nautobot_task = shared_task
115
[end of nautobot/core/celery.py]
</code>
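As a concrete illustration of the `__nautobot_type__` protocol documented in `NautobotKombuJSONEncoder` above, a hypothetical conforming class could look like this (class and field names are invented; the class must live at an importable dotted path so `import_string` can find it on the way back):

```python
class MaintenanceWindow:
    """Toy object that can round-trip through the nautobot_json serializer."""

    def __init__(self, site, duration):
        self.site = site
        self.duration = duration

    def nautobot_serialize(self):
        # must return a JSON-serializable dict; the encoder adds __nautobot_type__
        return {"site": self.site, "duration": self.duration}

    @classmethod
    def nautobot_deserialize(cls, data):
        # receives the dict produced above, with __nautobot_type__ already popped
        return cls(site=data["site"], duration=data["duration"])
```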
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nautobot/core/celery.py b/nautobot/core/celery.py
--- a/nautobot/core/celery.py
+++ b/nautobot/core/celery.py
@@ -4,6 +4,7 @@
import nautobot
from celery import Celery, shared_task
+from celery.fixups.django import DjangoFixup
from django.core.serializers.json import DjangoJSONEncoder
from django.utils.module_loading import import_string
from kombu.serialization import register
@@ -31,6 +32,11 @@
# should have a `CELERY_` prefix.
app.config_from_object("django.conf:settings", namespace="CELERY")
+# Because of the chicken-and-egg Django settings bootstrapping issue,
+# Celery doesn't automatically install its Django-specific patches.
+# So we need to explicitly do so ourselves:
+DjangoFixup(app).install()
+
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
|
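A minimal sketch of the wiring this patch puts in place (Celery and Django assumed installed, with `DJANGO_SETTINGS_MODULE` pointing at a valid settings module); normally the fixup is applied automatically when Celery sees `DJANGO_SETTINGS_MODULE` at app-creation time, which Nautobot's settings bootstrapping prevents:

```python
from celery import Celery
from celery.fixups.django import DjangoFixup

app = Celery("nautobot")
app.config_from_object("django.conf:settings", namespace="CELERY")

# Explicitly install the Django integration (worker/task signal handling,
# per-task database-connection management) that would otherwise be skipped.
DjangoFixup(app).install()
```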
{"golden_diff": "diff --git a/nautobot/core/celery.py b/nautobot/core/celery.py\n--- a/nautobot/core/celery.py\n+++ b/nautobot/core/celery.py\n@@ -4,6 +4,7 @@\n import nautobot\n \n from celery import Celery, shared_task\n+from celery.fixups.django import DjangoFixup\n from django.core.serializers.json import DjangoJSONEncoder\n from django.utils.module_loading import import_string\n from kombu.serialization import register\n@@ -31,6 +32,11 @@\n # should have a `CELERY_` prefix.\n app.config_from_object(\"django.conf:settings\", namespace=\"CELERY\")\n \n+# Because of the chicken-and-egg Django settings bootstrapping issue,\n+# Celery doesn't automatically install its Django-specific patches.\n+# So we need to explicitly do so ourselves:\n+DjangoFixup(app).install()\n+\n # Load task modules from all registered Django apps.\n app.autodiscover_tasks()\n", "issue": "Concurrency issues(?) (with tasks workers?)\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.9\r\n* Nautobot version: 1.1.0\r\n* Redis: redis:6.2.5-alpine\r\n* PSQL: postgres:13.3-alpine\r\n\r\nDocker-compose extract:\r\n```\r\nservices:\r\n nautobot: &nautobot\r\n image: networktocode/nautobot:1.1.0-py3.9\r\n depends_on:\r\n - postgres\r\n - redis\r\n volumes:\r\n - ./volumes/media:/opt/nautobot/media:z,rw\r\n - ./volumes/git:/opt/nautobot/git:z,rw\r\n - ./volumes/jobs:/opt/nautobot/jobs:z,rw\r\n environment:\r\n NAUTOBOT_DB_HOST: postgres\r\n NAUTOBOT_DB_USER: nautobot\r\n NAUTOBOT_DB_PASSWORD: nautobot\r\n NAUTOBOT_DB_NAME: nautobot\r\n NAUTOBOT_REDIS_HOST: redis\r\n NAUTOBOT_SECRET_KEY: \"*****\"\r\n NAUTOBOT_MAX_PAGE_SIZE: \"50000\"\r\n NAUTOBOT_CHANGELOG_RETENTION: \"366\"\r\n NAUTOBOT_METRICS_ENABLED: \"true\"\r\n #NAUTOBOT_CACHEOPS_ENABLED: \"false\"\r\n\r\n celery_worker:\r\n <<: *nautobot\r\n entrypoint: \"nautobot-server celery worker -B -l INFO\"\r\n networks:\r\n - default\r\n labels: []\r\n depends_on:\r\n - nautobot\r\n healthcheck:\r\n interval: 5s\r\n timeout: 5s\r\n start_period: 5s\r\n retries: 3\r\n test: [\"CMD\", \"nautobot-server\", \"health_check\"]\r\n\r\n rq_worker:\r\n <<: *nautobot\r\n entrypoint: \"nautobot-server rqworker\"\r\n networks:\r\n - default\r\n labels: []\r\n depends_on:\r\n - nautobot\r\n healthcheck:\r\n interval: 5s\r\n timeout: 5s\r\n start_period: 5s\r\n retries: 3\r\n test: [\"CMD\", \"nautobot-server\", \"health_check\"]\r\n\r\n # postgres - https://hub.docker.com/_/postgres\r\n postgres:\r\n image: postgres:13.3-alpine\r\n volumes:\r\n - ./volumes/pgsql:/var/lib/postgresql/data\r\n environment:\r\n POSTGRES_USER: nautobot\r\n POSTGRES_PASSWORD: nautobot\r\n POSTGRES_DB: nautobot\r\n\r\n # redis - https://hub.docker.com/_/redis\r\n redis:\r\n image: redis:6.2.5-alpine\r\n```\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. 
Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Stop the Celery worker\r\n2. Create jobs (doing many queries?):\r\n```py\r\nfrom nautobot.extras.jobs import Job\r\nfrom nautobot.dcim.models import Device, Interface\r\nimport time\r\n\r\n\r\nclass TestA(Job):\r\n\r\n devices = Device.objects\r\n\r\n class Meta:\r\n read_only = True\r\n\r\n def test_a(self):\r\n j = 0\r\n for i, device in enumerate(self.devices.all()):\r\n for j, interface in enumerate(device.interfaces.all(), start=j):\r\n if j > 50:\r\n break\r\n self.log_info(obj=interface, message=f'Iteration {i}/{j}, name={device.name}/{interface.name}, {interface.connected_endpoint}')\r\n time.sleep(0.1)\r\n\r\n\r\nclass TestB(Job):\r\n\r\n devices = Device.objects\r\n\r\n class Meta:\r\n read_only = True\r\n\r\n def test_b(self):\r\n j = 0\r\n for i, device in enumerate(self.devices.all()):\r\n for j, interface in enumerate(device.interfaces.all(), start=j):\r\n if j > 50:\r\n break\r\n self.log_info(obj=interface, message=f'Iteration {i}/{j}, name={device.name}/{interface.name}, {interface.connected_endpoint}')\r\n time.sleep(0.1)\r\n```\r\n\r\n2. Start multiple instances of each (using the API if not allowed in the UI)\r\n3. Start the worker\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nJobs are fully ran in the start order\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\nSome jobs are stuck in either the pending or the running state, with errors in the worker logs\r\n\r\n### Additional informations\r\n\r\nThis is the best repro I could find in many hours of really weird and random errors. I noticed setting `NAUTOBOT_CACHEOPS_ENABLED: 'false'` could help getting errors more often.\r\n\r\nAnd in that case (no cacheops), right after an instance is started, listing the job results (which seems to imply `page_size` times git refresh ?) 
+ loading another page (like job list or git list/detail) is also a good way to crash the web container with weird errors too, like:\r\n* `<class 'AttributeError'>, 'datetime.date' object has no attribute 'encode'`\r\n* `<class 'RuntimeError'>, generator raised StopIteration`\r\n* `<class 'AttributeError'>, 'NoneType' object has no attribute 'DoesNotExist'`\r\n\r\nAnd in the logs about the git refresh:\r\n```\r\nnautobot_1 | 18:42:23.628 INFO nautobot.jobs :\r\nnautobot_1 | Repository successfully refreshed\r\nnautobot_1 | 18:42:23.655 INFO nautobot.jobs :\r\nnautobot_1 | Repository successfully refreshed\r\nnautobot_1 | 18:42:23.677 INFO nautobot.jobs :\r\nnautobot_1 | Repository successfully refreshed\r\n[..]\r\n```\r\n\r\nSo I'm not sure the issue is restricted to the jobs part.\r\n\r\nSome more random errors observed:\r\n* `<class 'django.contrib.contenttypes.models.ContentType.DoesNotExist'> ContentType matching query does not exist.`\r\n* `<class 'django.db.utils.DatabaseError'> error with status PGRES_TUPLES_OK and no message from the libpq`\r\n* `<class 'ValueError'> Field 'id' expected a number but got 'ryws2lq3ihs******md9auxf1ua3'.`\r\n\r\nFor the job part, a possible workaround seems to be to set the pool implementation to solo (https://docs.celeryproject.org/en/stable/reference/cli.html#cmdoption-celery-worker-P): \r\n```yaml\r\n entrypoint: \"nautobot-server celery worker -B -l INFO-P solo\"\r\n```\r\n\r\nThis really feels like I'm doing something wrong (especially since nobody else seems complaining?), I really hope not, but if that's the case I can't point out what it is.\r\n\r\nBest, Alexandre\n", "before_files": [{"content": "import json\nimport logging\n\nimport nautobot\n\nfrom celery import Celery, shared_task\nfrom django.core.serializers.json import DjangoJSONEncoder\nfrom django.utils.module_loading import import_string\nfrom kombu.serialization import register\n\nlogger = logging.getLogger(__name__)\n\n# The Celery documentation tells us to call setup on the app to initialize\n# settings, but we will NOT be doing that because of a chicken-and-egg problem\n# when bootstrapping the Django settings with `nautobot-server`.\n#\n# Note this would normally set the `DJANGO_SETTINGS_MODULE` environment variable\n# which Celery and its workers need under the hood.The Celery docs and examples\n# normally have you set it here, but because of our custom settings bootstrapping\n# it is handled in the `nautobot.setup() call, and we have implemented a\n# `nautobot-server celery` command to provide the correct context so this does\n# NOT need to be called here.\n# nautobot.setup()\n\napp = Celery(\"nautobot\")\n\n# Using a string here means the worker doesn't have to serialize\n# the configuration object to child processes. Again, this is possible\n# only after calling `nautobot.setup()` which sets `DJANGO_SETTINGS_MODULE`.\n# - namespace='CELERY' means all celery-related configuration keys\n# should have a `CELERY_` prefix.\napp.config_from_object(\"django.conf:settings\", namespace=\"CELERY\")\n\n# Load task modules from all registered Django apps.\napp.autodiscover_tasks()\n\n\nclass NautobotKombuJSONEncoder(DjangoJSONEncoder):\n \"\"\"\n Custom json encoder based on DjangoJSONEncoder that serializes objects that implement\n the `nautobot_serialize()` method via the `__nautobot_type__` interface. 
This is useful\n in passing special objects to and from Celery tasks.\n\n This pattern should generally be avoided by passing pointers to persisted objects to the\n Celery tasks and retrieving them from within the task execution. While this is always possible\n for model instances (which covers 99% of use cases), for rare instances where it does not,\n and the actual object must be passed, this pattern allows for encoding and decoding\n of such objects.\n\n It requires a conforming class to implement the instance method `nautobot_serialize()` which\n returns a json serializable dictionary of the object representation. The class must also implement\n the `nautobot_deserialize()` class method which takes the dictionary representation and returns\n an actual instance of the class.\n \"\"\"\n\n def default(self, obj):\n if hasattr(obj, \"nautobot_serialize\"):\n cls = obj.__class__\n module = cls.__module__\n qual_name = \".\".join([module, cls.__qualname__]) # fully qualified dotted import path\n logger.debug(\"Performing nautobot serialization on %s for type %s\", obj, qual_name)\n data = {\"__nautobot_type__\": qual_name}\n data.update(obj.nautobot_serialize())\n return data\n\n elif isinstance(obj, set):\n # Convert a set to a list for passing to and from a task\n return list(obj)\n\n else:\n return DjangoJSONEncoder.default(self, obj)\n\n\ndef nautobot_kombu_json_loads_hook(data):\n \"\"\"\n In concert with the NautobotKombuJSONEncoder json encoder, this object hook method decodes\n objects that implement the `__nautobot_type__` interface via the `nautobot_deserialize()` class method.\n \"\"\"\n if \"__nautobot_type__\" in data:\n qual_name = data.pop(\"__nautobot_type__\")\n logger.debug(\"Performing nautobot deserialization for type %s\", qual_name)\n cls = import_string(qual_name) # fully qualified dotted import path\n if cls:\n return cls.nautobot_deserialize(data)\n else:\n raise TypeError(f\"Unable to import {qual_name} during nautobot deserialization\")\n else:\n return data\n\n\n# Encoder function\ndef _dumps(obj):\n return json.dumps(obj, cls=NautobotKombuJSONEncoder)\n\n\n# Decoder function\ndef _loads(obj):\n return json.loads(obj, object_hook=nautobot_kombu_json_loads_hook)\n\n\n# Register the custom serialization type\nregister(\"nautobot_json\", _dumps, _loads, content_type=\"application/x-nautobot-json\", content_encoding=\"utf-8\")\n\n\n#\n# nautobot_task\n#\n# By exposing `shared_task` within our own namespace, we leave the door open to\n# extending and expanding the usage and meaning of shared_task without having\n# to undergo further refactoring of task's decorators. We could also transparently\n# swap out shared_task to a custom base task.\n#\n\nnautobot_task = shared_task\n", "path": "nautobot/core/celery.py"}]}
| 3,479 | 212 |
gh_patches_debug_8884
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-3421
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Audit log crash on IPv6
Affected version: 8.5.0
I was just saving some preferences and faced this error:
```
DataError: ERREUR: syntaxe en entrée invalide pour le type inet : « 2001 »
LINE 1: [email protected]', 2, NULL, 1, NULL, 11, '2001', 'e...
^
SQL: INSERT INTO "sentry_auditlogentry" ("organization_id", "actor_label", "actor_id", "actor_key_id", "target_object", "target_user_id", "event", "ip_address", "data", "datetime") VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s) RETURNING "sentry_auditlogentry"."id"
```
(The French PostgreSQL message translates to: "ERROR: invalid input syntax for type inet: « 2001 »".)
Looks like IPv6 addresses are not handled properly.
Also reproduced by deleting a project.
Oddly this wasn't triggered on project creation and my IPv4 was logged instead of my IPv6.
</issue>
<code>
[start of src/sentry/middleware/proxy.py]
1 from __future__ import absolute_import
2
3
4 class SetRemoteAddrFromForwardedFor(object):
5 def process_request(self, request):
6 try:
7 real_ip = request.META['HTTP_X_FORWARDED_FOR']
8 except KeyError:
9 pass
10 else:
11 # HTTP_X_FORWARDED_FOR can be a comma-separated list of IPs.
12 # Take just the first one.
13 real_ip = real_ip.split(",")[0]
14 if ':' in real_ip:
15 real_ip = real_ip.split(':', 1)[0]
16 request.META['REMOTE_ADDR'] = real_ip
17
18
19 class ContentLengthHeaderMiddleware(object):
20 """
21 Ensure that we have a proper Content-Length/Transfer-Encoding header
22 """
23
24 def process_response(self, request, response):
25 if 'Transfer-Encoding' in response or 'Content-Length' in response:
26 return response
27
28 if not response.streaming:
29 response['Content-Length'] = str(len(response.content))
30
31 return response
32
[end of src/sentry/middleware/proxy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/middleware/proxy.py b/src/sentry/middleware/proxy.py
--- a/src/sentry/middleware/proxy.py
+++ b/src/sentry/middleware/proxy.py
@@ -11,7 +11,8 @@
# HTTP_X_FORWARDED_FOR can be a comma-separated list of IPs.
# Take just the first one.
real_ip = real_ip.split(",")[0]
- if ':' in real_ip:
+ if ':' in real_ip and '.' in real_ip:
+ # Strip the port number off of an IPv4 FORWARDED_FOR entry.
real_ip = real_ip.split(':', 1)[0]
request.META['REMOTE_ADDR'] = real_ip
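As an editorial aside (not part of the dataset row): a minimal, self-contained sketch of why the patched condition works — an IPv4 FORWARDED_FOR entry with a port contains both ':' and '.', while a bare IPv6 address contains ':' only. IPv4-mapped IPv6 forms such as ::ffff:203.0.113.7 contain both and are not covered by this heuristic.

```python
# Illustration only; the header values below are made up.
def strip_port(real_ip):
    real_ip = real_ip.split(",")[0]
    if ':' in real_ip and '.' in real_ip:
        # Strip the port number off an IPv4 "a.b.c.d:port" entry.
        real_ip = real_ip.split(':', 1)[0]
    return real_ip

print(strip_port("203.0.113.7:54321"))         # -> 203.0.113.7
print(strip_port("2001:db8::1"))               # -> 2001:db8::1 (the old code returned "2001")
print(strip_port("2001:db8::1, 203.0.113.7"))  # -> 2001:db8::1 (first entry wins)
```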
|
{"golden_diff": "diff --git a/src/sentry/middleware/proxy.py b/src/sentry/middleware/proxy.py\n--- a/src/sentry/middleware/proxy.py\n+++ b/src/sentry/middleware/proxy.py\n@@ -11,7 +11,8 @@\n # HTTP_X_FORWARDED_FOR can be a comma-separated list of IPs.\n # Take just the first one.\n real_ip = real_ip.split(\",\")[0]\n- if ':' in real_ip:\n+ if ':' in real_ip and '.' in real_ip:\n+ # Strip the port number off of an IPv4 FORWARDED_FOR entry.\n real_ip = real_ip.split(':', 1)[0]\n request.META['REMOTE_ADDR'] = real_ip\n", "issue": "Audit log crash on IPv6\nAffected version: 8.5.0\n\nI was just saving some preferences and faced this error:\n\n```\nDataError: ERREUR: syntaxe en entr\u00e9e invalide pour le type inet : \u00ab 2001 \u00bb\nLINE 1: [email protected]', 2, NULL, 1, NULL, 11, '2001', 'e...\n ^\n\nSQL: INSERT INTO \"sentry_auditlogentry\" (\"organization_id\", \"actor_label\", \"actor_id\", \"actor_key_id\", \"target_object\", \"target_user_id\", \"event\", \"ip_address\", \"data\", \"datetime\") VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s) RETURNING \"sentry_auditlogentry\".\"id\"\n```\n\nLooks like IPv6 addresses are not handled properly.\n\nAlso reproduced by deleting a project.\nOddly this wasn't triggered on project creation and my IPv4 was logged instead of my IPv6.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\n\nclass SetRemoteAddrFromForwardedFor(object):\n def process_request(self, request):\n try:\n real_ip = request.META['HTTP_X_FORWARDED_FOR']\n except KeyError:\n pass\n else:\n # HTTP_X_FORWARDED_FOR can be a comma-separated list of IPs.\n # Take just the first one.\n real_ip = real_ip.split(\",\")[0]\n if ':' in real_ip:\n real_ip = real_ip.split(':', 1)[0]\n request.META['REMOTE_ADDR'] = real_ip\n\n\nclass ContentLengthHeaderMiddleware(object):\n \"\"\"\n Ensure that we have a proper Content-Length/Transfer-Encoding header\n \"\"\"\n\n def process_response(self, request, response):\n if 'Transfer-Encoding' in response or 'Content-Length' in response:\n return response\n\n if not response.streaming:\n response['Content-Length'] = str(len(response.content))\n\n return response\n", "path": "src/sentry/middleware/proxy.py"}]}
| 1,029 | 154 |
gh_patches_debug_36367
|
rasdani/github-patches
|
git_diff
|
searx__searx-335
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flickr engine is broken
The HTML seems to have changed, but there appears to be a [REST API](https://api.flickr.com/services/rest?sort=relevance&parse_tags=1&content_type=7&extras=can_comment%2Ccount_comments%2Ccount_faves%2Cisfavorite%2Clicense%2Cmedia%2Cneeds_interstitial%2Cowner_name%2Cpath_alias%2Crealname%2Crotation%2Curl_c%2Curl_l%2Curl_m%2Curl_n%2Curl_q%2Curl_s%2Curl_sq%2Curl_t%2Curl_z&per_page=25&page=1&lang=en-US&rb=1&text=proxy&viewerNSID=&method=flickr.photos.search&csrf=&api_key=3e5918155f464baad83cce2efcf8b57e&format=json&hermes=1&hermesClient=1&reqId=rgb38n1&nojsoncallback=1)
Among the parameters there is an api_key: I don't know how long it is valid or under which conditions.
The call to this URL is triggered inside another minified JavaScript file.
</issue>
<code>
[start of searx/engines/flickr_noapi.py]
1 #!/usr/bin/env python
2
3 """
4 Flickr (Images)
5
6 @website https://www.flickr.com
7 @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)
8
9 @using-api no
10 @results HTML
11 @stable no
12 @parse url, title, thumbnail, img_src
13 """
14
15 from urllib import urlencode
16 from json import loads
17 import re
18 from searx.engines import logger
19
20
21 logger = logger.getChild('flickr-noapi')
22
23 categories = ['images']
24
25 url = 'https://www.flickr.com/'
26 search_url = url + 'search?{query}&page={page}'
27 photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
28 regex = re.compile(r"\"search-photos-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)
29 image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')
30
31 paging = True
32
33
34 def build_flickr_url(user_id, photo_id):
35 return photo_url.format(userid=user_id, photoid=photo_id)
36
37
38 def request(query, params):
39 params['url'] = search_url.format(query=urlencode({'text': query}),
40 page=params['pageno'])
41 return params
42
43
44 def response(resp):
45 results = []
46
47 matches = regex.search(resp.text)
48
49 if matches is None:
50 return results
51
52 match = matches.group(1)
53 search_results = loads(match)
54
55 if '_data' not in search_results:
56 return []
57
58 photos = search_results['_data']
59
60 for photo in photos:
61
62 # In paged configuration, the first pages' photos
63 # are represented by a None object
64 if photo is None:
65 continue
66
67 img_src = None
68 # From the biggest to the lowest format
69 for image_size in image_sizes:
70 if image_size in photo['sizes']:
71 img_src = photo['sizes'][image_size]['url']
72 break
73
74 if not img_src:
75 logger.debug('cannot find valid image size: {0}'.format(repr(photo)))
76 continue
77
78 if 'id' not in photo['owner']:
79 continue
80
81 # For a bigger thumbnail, keep only the url_z, not the url_n
82 if 'n' in photo['sizes']:
83 thumbnail_src = photo['sizes']['n']['url']
84 elif 'z' in photo['sizes']:
85 thumbnail_src = photo['sizes']['z']['url']
86 else:
87 thumbnail_src = img_src
88
89 url = build_flickr_url(photo['owner']['id'], photo['id'])
90
91 title = photo.get('title', '')
92
93 content = '<span class="photo-author">' +\
94 photo['owner']['username'] +\
95 '</span><br />'
96
97 if 'description' in photo:
98 content = content +\
99 '<span class="description">' +\
100 photo['description'] +\
101 '</span>'
102
103 # append result
104 results.append({'url': url,
105 'title': title,
106 'img_src': img_src,
107 'thumbnail_src': thumbnail_src,
108 'content': content,
109 'template': 'images.html'})
110
111 return results
112
[end of searx/engines/flickr_noapi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/flickr_noapi.py b/searx/engines/flickr_noapi.py
--- a/searx/engines/flickr_noapi.py
+++ b/searx/engines/flickr_noapi.py
@@ -25,7 +25,7 @@
url = 'https://www.flickr.com/'
search_url = url + 'search?{query}&page={page}'
photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
-regex = re.compile(r"\"search-photos-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)
+regex = re.compile(r"\"search-photos-lite-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)
image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')
paging = True
@@ -38,6 +38,7 @@
def request(query, params):
params['url'] = search_url.format(query=urlencode({'text': query}),
page=params['pageno'])
+
return params
@@ -75,10 +76,10 @@
logger.debug('cannot find valid image size: {0}'.format(repr(photo)))
continue
- if 'id' not in photo['owner']:
+ if 'ownerNsid' not in photo:
continue
-# For a bigger thumbnail, keep only the url_z, not the url_n
+ # For a bigger thumbnail, keep only the url_z, not the url_n
if 'n' in photo['sizes']:
thumbnail_src = photo['sizes']['n']['url']
elif 'z' in photo['sizes']:
@@ -86,20 +87,14 @@
else:
thumbnail_src = img_src
- url = build_flickr_url(photo['owner']['id'], photo['id'])
+ url = build_flickr_url(photo['ownerNsid'], photo['id'])
title = photo.get('title', '')
content = '<span class="photo-author">' +\
- photo['owner']['username'] +\
+ photo['username'] +\
'</span><br />'
- if 'description' in photo:
- content = content +\
- '<span class="description">' +\
- photo['description'] +\
- '</span>'
-
# append result
results.append({'url': url,
'title': title,
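As an editorial aside (not part of the dataset row): a rough, runnable sketch of how the patched regex and the flattened owner fields are consumed. The page fragment below is invented purely to exercise the regex; the real Flickr payload is much larger and its exact shape may differ.

```python
import json
import re

regex = re.compile(r"\"search-photos-lite-models\",\"photos\":(.*}),\"totalItems\":",
                   re.DOTALL)

# Hypothetical stand-in for the embedded model JSON on a Flickr search page.
page = ('..."search-photos-lite-models","photos":{"_data":[{"id":"1",'
        '"ownerNsid":"42@N00","username":"someone","title":"t",'
        '"sizes":{"m":{"url":"//example.invalid/img.jpg"}}}]},"totalItems":1...')

photos = json.loads(regex.search(page).group(1))["_data"]
for photo in photos:
    # The patch reads these top-level keys instead of photo['owner'][...].
    print(photo["ownerNsid"], photo["username"], photo["sizes"]["m"]["url"])
```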
|
{"golden_diff": "diff --git a/searx/engines/flickr_noapi.py b/searx/engines/flickr_noapi.py\n--- a/searx/engines/flickr_noapi.py\n+++ b/searx/engines/flickr_noapi.py\n@@ -25,7 +25,7 @@\n url = 'https://www.flickr.com/'\n search_url = url + 'search?{query}&page={page}'\n photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'\n-regex = re.compile(r\"\\\"search-photos-models\\\",\\\"photos\\\":(.*}),\\\"totalItems\\\":\", re.DOTALL)\n+regex = re.compile(r\"\\\"search-photos-lite-models\\\",\\\"photos\\\":(.*}),\\\"totalItems\\\":\", re.DOTALL)\n image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')\n \n paging = True\n@@ -38,6 +38,7 @@\n def request(query, params):\n params['url'] = search_url.format(query=urlencode({'text': query}),\n page=params['pageno'])\n+\n return params\n \n \n@@ -75,10 +76,10 @@\n logger.debug('cannot find valid image size: {0}'.format(repr(photo)))\n continue\n \n- if 'id' not in photo['owner']:\n+ if 'ownerNsid' not in photo:\n continue\n \n-# For a bigger thumbnail, keep only the url_z, not the url_n\n+ # For a bigger thumbnail, keep only the url_z, not the url_n\n if 'n' in photo['sizes']:\n thumbnail_src = photo['sizes']['n']['url']\n elif 'z' in photo['sizes']:\n@@ -86,20 +87,14 @@\n else:\n thumbnail_src = img_src\n \n- url = build_flickr_url(photo['owner']['id'], photo['id'])\n+ url = build_flickr_url(photo['ownerNsid'], photo['id'])\n \n title = photo.get('title', '')\n \n content = '<span class=\"photo-author\">' +\\\n- photo['owner']['username'] +\\\n+ photo['username'] +\\\n '</span><br />'\n \n- if 'description' in photo:\n- content = content +\\\n- '<span class=\"description\">' +\\\n- photo['description'] +\\\n- '</span>'\n-\n # append result\n results.append({'url': url,\n 'title': title,\n", "issue": "Flickr engine is broken\nThe html seems to have changed, but it's seems there is [REST API](https://api.flickr.com/services/rest?sort=relevance&parse_tags=1&content_type=7&extras=can_comment%2Ccount_comments%2Ccount_faves%2Cisfavorite%2Clicense%2Cmedia%2Cneeds_interstitial%2Cowner_name%2Cpath_alias%2Crealname%2Crotation%2Curl_c%2Curl_l%2Curl_m%2Curl_n%2Curl_q%2Curl_s%2Curl_sq%2Curl_t%2Curl_z&per_page=25&page=1&lang=en-US&rb=1&text=proxy&viewerNSID=&method=flickr.photos.search&csrf=&api_key=3e5918155f464baad83cce2efcf8b57e&format=json&hermes=1&hermesClient=1&reqId=rgb38n1&nojsoncallback=1)\n\nIn all parameters there is an api_key : I don't know how long it is valid, in which condition.\nThe call to this URL is trigger inside another minified javascript.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\n Flickr (Images)\n\n @website https://www.flickr.com\n @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)\n\n @using-api no\n @results HTML\n @stable no\n @parse url, title, thumbnail, img_src\n\"\"\"\n\nfrom urllib import urlencode\nfrom json import loads\nimport re\nfrom searx.engines import logger\n\n\nlogger = logger.getChild('flickr-noapi')\n\ncategories = ['images']\n\nurl = 'https://www.flickr.com/'\nsearch_url = url + 'search?{query}&page={page}'\nphoto_url = 'https://www.flickr.com/photos/{userid}/{photoid}'\nregex = re.compile(r\"\\\"search-photos-models\\\",\\\"photos\\\":(.*}),\\\"totalItems\\\":\", re.DOTALL)\nimage_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')\n\npaging = True\n\n\ndef build_flickr_url(user_id, photo_id):\n return photo_url.format(userid=user_id, photoid=photo_id)\n\n\ndef request(query, params):\n params['url'] = 
search_url.format(query=urlencode({'text': query}),\n page=params['pageno'])\n return params\n\n\ndef response(resp):\n results = []\n\n matches = regex.search(resp.text)\n\n if matches is None:\n return results\n\n match = matches.group(1)\n search_results = loads(match)\n\n if '_data' not in search_results:\n return []\n\n photos = search_results['_data']\n\n for photo in photos:\n\n # In paged configuration, the first pages' photos\n # are represented by a None object\n if photo is None:\n continue\n\n img_src = None\n # From the biggest to the lowest format\n for image_size in image_sizes:\n if image_size in photo['sizes']:\n img_src = photo['sizes'][image_size]['url']\n break\n\n if not img_src:\n logger.debug('cannot find valid image size: {0}'.format(repr(photo)))\n continue\n\n if 'id' not in photo['owner']:\n continue\n\n# For a bigger thumbnail, keep only the url_z, not the url_n\n if 'n' in photo['sizes']:\n thumbnail_src = photo['sizes']['n']['url']\n elif 'z' in photo['sizes']:\n thumbnail_src = photo['sizes']['z']['url']\n else:\n thumbnail_src = img_src\n\n url = build_flickr_url(photo['owner']['id'], photo['id'])\n\n title = photo.get('title', '')\n\n content = '<span class=\"photo-author\">' +\\\n photo['owner']['username'] +\\\n '</span><br />'\n\n if 'description' in photo:\n content = content +\\\n '<span class=\"description\">' +\\\n photo['description'] +\\\n '</span>'\n\n # append result\n results.append({'url': url,\n 'title': title,\n 'img_src': img_src,\n 'thumbnail_src': thumbnail_src,\n 'content': content,\n 'template': 'images.html'})\n\n return results\n", "path": "searx/engines/flickr_noapi.py"}]}
| 1,765 | 557 |
gh_patches_debug_2152
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-552
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
setter for null
Currently null is not settable on a channel
It can be worked around with `channel.attrs['null']`
</issue>
<code>
[start of WrightTools/data/_channel.py]
1 """Channel class and associated."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import numpy as np
8
9 import h5py
10
11 from .. import kit as wt_kit
12 from .._dataset import Dataset
13
14
15 # --- class ---------------------------------------------------------------------------------------
16
17
18 class Channel(Dataset):
19 """Channel."""
20
21 class_name = 'Channel'
22
23 def __init__(self, parent, id, *, units=None, null=None, signed=None, label=None,
24 label_seed=None, **kwargs):
25 """Construct a channel object.
26
27 Parameters
28 ----------
29 values : array-like
30 Values.
31 name : string
32 Channel name.
33 units : string (optional)
34 Channel units. Default is None.
35 null : number (optional)
36 Channel null. Default is None (0).
37 signed : booelan (optional)
38 Channel signed flag. Default is None (guess).
39 label : string.
40 Label. Default is None.
41 label_seed : list of strings
42 Label seed. Default is None.
43 **kwargs
44 Additional keyword arguments are added to the attrs dictionary
45 and to the natural namespace of the object (if possible).
46 """
47 self._parent = parent
48 super().__init__(id)
49 self.label = label
50 self.label_seed = label_seed
51 self.units = units
52 self.dimensionality = len(self.shape)
53 # attrs
54 self.attrs.update(kwargs)
55 self.attrs['name'] = h5py.h5i.get_name(self.id).decode().split('/')[-1]
56 self.attrs['class'] = 'Channel'
57 if signed is not None:
58 self.attrs['signed'] = signed
59 if null is not None:
60 self.attrs['null'] = null
61 for key, value in self.attrs.items():
62 identifier = wt_kit.string2identifier(key)
63 if not hasattr(self, identifier):
64 setattr(self, identifier, value)
65
66 @property
67 def minor_extent(self):
68 """Minimum deviation from null."""
69 return min((self.max() - self.null, self.null - self.min()))
70
71 @property
72 def natural_name(self):
73 """Natural name of the dataset. May be different from name."""
74 try:
75 assert self._natural_name is not None
76 except (AssertionError, AttributeError):
77 self._natural_name = self.attrs['name']
78 finally:
79 return self._natural_name
80
81 @natural_name.setter
82 def natural_name(self, value):
83 index = wt_kit.get_index(self.parent.channel_names, self.natural_name)
84 new = list(self.parent.channel_names)
85 new[index] = value
86 self.parent.channel_names = new
87 self.attrs['name'] = value
88 self._natural_name = None
89
90 @property
91 def null(self):
92 if 'null' not in self.attrs.keys():
93 self.attrs['null'] = 0
94 return self.attrs['null']
95
96 @property
97 def major_extent(self):
98 """Maximum deviation from null."""
99 return max((self.max() - self.null, self.null - self.min()))
100
101 @property
102 def signed(self):
103 if 'signed' not in self.attrs.keys():
104 self.attrs['signed'] = False
105 return self.attrs['signed']
106
107 @signed.setter
108 def signed(self, value):
109 self.attrs['signed'] = value
110
111 def mag(self):
112 """Channel magnitude (maximum deviation from null)."""
113 return self.major_extent
114
115 def normalize(self):
116 """Normalize a Channel, set `null` to 0 and the mag to 1."""
117 def f(dataset, s, null, mag):
118 dataset[s] -= null
119 dataset[s] /= mag
120 if self.signed:
121 mag = self.mag()
122 else:
123 mag = self.max()
124 self.chunkwise(f, null=self.null, mag=mag)
125 self._null = 0
126
127 def trim(self, neighborhood, method='ztest', factor=3, replace='nan',
128 verbose=True):
129 """Remove outliers from the dataset.
130
131 Identifies outliers by comparing each point to its
132 neighbors using a statistical test.
133
134 Parameters
135 ----------
136 neighborhood : list of integers
137 Size of the neighborhood in each dimension. Length of the list must
138 be equal to the dimensionality of the channel.
139 method : {'ztest'} (optional)
140 Statistical test used to detect outliers. Default is ztest.
141
142 ztest
143 Compare point deviation from neighborhood mean to neighborhood
144 standard deviation.
145
146 factor : number (optional)
147 Tolerance factor. Default is 3.
148 replace : {'nan', 'mean', 'mask', number} (optional)
149 Behavior of outlier replacement. Default is nan.
150
151 nan
152 Outliers are replaced by numpy nans.
153
154 mean
155 Outliers are replaced by the mean of its neighborhood.
156
157 mask
158 Array is masked at outliers.
159
160 number
161 Array becomes given number.
162
163 Returns
164 -------
165 list of tuples
166 Indicies of trimmed outliers.
167
168 See Also
169 --------
170 clip
171 Remove pixels outside of a certain range.
172 """
173 raise NotImplementedError
174 outliers = []
175 means = []
176 # find outliers
177 for idx in np.ndindex(self.shape):
178 slices = []
179 for i, di, size in zip(idx, neighborhood, self.shape):
180 start = max(0, i - di)
181 stop = min(size, i + di + 1)
182 slices.append(slice(start, stop, 1))
183 neighbors = self[slices]
184 mean = np.nanmean(neighbors)
185 limit = np.nanstd(neighbors) * factor
186 if np.abs(self[idx] - mean) > limit:
187 outliers.append(idx)
188 means.append(mean)
189 # replace outliers
190 i = tuple(zip(*outliers))
191 if replace == 'nan':
192 self[i] = np.nan
193 elif replace == 'mean':
194 self[i] = means
195 elif replace == 'mask':
196 self[:] = np.ma.array(self[:])
197 self[i] = np.ma.masked
198 elif type(replace) in [int, float]:
199 self[i] = replace
200 else:
201 raise KeyError('replace must be one of {nan, mean, mask} or some number')
202 # finish
203 if verbose:
204 print('%i outliers removed' % len(outliers))
205 return outliers
206
[end of WrightTools/data/_channel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/WrightTools/data/_channel.py b/WrightTools/data/_channel.py
--- a/WrightTools/data/_channel.py
+++ b/WrightTools/data/_channel.py
@@ -93,6 +93,10 @@
self.attrs['null'] = 0
return self.attrs['null']
+ @null.setter
+ def null(self, value):
+ self.attrs['null'] = value
+
@property
def major_extent(self):
"""Maximum deviation from null."""
|
{"golden_diff": "diff --git a/WrightTools/data/_channel.py b/WrightTools/data/_channel.py\n--- a/WrightTools/data/_channel.py\n+++ b/WrightTools/data/_channel.py\n@@ -93,6 +93,10 @@\n self.attrs['null'] = 0\n return self.attrs['null']\n \n+ @null.setter\n+ def null(self, value):\n+ self.attrs['null'] = value\n+\n @property\n def major_extent(self):\n \"\"\"Maximum deviation from null.\"\"\"\n", "issue": "setter for null\nCurrently null is not settable on a channel\r\n\r\nIt can be worked around with `channel.attrs['null']`\n", "before_files": [{"content": "\"\"\"Channel class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport numpy as np\n\nimport h5py\n\nfrom .. import kit as wt_kit\nfrom .._dataset import Dataset\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Channel(Dataset):\n \"\"\"Channel.\"\"\"\n\n class_name = 'Channel'\n\n def __init__(self, parent, id, *, units=None, null=None, signed=None, label=None,\n label_seed=None, **kwargs):\n \"\"\"Construct a channel object.\n\n Parameters\n ----------\n values : array-like\n Values.\n name : string\n Channel name.\n units : string (optional)\n Channel units. Default is None.\n null : number (optional)\n Channel null. Default is None (0).\n signed : booelan (optional)\n Channel signed flag. Default is None (guess).\n label : string.\n Label. Default is None.\n label_seed : list of strings\n Label seed. Default is None.\n **kwargs\n Additional keyword arguments are added to the attrs dictionary\n and to the natural namespace of the object (if possible).\n \"\"\"\n self._parent = parent\n super().__init__(id)\n self.label = label\n self.label_seed = label_seed\n self.units = units\n self.dimensionality = len(self.shape)\n # attrs\n self.attrs.update(kwargs)\n self.attrs['name'] = h5py.h5i.get_name(self.id).decode().split('/')[-1]\n self.attrs['class'] = 'Channel'\n if signed is not None:\n self.attrs['signed'] = signed\n if null is not None:\n self.attrs['null'] = null\n for key, value in self.attrs.items():\n identifier = wt_kit.string2identifier(key)\n if not hasattr(self, identifier):\n setattr(self, identifier, value)\n\n @property\n def minor_extent(self):\n \"\"\"Minimum deviation from null.\"\"\"\n return min((self.max() - self.null, self.null - self.min()))\n\n @property\n def natural_name(self):\n \"\"\"Natural name of the dataset. 
May be different from name.\"\"\"\n try:\n assert self._natural_name is not None\n except (AssertionError, AttributeError):\n self._natural_name = self.attrs['name']\n finally:\n return self._natural_name\n\n @natural_name.setter\n def natural_name(self, value):\n index = wt_kit.get_index(self.parent.channel_names, self.natural_name)\n new = list(self.parent.channel_names)\n new[index] = value\n self.parent.channel_names = new\n self.attrs['name'] = value\n self._natural_name = None\n\n @property\n def null(self):\n if 'null' not in self.attrs.keys():\n self.attrs['null'] = 0\n return self.attrs['null']\n\n @property\n def major_extent(self):\n \"\"\"Maximum deviation from null.\"\"\"\n return max((self.max() - self.null, self.null - self.min()))\n\n @property\n def signed(self):\n if 'signed' not in self.attrs.keys():\n self.attrs['signed'] = False\n return self.attrs['signed']\n\n @signed.setter\n def signed(self, value):\n self.attrs['signed'] = value\n\n def mag(self):\n \"\"\"Channel magnitude (maximum deviation from null).\"\"\"\n return self.major_extent\n\n def normalize(self):\n \"\"\"Normalize a Channel, set `null` to 0 and the mag to 1.\"\"\"\n def f(dataset, s, null, mag):\n dataset[s] -= null\n dataset[s] /= mag\n if self.signed:\n mag = self.mag()\n else:\n mag = self.max()\n self.chunkwise(f, null=self.null, mag=mag)\n self._null = 0\n\n def trim(self, neighborhood, method='ztest', factor=3, replace='nan',\n verbose=True):\n \"\"\"Remove outliers from the dataset.\n\n Identifies outliers by comparing each point to its\n neighbors using a statistical test.\n\n Parameters\n ----------\n neighborhood : list of integers\n Size of the neighborhood in each dimension. Length of the list must\n be equal to the dimensionality of the channel.\n method : {'ztest'} (optional)\n Statistical test used to detect outliers. Default is ztest.\n\n ztest\n Compare point deviation from neighborhood mean to neighborhood\n standard deviation.\n\n factor : number (optional)\n Tolerance factor. Default is 3.\n replace : {'nan', 'mean', 'mask', number} (optional)\n Behavior of outlier replacement. Default is nan.\n\n nan\n Outliers are replaced by numpy nans.\n\n mean\n Outliers are replaced by the mean of its neighborhood.\n\n mask\n Array is masked at outliers.\n\n number\n Array becomes given number.\n\n Returns\n -------\n list of tuples\n Indicies of trimmed outliers.\n\n See Also\n --------\n clip\n Remove pixels outside of a certain range.\n \"\"\"\n raise NotImplementedError\n outliers = []\n means = []\n # find outliers\n for idx in np.ndindex(self.shape):\n slices = []\n for i, di, size in zip(idx, neighborhood, self.shape):\n start = max(0, i - di)\n stop = min(size, i + di + 1)\n slices.append(slice(start, stop, 1))\n neighbors = self[slices]\n mean = np.nanmean(neighbors)\n limit = np.nanstd(neighbors) * factor\n if np.abs(self[idx] - mean) > limit:\n outliers.append(idx)\n means.append(mean)\n # replace outliers\n i = tuple(zip(*outliers))\n if replace == 'nan':\n self[i] = np.nan\n elif replace == 'mean':\n self[i] = means\n elif replace == 'mask':\n self[:] = np.ma.array(self[:])\n self[i] = np.ma.masked\n elif type(replace) in [int, float]:\n self[i] = replace\n else:\n raise KeyError('replace must be one of {nan, mean, mask} or some number')\n # finish\n if verbose:\n print('%i outliers removed' % len(outliers))\n return outliers\n", "path": "WrightTools/data/_channel.py"}]}
| 2,435 | 113 |