| column | dtype | values |
| --- | --- | --- |
| problem_id | stringlengths | 18–22 |
| source | stringclasses | 1 value |
| task_type | stringclasses | 1 value |
| in_source_id | stringlengths | 13–58 |
| prompt | stringlengths | 1.71k–18.9k |
| golden_diff | stringlengths | 145–5.13k |
| verification_info | stringlengths | 465–23.6k |
| num_tokens_prompt | int64 | 556–4.1k |
| num_tokens_diff | int64 | 47–1.02k |
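The records below carry the `rasdani/github-patches` id in their `source` column. A minimal sketch of loading a row and parsing its `verification_info` field follows, assuming that id resolves on the Hugging Face Hub and that a `train` split exists (both are assumptions not stated in this dump):

```python
# Minimal sketch: load one record and parse its JSON-encoded verification_info.
# Assumptions: the dataset id "rasdani/github-patches" resolves on the Hub and
# a "train" split exists; adjust both if they differ.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]

print(row["problem_id"], row["in_source_id"])
print(row["num_tokens_prompt"], row["num_tokens_diff"])

# verification_info is stored as a JSON string holding the issue text,
# the golden diff, and the pre-patch file contents ("before_files").
info = json.loads(row["verification_info"])
print(sorted(info.keys()))
for f in info["before_files"]:
    print(f["path"], len(f["content"]))
```

In the rows shown here, the `golden_diff` column and the `golden_diff` key inside `verification_info` carry the same patch text, so either can serve as the reference diff.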
problem_id: gh_patches_debug_478
source: rasdani/github-patches
task_type: git_diff
in_source_id: fossasia__open-event-server-7579
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test_multiple_heads is not raising the expected error
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
On having multiple heads, the travis build fails with error -
```
error: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1
warn: Hook handling timed out.
error: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1
info: Backend server process exited
The command "dredd" failed and exited with 1 during .
```
It should raise error as expected in - https://github.com/fossasia/open-event-server/blob/development/scripts/test_multiple_heads.sh
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Expected error should be raised - `Error: Multiple Migration Heads`
<!-- If applicable, add stacktrace to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
On it
</issue>
<code>
[start of migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py]
1 """empty message
2
3 Revision ID: 3b29ea38f0cb
4 Revises: 2d0760003a8a
5 Create Date: 2021-01-07 05:19:49.749923
6
7 """
8
9 from alembic import op
10 import sqlalchemy as sa
11 import sqlalchemy_utils
12
13
14 # revision identifiers, used by Alembic.
15 revision = '3b29ea38f0cb'
16 down_revision = '2d0760003a8a'
17
18
19 def upgrade():
20 # ### commands auto generated by Alembic - please adjust! ###
21 op.add_column('speaker', sa.Column('rank', sa.Integer(), nullable=False, server_default='0'))
22 # ### end Alembic commands ###
23
24
25 def downgrade():
26 # ### commands auto generated by Alembic - please adjust! ###
27 op.drop_column('speaker', 'rank')
28 # ### end Alembic commands ###
29
[end of migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py
--- a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py
+++ b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py
@@ -13,7 +13,7 @@
# revision identifiers, used by Alembic.
revision = '3b29ea38f0cb'
-down_revision = '2d0760003a8a'
+down_revision = '4e61d4df3516'
def upgrade():
verification_info:
{"golden_diff": "diff --git a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py\n--- a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py\n+++ b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py\n@@ -13,7 +13,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '3b29ea38f0cb'\n-down_revision = '2d0760003a8a'\n+down_revision = '4e61d4df3516'\n \n \n def upgrade():\n", "issue": "test_multiple_heads is not raising the expected error\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\nOn having multiple heads, the travis build fails with error -\r\n```\r\nerror: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1\r\nwarn: Hook handling timed out.\r\nerror: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1\r\ninfo: Backend server process exited\r\nThe command \"dredd\" failed and exited with 1 during .\r\n```\r\nIt should raise error as expected in - https://github.com/fossasia/open-event-server/blob/development/scripts/test_multiple_heads.sh\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nExpected error should be raised - `Error: Multiple Migration Heads`\r\n\r\n<!-- If applicable, add stacktrace to help explain your problem. -->\r\n\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\nOn it\n", "before_files": [{"content": "\"\"\"empty message\n\nRevision ID: 3b29ea38f0cb\nRevises: 2d0760003a8a\nCreate Date: 2021-01-07 05:19:49.749923\n\n\"\"\"\n\nfrom alembic import op\nimport sqlalchemy as sa\nimport sqlalchemy_utils\n\n\n# revision identifiers, used by Alembic.\nrevision = '3b29ea38f0cb'\ndown_revision = '2d0760003a8a'\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.add_column('speaker', sa.Column('rank', sa.Integer(), nullable=False, server_default='0'))\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.drop_column('speaker', 'rank')\n # ### end Alembic commands ###\n", "path": "migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py"}]}
num_tokens_prompt: 1,086
num_tokens_diff: 243

problem_id: gh_patches_debug_2213
source: rasdani/github-patches
task_type: git_diff
in_source_id: goauthentik__authentik-5630
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SMS Authenticator Setup Stage with generic provider does not work without mapping
**Describe the bug**
Setting up the stage and flow works but trying to set up an mfa device results in an error after entering the phone number
**To Reproduce**
Create a SMS Authenticator Setup Stage with a generic provider and without mapping
**Expected behavior**
sms should be send
**Version and Deployment (please complete the following information):**
- current main
</issue>
<code>
[start of authentik/stages/authenticator_sms/models.py]
1 """SMS Authenticator models"""
2 from hashlib import sha256
3 from typing import Optional
4
5 from django.contrib.auth import get_user_model
6 from django.db import models
7 from django.utils.translation import gettext_lazy as _
8 from django.views import View
9 from django_otp.models import SideChannelDevice
10 from requests.exceptions import RequestException
11 from rest_framework.exceptions import ValidationError
12 from rest_framework.serializers import BaseSerializer
13 from structlog.stdlib import get_logger
14 from twilio.base.exceptions import TwilioRestException
15 from twilio.rest import Client
16
17 from authentik.core.types import UserSettingSerializer
18 from authentik.events.models import Event, EventAction, NotificationWebhookMapping
19 from authentik.events.utils import sanitize_item
20 from authentik.flows.models import ConfigurableStage, FriendlyNamedStage, Stage
21 from authentik.lib.models import SerializerModel
22 from authentik.lib.utils.errors import exception_to_string
23 from authentik.lib.utils.http import get_http_session
24
25 LOGGER = get_logger()
26
27
28 class SMSProviders(models.TextChoices):
29 """Supported SMS Providers"""
30
31 TWILIO = "twilio"
32 GENERIC = "generic"
33
34
35 class SMSAuthTypes(models.TextChoices):
36 """Supported SMS Auth Types"""
37
38 BASIC = "basic"
39 BEARER = "bearer"
40
41
42 class AuthenticatorSMSStage(ConfigurableStage, FriendlyNamedStage, Stage):
43 """Use SMS-based TOTP instead of authenticator-based."""
44
45 provider = models.TextField(choices=SMSProviders.choices)
46
47 from_number = models.TextField()
48
49 account_sid = models.TextField()
50 auth = models.TextField()
51 auth_password = models.TextField(default="", blank=True)
52 auth_type = models.TextField(choices=SMSAuthTypes.choices, default=SMSAuthTypes.BASIC)
53
54 verify_only = models.BooleanField(
55 default=False,
56 help_text=_(
57 "When enabled, the Phone number is only used during enrollment to verify the "
58 "users authenticity. Only a hash of the phone number is saved to ensure it is "
59 "not re-used in the future."
60 ),
61 )
62
63 mapping = models.ForeignKey(
64 NotificationWebhookMapping,
65 null=True,
66 default=None,
67 on_delete=models.SET_NULL,
68 help_text=_("Optionally modify the payload being sent to custom providers."),
69 )
70
71 def send(self, token: str, device: "SMSDevice"):
72 """Send message via selected provider"""
73 if self.provider == SMSProviders.TWILIO:
74 return self.send_twilio(token, device)
75 if self.provider == SMSProviders.GENERIC:
76 return self.send_generic(token, device)
77 raise ValueError(f"invalid provider {self.provider}")
78
79 def get_message(self, token: str) -> str:
80 """Get SMS message"""
81 return _("Use this code to authenticate in authentik: %(token)s" % {"token": token})
82
83 def send_twilio(self, token: str, device: "SMSDevice"):
84 """send sms via twilio provider"""
85 client = Client(self.account_sid, self.auth)
86
87 try:
88 message = client.messages.create(
89 to=device.phone_number, from_=self.from_number, body=str(self.get_message(token))
90 )
91 LOGGER.debug("Sent SMS", to=device, message=message.sid)
92 except TwilioRestException as exc:
93 LOGGER.warning("Error sending token by Twilio SMS", exc=exc, msg=exc.msg)
94 raise ValidationError(exc.msg)
95
96 def send_generic(self, token: str, device: "SMSDevice"):
97 """Send SMS via outside API"""
98 payload = {
99 "From": self.from_number,
100 "To": device.phone_number,
101 "Body": token,
102 "Message": self.get_message(token),
103 }
104
105 if self.mapping:
106 payload = sanitize_item(
107 self.mapping.evaluate(
108 user=device.user,
109 request=None,
110 device=device,
111 token=token,
112 stage=self,
113 )
114 )
115
116 if self.auth_type == SMSAuthTypes.BEARER:
117 response = get_http_session().post(
118 self.account_sid,
119 json=payload,
120 headers={"Authorization": f"Bearer {self.auth}"},
121 )
122 elif self.auth_type == SMSAuthTypes.BASIC:
123 response = get_http_session().post(
124 self.account_sid,
125 json=payload,
126 auth=(self.auth, self.auth_password),
127 )
128 else:
129 raise ValueError(f"Invalid Auth type '{self.auth_type}'")
130
131 LOGGER.debug("Sent SMS", to=device.phone_number)
132 try:
133 response.raise_for_status()
134 except RequestException as exc:
135 LOGGER.warning(
136 "Error sending token by generic SMS",
137 exc=exc,
138 status=response.status_code,
139 body=response.text[:100],
140 )
141 Event.new(
142 EventAction.CONFIGURATION_ERROR,
143 message="Error sending SMS",
144 exc=exception_to_string(exc),
145 status_code=response.status_code,
146 body=response.text,
147 ).set_user(device.user).save()
148 if response.status_code >= 400:
149 raise ValidationError(response.text)
150 raise
151
152 @property
153 def serializer(self) -> type[BaseSerializer]:
154 from authentik.stages.authenticator_sms.api import AuthenticatorSMSStageSerializer
155
156 return AuthenticatorSMSStageSerializer
157
158 @property
159 def type(self) -> type[View]:
160 from authentik.stages.authenticator_sms.stage import AuthenticatorSMSStageView
161
162 return AuthenticatorSMSStageView
163
164 @property
165 def component(self) -> str:
166 return "ak-stage-authenticator-sms-form"
167
168 def ui_user_settings(self) -> Optional[UserSettingSerializer]:
169 return UserSettingSerializer(
170 data={
171 "title": self.friendly_name or str(self._meta.verbose_name),
172 "component": "ak-user-settings-authenticator-sms",
173 }
174 )
175
176 def __str__(self) -> str:
177 return f"SMS Authenticator Setup Stage {self.name}"
178
179 class Meta:
180 verbose_name = _("SMS Authenticator Setup Stage")
181 verbose_name_plural = _("SMS Authenticator Setup Stages")
182
183
184 def hash_phone_number(phone_number: str) -> str:
185 """Hash phone number with prefix"""
186 return "hash:" + sha256(phone_number.encode()).hexdigest()
187
188
189 class SMSDevice(SerializerModel, SideChannelDevice):
190 """SMS Device"""
191
192 user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)
193
194 # Connect to the stage to when validating access we know the API Credentials
195 stage = models.ForeignKey(AuthenticatorSMSStage, on_delete=models.CASCADE)
196
197 phone_number = models.TextField()
198
199 last_t = models.DateTimeField(auto_now=True)
200
201 def set_hashed_number(self):
202 """Set phone_number to hashed number"""
203 self.phone_number = hash_phone_number(self.phone_number)
204
205 @property
206 def is_hashed(self) -> bool:
207 """Check if the phone number is hashed"""
208 return self.phone_number.startswith("hash:")
209
210 @property
211 def serializer(self) -> type[BaseSerializer]:
212 from authentik.stages.authenticator_sms.api import SMSDeviceSerializer
213
214 return SMSDeviceSerializer
215
216 def verify_token(self, token):
217 valid = super().verify_token(token)
218 if valid:
219 self.save()
220 return valid
221
222 def __str__(self):
223 return str(self.name) or str(self.user)
224
225 class Meta:
226 verbose_name = _("SMS Device")
227 verbose_name_plural = _("SMS Devices")
228 unique_together = (("stage", "phone_number"),)
229
[end of authentik/stages/authenticator_sms/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/authentik/stages/authenticator_sms/models.py b/authentik/stages/authenticator_sms/models.py
--- a/authentik/stages/authenticator_sms/models.py
+++ b/authentik/stages/authenticator_sms/models.py
@@ -99,7 +99,7 @@
"From": self.from_number,
"To": device.phone_number,
"Body": token,
- "Message": self.get_message(token),
+ "Message": str(self.get_message(token)),
}
if self.mapping:
verification_info:
{"golden_diff": "diff --git a/authentik/stages/authenticator_sms/models.py b/authentik/stages/authenticator_sms/models.py\n--- a/authentik/stages/authenticator_sms/models.py\n+++ b/authentik/stages/authenticator_sms/models.py\n@@ -99,7 +99,7 @@\n \"From\": self.from_number,\n \"To\": device.phone_number,\n \"Body\": token,\n- \"Message\": self.get_message(token),\n+ \"Message\": str(self.get_message(token)),\n }\n \n if self.mapping:\n", "issue": "SMS Authenticator Setup Stage with generic provider does not work without mapping \n**Describe the bug**\r\nSetting up the stage and flow works but trying to set up an mfa device results in an error after entering the phone number\r\n\r\n**To Reproduce**\r\nCreate a SMS Authenticator Setup Stage with a generic provider and without mapping\r\n\r\n**Expected behavior**\r\nsms should be send\r\n\r\n**Version and Deployment (please complete the following information):**\r\n- current main\r\n\n", "before_files": [{"content": "\"\"\"SMS Authenticator models\"\"\"\nfrom hashlib import sha256\nfrom typing import Optional\n\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\nfrom django_otp.models import SideChannelDevice\nfrom requests.exceptions import RequestException\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.serializers import BaseSerializer\nfrom structlog.stdlib import get_logger\nfrom twilio.base.exceptions import TwilioRestException\nfrom twilio.rest import Client\n\nfrom authentik.core.types import UserSettingSerializer\nfrom authentik.events.models import Event, EventAction, NotificationWebhookMapping\nfrom authentik.events.utils import sanitize_item\nfrom authentik.flows.models import ConfigurableStage, FriendlyNamedStage, Stage\nfrom authentik.lib.models import SerializerModel\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.lib.utils.http import get_http_session\n\nLOGGER = get_logger()\n\n\nclass SMSProviders(models.TextChoices):\n \"\"\"Supported SMS Providers\"\"\"\n\n TWILIO = \"twilio\"\n GENERIC = \"generic\"\n\n\nclass SMSAuthTypes(models.TextChoices):\n \"\"\"Supported SMS Auth Types\"\"\"\n\n BASIC = \"basic\"\n BEARER = \"bearer\"\n\n\nclass AuthenticatorSMSStage(ConfigurableStage, FriendlyNamedStage, Stage):\n \"\"\"Use SMS-based TOTP instead of authenticator-based.\"\"\"\n\n provider = models.TextField(choices=SMSProviders.choices)\n\n from_number = models.TextField()\n\n account_sid = models.TextField()\n auth = models.TextField()\n auth_password = models.TextField(default=\"\", blank=True)\n auth_type = models.TextField(choices=SMSAuthTypes.choices, default=SMSAuthTypes.BASIC)\n\n verify_only = models.BooleanField(\n default=False,\n help_text=_(\n \"When enabled, the Phone number is only used during enrollment to verify the \"\n \"users authenticity. 
Only a hash of the phone number is saved to ensure it is \"\n \"not re-used in the future.\"\n ),\n )\n\n mapping = models.ForeignKey(\n NotificationWebhookMapping,\n null=True,\n default=None,\n on_delete=models.SET_NULL,\n help_text=_(\"Optionally modify the payload being sent to custom providers.\"),\n )\n\n def send(self, token: str, device: \"SMSDevice\"):\n \"\"\"Send message via selected provider\"\"\"\n if self.provider == SMSProviders.TWILIO:\n return self.send_twilio(token, device)\n if self.provider == SMSProviders.GENERIC:\n return self.send_generic(token, device)\n raise ValueError(f\"invalid provider {self.provider}\")\n\n def get_message(self, token: str) -> str:\n \"\"\"Get SMS message\"\"\"\n return _(\"Use this code to authenticate in authentik: %(token)s\" % {\"token\": token})\n\n def send_twilio(self, token: str, device: \"SMSDevice\"):\n \"\"\"send sms via twilio provider\"\"\"\n client = Client(self.account_sid, self.auth)\n\n try:\n message = client.messages.create(\n to=device.phone_number, from_=self.from_number, body=str(self.get_message(token))\n )\n LOGGER.debug(\"Sent SMS\", to=device, message=message.sid)\n except TwilioRestException as exc:\n LOGGER.warning(\"Error sending token by Twilio SMS\", exc=exc, msg=exc.msg)\n raise ValidationError(exc.msg)\n\n def send_generic(self, token: str, device: \"SMSDevice\"):\n \"\"\"Send SMS via outside API\"\"\"\n payload = {\n \"From\": self.from_number,\n \"To\": device.phone_number,\n \"Body\": token,\n \"Message\": self.get_message(token),\n }\n\n if self.mapping:\n payload = sanitize_item(\n self.mapping.evaluate(\n user=device.user,\n request=None,\n device=device,\n token=token,\n stage=self,\n )\n )\n\n if self.auth_type == SMSAuthTypes.BEARER:\n response = get_http_session().post(\n self.account_sid,\n json=payload,\n headers={\"Authorization\": f\"Bearer {self.auth}\"},\n )\n elif self.auth_type == SMSAuthTypes.BASIC:\n response = get_http_session().post(\n self.account_sid,\n json=payload,\n auth=(self.auth, self.auth_password),\n )\n else:\n raise ValueError(f\"Invalid Auth type '{self.auth_type}'\")\n\n LOGGER.debug(\"Sent SMS\", to=device.phone_number)\n try:\n response.raise_for_status()\n except RequestException as exc:\n LOGGER.warning(\n \"Error sending token by generic SMS\",\n exc=exc,\n status=response.status_code,\n body=response.text[:100],\n )\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=\"Error sending SMS\",\n exc=exception_to_string(exc),\n status_code=response.status_code,\n body=response.text,\n ).set_user(device.user).save()\n if response.status_code >= 400:\n raise ValidationError(response.text)\n raise\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n from authentik.stages.authenticator_sms.api import AuthenticatorSMSStageSerializer\n\n return AuthenticatorSMSStageSerializer\n\n @property\n def type(self) -> type[View]:\n from authentik.stages.authenticator_sms.stage import AuthenticatorSMSStageView\n\n return AuthenticatorSMSStageView\n\n @property\n def component(self) -> str:\n return \"ak-stage-authenticator-sms-form\"\n\n def ui_user_settings(self) -> Optional[UserSettingSerializer]:\n return UserSettingSerializer(\n data={\n \"title\": self.friendly_name or str(self._meta.verbose_name),\n \"component\": \"ak-user-settings-authenticator-sms\",\n }\n )\n\n def __str__(self) -> str:\n return f\"SMS Authenticator Setup Stage {self.name}\"\n\n class Meta:\n verbose_name = _(\"SMS Authenticator Setup Stage\")\n verbose_name_plural = _(\"SMS Authenticator Setup 
Stages\")\n\n\ndef hash_phone_number(phone_number: str) -> str:\n \"\"\"Hash phone number with prefix\"\"\"\n return \"hash:\" + sha256(phone_number.encode()).hexdigest()\n\n\nclass SMSDevice(SerializerModel, SideChannelDevice):\n \"\"\"SMS Device\"\"\"\n\n user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)\n\n # Connect to the stage to when validating access we know the API Credentials\n stage = models.ForeignKey(AuthenticatorSMSStage, on_delete=models.CASCADE)\n\n phone_number = models.TextField()\n\n last_t = models.DateTimeField(auto_now=True)\n\n def set_hashed_number(self):\n \"\"\"Set phone_number to hashed number\"\"\"\n self.phone_number = hash_phone_number(self.phone_number)\n\n @property\n def is_hashed(self) -> bool:\n \"\"\"Check if the phone number is hashed\"\"\"\n return self.phone_number.startswith(\"hash:\")\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n from authentik.stages.authenticator_sms.api import SMSDeviceSerializer\n\n return SMSDeviceSerializer\n\n def verify_token(self, token):\n valid = super().verify_token(token)\n if valid:\n self.save()\n return valid\n\n def __str__(self):\n return str(self.name) or str(self.user)\n\n class Meta:\n verbose_name = _(\"SMS Device\")\n verbose_name_plural = _(\"SMS Devices\")\n unique_together = ((\"stage\", \"phone_number\"),)\n", "path": "authentik/stages/authenticator_sms/models.py"}]}
num_tokens_prompt: 2,804
num_tokens_diff: 115

problem_id: gh_patches_debug_29664
source: rasdani/github-patches
task_type: git_diff
in_source_id: litestar-org__litestar-1542
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
</issue>
<code>
[start of docs/conf.py]
1 import importlib.metadata
2 import os
3 import re
4 from functools import partial
5 from typing import Any
6
7 from sphinx.addnodes import document
8 from sphinx.application import Sphinx
9
10 __all__ = ["setup", "update_html_context"]
11
12
13 project = "Litestar"
14 copyright = "2023, Litestar-Org"
15 author = "Litestar-Org"
16 release = os.getenv("_LITESTAR_DOCS_BUILD_VERSION", importlib.metadata.version("litestar").rsplit(".")[0])
17
18 extensions = [
19 "sphinx.ext.intersphinx",
20 "sphinx.ext.autosectionlabel",
21 "sphinx.ext.autodoc",
22 "sphinx.ext.napoleon",
23 "sphinx_design",
24 "auto_pytabs.sphinx_ext",
25 "tools.sphinx_ext",
26 "sphinx_copybutton",
27 "sphinxcontrib.mermaid",
28 ]
29
30 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
31
32
33 intersphinx_mapping = {
34 "python": ("https://docs.python.org/3", None),
35 "msgspec": ("https://jcristharif.com/msgspec/", None),
36 "anyio": ("https://anyio.readthedocs.io/en/stable/", None),
37 "multidict": ("https://multidict.aio-libs.org/en/stable/", None),
38 "sqlalchemy": ("https://docs.sqlalchemy.org/en/20/", None),
39 "click": ("https://click.palletsprojects.com/en/8.1.x/", None),
40 "redis": ("https://redis-py.readthedocs.io/en/stable/", None),
41 "picologging": ("https://microsoft.github.io/picologging", None),
42 "structlog": ("https://www.structlog.org/en/stable/", None),
43 "tortoise": ("https://tortoise.github.io/", None),
44 "piccolo": ("https://piccolo-orm.readthedocs.io/en/latest", None),
45 "opentelemetry": ("https://opentelemetry-python.readthedocs.io/en/latest/", None),
46 }
47
48
49 napoleon_google_docstring = True
50 napoleon_include_special_with_doc = True
51 napoleon_use_admonition_for_examples = True
52 napoleon_use_admonition_for_notes = True
53 napoleon_use_admonition_for_references = False
54 napoleon_attr_annotations = True
55
56 autoclass_content = "class"
57 autodoc_class_signature = "separated"
58 autodoc_default_options = {"special-members": "__init__", "show-inheritance": True, "members": True}
59 autodoc_member_order = "bysource"
60 autodoc_typehints_format = "short"
61
62
63 nitpicky = True
64 nitpick_ignore = [
65 # external library / undocumented external
66 ("py:class", "BaseModel"),
67 ("py:class", "pydantic.main.BaseModel"),
68 ("py:class", "pydantic.generics.GenericModel"),
69 ("py:class", "redis.asyncio.Redis"),
70 ("py:class", "sqlalchemy.orm.decl_api.DeclarativeMeta"),
71 ("py:class", "sqlalchemy.sql.sqltypes.TupleType"),
72 ("py:class", "sqlalchemy.dialects.postgresql.named_types.ENUM"),
73 # type vars and aliases / intentionally undocumented
74 ("py:class", "RouteHandlerType"),
75 ("py:obj", "litestar.security.base.AuthType"),
76 ("py:class", "ControllerRouterHandler"),
77 ("py:class", "PathParameterDefinition"),
78 ("py:class", "BaseSessionBackendT"),
79 ("py:class", "AnyIOBackend"),
80 ("py:class", "T"),
81 ("py:class", "C"),
82 ("py:class", "EmptyType"),
83 # intentionally undocumented
84 ("py:class", "NoneType"),
85 ("py:class", "litestar._signature.field.SignatureField"),
86 ("py:class", "litestar.utils.signature.ParsedType"),
87 ("py:class", "litestar.utils.signature.ParsedSignature"),
88 ("py:class", "litestar.utils.signature.ParsedParameter"),
89 ("py:class", "litestar.utils.sync.AsyncCallable"),
90 ]
91 nitpick_ignore_regex = [
92 (r"py:.*", r"litestar\.types.*"),
93 (r"py:.*", r"litestar.*\.T"),
94 (r"py:.*", r".*R_co"),
95 (r"py:.*", r".*UserType"),
96 (r"py:.*", r"litestar\.middleware\.session\.base\.BaseSessionBackendT"),
97 (r"py:obj", r"typing\..*"),
98 (r"py:.*", r"httpx.*"),
99 # type vars
100 ("py:.*", r"litestar\.pagination\.C"),
101 ("py:.*", r"litestar.middleware.session.base.ConfigT"),
102 ("py:.*", r"multidict\..*"),
103 (r"py:.*", r"litestar\.connection\.base\.UserT"),
104 (r"py:.*", r"litestar\.connection\.base\.AuthT"),
105 (r"py:.*", r"litestar\.connection\.base\.StateT"),
106 (r"py:.*", r"litestar\.connection\.base\.HandlerT"),
107 ]
108
109 # Warnings about missing references to those targets in the specified location will be ignored.
110 # The source of the references is taken 1:1 from the warnings as reported by Sphinx, e.g
111 # **/litestar/testing/client/async_client.py:docstring of litestar.testing.AsyncTestClient.exit_stack:1: WARNING: py:class reference target not found: AsyncExitStack
112 # would be added as: "litestar.testing.AsyncTestClient.exit_stack": {"AsyncExitStack"},
113 ignore_missing_refs = {
114 # No idea what autodoc is doing here. Possibly unfixable on our end
115 "litestar.template.base.TemplateEngineProtocol.get_template": {"litestar.template.base.T_co"},
116 "litestar.template": {"litestar.template.base.T_co"},
117 "litestar.openapi.OpenAPIController.security": {"SecurityRequirement"},
118 "litestar.contrib.sqlalchemy_1.plugin.SQLAlchemyPlugin.handle_string_type": {"BINARY", "VARBINARY", "LargeBinary"},
119 "litestar.contrib.sqlalchemy_1.plugin.SQLAlchemyPlugin.is_plugin_supported_type": {"DeclarativeMeta"},
120 re.compile(r"litestar\.plugins.*"): re.compile(".*(ModelT|DataContainerT)"),
121 re.compile(r"litestar\.contrib\.sqlalchemy\.init_plugin\.config.*"): re.compile(
122 ".*(ConnectionT|EngineT|SessionT|SessionMakerT)"
123 ),
124 }
125
126
127 auto_pytabs_min_version = (3, 8)
128 auto_pytabs_max_version = (3, 11)
129 auto_pytabs_compat_mode = True
130
131 autosectionlabel_prefix_document = True
132
133 suppress_warnings = [
134 "autosectionlabel.*",
135 "ref.python", # TODO: remove when https://github.com/sphinx-doc/sphinx/issues/4961 is fixed
136 ]
137
138 html_theme = "litestar_sphinx_theme"
139 html_static_path = ["_static"]
140 html_js_files = ["versioning.js"]
141 html_css_files = ["style.css"]
142 html_show_sourcelink = False
143 html_title = "Litestar Framework"
144
145
146 html_theme_options = {
147 "use_page_nav": False,
148 "github_repo_name": "litestar",
149 "logo": {
150 "link": "https://litestar.dev",
151 },
152 "extra_navbar_items": {
153 "Documentation": "index",
154 "Community": {
155 "Contribution Guide": "contribution-guide",
156 "Code of Conduct": "https://github.com/litestar-org/.github/blob/main/CODE_OF_CONDUCT.md",
157 },
158 "About": {
159 "Organization": "https://litestar.dev/about/organization",
160 "Releases": "https://litestar.dev/about/litestar-releases",
161 },
162 "Release notes": {
163 "2.0 migration guide": "release-notes/migration_guide_2",
164 "2.x Changelog": "https://docs.litestar.dev/2/release-notes/changelog.html",
165 "1.x Changelog": "https://docs.litestar.dev/1/release-notes/changelog.html",
166 },
167 },
168 }
169
170
171 def update_html_context(
172 app: Sphinx, pagename: str, templatename: str, context: dict[str, Any], doctree: document
173 ) -> None:
174 context["generate_toctree_html"] = partial(context["generate_toctree_html"], startdepth=0)
175
176
177 def setup(app: Sphinx) -> dict[str, bool]:
178 app.setup_extension("litestar_sphinx_theme")
179 app.setup_extension("pydata_sphinx_theme")
180 app.connect("html-page-context", update_html_context)
181
182 return {"parallel_read_safe": True, "parallel_write_safe": True}
183
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -152,18 +152,35 @@
"extra_navbar_items": {
"Documentation": "index",
"Community": {
- "Contribution Guide": "contribution-guide",
- "Code of Conduct": "https://github.com/litestar-org/.github/blob/main/CODE_OF_CONDUCT.md",
+ "Contributing": {
+ "description": "Learn how to contribute to the Litestar project",
+ "link": "https://docs.litestar.dev/2/contribution-guide.html",
+ "icon": "contributing",
+ },
+ "Code of Conduct": {
+ "description": "Review the etiquette for interacting with the Litestar community",
+ "link": "https://github.com/litestar-org/.github/blob/main/CODE_OF_CONDUCT.md",
+ "icon": "coc",
+ },
},
"About": {
- "Organization": "https://litestar.dev/about/organization",
- "Releases": "https://litestar.dev/about/litestar-releases",
+ "Litestar Organization": {
+ "description": "Details about the Litestar organization",
+ "link": "about/organization",
+ "icon": "org",
+ },
+ "Releases": {
+ "description": "Details about the Litestar release process",
+ "link": "about/litestar-releases",
+ "icon": "releases",
+ },
},
"Release notes": {
"2.0 migration guide": "release-notes/migration_guide_2",
"2.x Changelog": "https://docs.litestar.dev/2/release-notes/changelog.html",
"1.x Changelog": "https://docs.litestar.dev/1/release-notes/changelog.html",
},
+ "Help": "https://github.com/orgs/litestar-org/discussions",
},
}
verification_info:
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -152,18 +152,35 @@\n \"extra_navbar_items\": {\n \"Documentation\": \"index\",\n \"Community\": {\n- \"Contribution Guide\": \"contribution-guide\",\n- \"Code of Conduct\": \"https://github.com/litestar-org/.github/blob/main/CODE_OF_CONDUCT.md\",\n+ \"Contributing\": {\n+ \"description\": \"Learn how to contribute to the Litestar project\",\n+ \"link\": \"https://docs.litestar.dev/2/contribution-guide.html\",\n+ \"icon\": \"contributing\",\n+ },\n+ \"Code of Conduct\": {\n+ \"description\": \"Review the etiquette for interacting with the Litestar community\",\n+ \"link\": \"https://github.com/litestar-org/.github/blob/main/CODE_OF_CONDUCT.md\",\n+ \"icon\": \"coc\",\n+ },\n },\n \"About\": {\n- \"Organization\": \"https://litestar.dev/about/organization\",\n- \"Releases\": \"https://litestar.dev/about/litestar-releases\",\n+ \"Litestar Organization\": {\n+ \"description\": \"Details about the Litestar organization\",\n+ \"link\": \"about/organization\",\n+ \"icon\": \"org\",\n+ },\n+ \"Releases\": {\n+ \"description\": \"Details about the Litestar release process\",\n+ \"link\": \"about/litestar-releases\",\n+ \"icon\": \"releases\",\n+ },\n },\n \"Release notes\": {\n \"2.0 migration guide\": \"release-notes/migration_guide_2\",\n \"2.x Changelog\": \"https://docs.litestar.dev/2/release-notes/changelog.html\",\n \"1.x Changelog\": \"https://docs.litestar.dev/1/release-notes/changelog.html\",\n },\n+ \"Help\": \"https://github.com/orgs/litestar-org/discussions\",\n },\n }\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "import importlib.metadata\nimport os\nimport re\nfrom functools import partial\nfrom typing import Any\n\nfrom sphinx.addnodes import document\nfrom sphinx.application import Sphinx\n\n__all__ = [\"setup\", \"update_html_context\"]\n\n\nproject = \"Litestar\"\ncopyright = \"2023, Litestar-Org\"\nauthor = \"Litestar-Org\"\nrelease = os.getenv(\"_LITESTAR_DOCS_BUILD_VERSION\", importlib.metadata.version(\"litestar\").rsplit(\".\")[0])\n\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.napoleon\",\n \"sphinx_design\",\n \"auto_pytabs.sphinx_ext\",\n \"tools.sphinx_ext\",\n \"sphinx_copybutton\",\n \"sphinxcontrib.mermaid\",\n]\n\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"msgspec\": (\"https://jcristharif.com/msgspec/\", None),\n \"anyio\": (\"https://anyio.readthedocs.io/en/stable/\", None),\n \"multidict\": (\"https://multidict.aio-libs.org/en/stable/\", None),\n \"sqlalchemy\": (\"https://docs.sqlalchemy.org/en/20/\", None),\n \"click\": (\"https://click.palletsprojects.com/en/8.1.x/\", None),\n \"redis\": (\"https://redis-py.readthedocs.io/en/stable/\", None),\n \"picologging\": (\"https://microsoft.github.io/picologging\", None),\n \"structlog\": (\"https://www.structlog.org/en/stable/\", None),\n \"tortoise\": (\"https://tortoise.github.io/\", None),\n \"piccolo\": (\"https://piccolo-orm.readthedocs.io/en/latest\", None),\n \"opentelemetry\": (\"https://opentelemetry-python.readthedocs.io/en/latest/\", None),\n}\n\n\nnapoleon_google_docstring = True\nnapoleon_include_special_with_doc = True\nnapoleon_use_admonition_for_examples = True\nnapoleon_use_admonition_for_notes = True\nnapoleon_use_admonition_for_references = False\nnapoleon_attr_annotations = True\n\nautoclass_content = \"class\"\nautodoc_class_signature = \"separated\"\nautodoc_default_options = {\"special-members\": \"__init__\", \"show-inheritance\": True, \"members\": True}\nautodoc_member_order = \"bysource\"\nautodoc_typehints_format = \"short\"\n\n\nnitpicky = True\nnitpick_ignore = [\n # external library / undocumented external\n (\"py:class\", \"BaseModel\"),\n (\"py:class\", \"pydantic.main.BaseModel\"),\n (\"py:class\", \"pydantic.generics.GenericModel\"),\n (\"py:class\", \"redis.asyncio.Redis\"),\n (\"py:class\", \"sqlalchemy.orm.decl_api.DeclarativeMeta\"),\n (\"py:class\", \"sqlalchemy.sql.sqltypes.TupleType\"),\n (\"py:class\", \"sqlalchemy.dialects.postgresql.named_types.ENUM\"),\n # type vars and aliases / intentionally undocumented\n (\"py:class\", \"RouteHandlerType\"),\n (\"py:obj\", \"litestar.security.base.AuthType\"),\n (\"py:class\", \"ControllerRouterHandler\"),\n (\"py:class\", \"PathParameterDefinition\"),\n (\"py:class\", \"BaseSessionBackendT\"),\n (\"py:class\", \"AnyIOBackend\"),\n (\"py:class\", \"T\"),\n (\"py:class\", \"C\"),\n (\"py:class\", \"EmptyType\"),\n # intentionally undocumented\n (\"py:class\", \"NoneType\"),\n (\"py:class\", \"litestar._signature.field.SignatureField\"),\n (\"py:class\", \"litestar.utils.signature.ParsedType\"),\n (\"py:class\", \"litestar.utils.signature.ParsedSignature\"),\n (\"py:class\", \"litestar.utils.signature.ParsedParameter\"),\n (\"py:class\", 
\"litestar.utils.sync.AsyncCallable\"),\n]\nnitpick_ignore_regex = [\n (r\"py:.*\", r\"litestar\\.types.*\"),\n (r\"py:.*\", r\"litestar.*\\.T\"),\n (r\"py:.*\", r\".*R_co\"),\n (r\"py:.*\", r\".*UserType\"),\n (r\"py:.*\", r\"litestar\\.middleware\\.session\\.base\\.BaseSessionBackendT\"),\n (r\"py:obj\", r\"typing\\..*\"),\n (r\"py:.*\", r\"httpx.*\"),\n # type vars\n (\"py:.*\", r\"litestar\\.pagination\\.C\"),\n (\"py:.*\", r\"litestar.middleware.session.base.ConfigT\"),\n (\"py:.*\", r\"multidict\\..*\"),\n (r\"py:.*\", r\"litestar\\.connection\\.base\\.UserT\"),\n (r\"py:.*\", r\"litestar\\.connection\\.base\\.AuthT\"),\n (r\"py:.*\", r\"litestar\\.connection\\.base\\.StateT\"),\n (r\"py:.*\", r\"litestar\\.connection\\.base\\.HandlerT\"),\n]\n\n# Warnings about missing references to those targets in the specified location will be ignored.\n# The source of the references is taken 1:1 from the warnings as reported by Sphinx, e.g\n# **/litestar/testing/client/async_client.py:docstring of litestar.testing.AsyncTestClient.exit_stack:1: WARNING: py:class reference target not found: AsyncExitStack\n# would be added as: \"litestar.testing.AsyncTestClient.exit_stack\": {\"AsyncExitStack\"},\nignore_missing_refs = {\n # No idea what autodoc is doing here. Possibly unfixable on our end\n \"litestar.template.base.TemplateEngineProtocol.get_template\": {\"litestar.template.base.T_co\"},\n \"litestar.template\": {\"litestar.template.base.T_co\"},\n \"litestar.openapi.OpenAPIController.security\": {\"SecurityRequirement\"},\n \"litestar.contrib.sqlalchemy_1.plugin.SQLAlchemyPlugin.handle_string_type\": {\"BINARY\", \"VARBINARY\", \"LargeBinary\"},\n \"litestar.contrib.sqlalchemy_1.plugin.SQLAlchemyPlugin.is_plugin_supported_type\": {\"DeclarativeMeta\"},\n re.compile(r\"litestar\\.plugins.*\"): re.compile(\".*(ModelT|DataContainerT)\"),\n re.compile(r\"litestar\\.contrib\\.sqlalchemy\\.init_plugin\\.config.*\"): re.compile(\n \".*(ConnectionT|EngineT|SessionT|SessionMakerT)\"\n ),\n}\n\n\nauto_pytabs_min_version = (3, 8)\nauto_pytabs_max_version = (3, 11)\nauto_pytabs_compat_mode = True\n\nautosectionlabel_prefix_document = True\n\nsuppress_warnings = [\n \"autosectionlabel.*\",\n \"ref.python\", # TODO: remove when https://github.com/sphinx-doc/sphinx/issues/4961 is fixed\n]\n\nhtml_theme = \"litestar_sphinx_theme\"\nhtml_static_path = [\"_static\"]\nhtml_js_files = [\"versioning.js\"]\nhtml_css_files = [\"style.css\"]\nhtml_show_sourcelink = False\nhtml_title = \"Litestar Framework\"\n\n\nhtml_theme_options = {\n \"use_page_nav\": False,\n \"github_repo_name\": \"litestar\",\n \"logo\": {\n \"link\": \"https://litestar.dev\",\n },\n \"extra_navbar_items\": {\n \"Documentation\": \"index\",\n \"Community\": {\n \"Contribution Guide\": \"contribution-guide\",\n \"Code of Conduct\": \"https://github.com/litestar-org/.github/blob/main/CODE_OF_CONDUCT.md\",\n },\n \"About\": {\n \"Organization\": \"https://litestar.dev/about/organization\",\n \"Releases\": \"https://litestar.dev/about/litestar-releases\",\n },\n \"Release notes\": {\n \"2.0 migration guide\": \"release-notes/migration_guide_2\",\n \"2.x Changelog\": \"https://docs.litestar.dev/2/release-notes/changelog.html\",\n \"1.x Changelog\": \"https://docs.litestar.dev/1/release-notes/changelog.html\",\n },\n },\n}\n\n\ndef update_html_context(\n app: Sphinx, pagename: str, templatename: str, context: dict[str, Any], doctree: document\n) -> None:\n context[\"generate_toctree_html\"] = partial(context[\"generate_toctree_html\"], 
startdepth=0)\n\n\ndef setup(app: Sphinx) -> dict[str, bool]:\n app.setup_extension(\"litestar_sphinx_theme\")\n app.setup_extension(\"pydata_sphinx_theme\")\n app.connect(\"html-page-context\", update_html_context)\n\n return {\"parallel_read_safe\": True, \"parallel_write_safe\": True}\n", "path": "docs/conf.py"}]}
num_tokens_prompt: 3,044
num_tokens_diff: 439

problem_id: gh_patches_debug_1477
source: rasdani/github-patches
task_type: git_diff
in_source_id: ibis-project__ibis-5647
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug(postgres): cannot connect to postgres table with `tsvector` column
### What happened?
Reposting from https://stackoverflow.com/questions/74520302/why-cant-i-connect-ibis-to-a-postgres-table-with-a-tsvector-column
Implementing whatever postgres functionality exists around tsvector may not be something we want to do, but I think we should at least allow connecting to the table.
### What version of ibis are you using?
3.2.0
### What backend(s) are you using, if any?
postgres
### Relevant log output
```sh
KeyError Traceback (most recent call last)
File ~/anaconda3/envs/ec_extract/lib/python3.9/site-packages/multipledispatch/dispatcher.py:269, in Dispatcher.__call__(self, *args, **kwargs)
268 try:
--> 269 func = self._cache[types]
270 except KeyError:
KeyError: (<class 'sqlalchemy.dialects.postgresql.psycopg2.PGDialect_psycopg2'>, <class 'sqlalchemy.dialects.postgresql.base.TSVECTOR'>)
During handling of the above exception, another exception occurred:
NotImplementedError Traceback (most recent call last)
***
----> 29 main_table = con.table(table_name)[columns['column_list']]
File ~/anaconda3/envs/ec_extract/lib/python3.9/site-packages/ibis/backends/base/sql/alchemy/__init__.py:438, in BaseAlchemyBackend.table(self, name, database, schema)
428 return self.database(database=database).table(
429 name=name,
430 database=database,
431 schema=schema,
432 )
433 sqla_table = self._get_sqla_table(
434 name,
...
275 (self.name, str_signature(types)))
276 self._cache[types] = func
277 try:
NotImplementedError: Could not find signature for dtype: <PGDialect_psycopg2, TSVECTOR>
```
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
</issue>
<code>
[start of ibis/backends/postgres/datatypes.py]
1 from __future__ import annotations
2
3 import parsy
4 import sqlalchemy as sa
5 import toolz
6 from sqlalchemy.dialects import postgresql
7 from sqlalchemy.dialects.postgresql.base import PGDialect
8
9 import ibis.expr.datatypes as dt
10 from ibis.backends.base.sql.alchemy import to_sqla_type
11 from ibis.common.parsing import (
12 COMMA,
13 LBRACKET,
14 LPAREN,
15 PRECISION,
16 RBRACKET,
17 RPAREN,
18 SCALE,
19 spaceless,
20 spaceless_string,
21 )
22
23 _BRACKETS = "[]"
24
25
26 def _parse_numeric(
27 text: str, default_decimal_parameters: tuple[int | None, int | None] = (None, None)
28 ) -> dt.DataType:
29 decimal = spaceless_string("decimal", "numeric").then(
30 parsy.seq(LPAREN.then(PRECISION.skip(COMMA)), SCALE.skip(RPAREN))
31 .optional(default_decimal_parameters)
32 .combine(dt.Decimal)
33 )
34
35 brackets = spaceless(LBRACKET).then(spaceless(RBRACKET))
36
37 pg_array = parsy.seq(decimal, brackets.at_least(1).map(len)).combine(
38 lambda value_type, n: toolz.nth(n, toolz.iterate(dt.Array, value_type))
39 )
40
41 ty = pg_array | decimal
42 return ty.parse(text)
43
44
45 def _get_type(typestr: str) -> dt.DataType:
46 is_array = typestr.endswith(_BRACKETS)
47 if (typ := _type_mapping.get(typestr.replace(_BRACKETS, ""))) is not None:
48 return dt.Array(typ) if is_array else typ
49 return _parse_numeric(typestr)
50
51
52 _type_mapping = {
53 "bigint": dt.int64,
54 "boolean": dt.bool,
55 "bytea": dt.binary,
56 "character varying": dt.string,
57 "character": dt.string,
58 "character(1)": dt.string,
59 "date": dt.date,
60 "double precision": dt.float64,
61 "geography": dt.geography,
62 "geometry": dt.geometry,
63 "inet": dt.inet,
64 "integer": dt.int32,
65 "interval": dt.interval,
66 "json": dt.json,
67 "jsonb": dt.json,
68 "line": dt.linestring,
69 "macaddr": dt.macaddr,
70 "macaddr8": dt.macaddr,
71 "numeric": dt.decimal,
72 "point": dt.point,
73 "polygon": dt.polygon,
74 "real": dt.float32,
75 "smallint": dt.int16,
76 "text": dt.string,
77 # NB: this isn't correct because we're losing the "with time zone"
78 # information (ibis doesn't have time type that is time-zone aware), but we
79 # try to do _something_ here instead of failing
80 "time with time zone": dt.time,
81 "time without time zone": dt.time,
82 "timestamp with time zone": dt.Timestamp("UTC"),
83 "timestamp without time zone": dt.timestamp,
84 "uuid": dt.uuid,
85 }
86
87
88 @to_sqla_type.register(PGDialect, dt.Array)
89 def _pg_array(dialect, itype):
90 # Unwrap the array element type because sqlalchemy doesn't allow arrays of
91 # arrays. This doesn't affect the underlying data.
92 while itype.is_array():
93 itype = itype.value_type
94 return sa.ARRAY(to_sqla_type(dialect, itype))
95
96
97 @to_sqla_type.register(PGDialect, dt.Map)
98 def _pg_map(dialect, itype):
99 if not (itype.key_type.is_string() and itype.value_type.is_string()):
100 raise TypeError(f"PostgreSQL only supports map<string, string>, got: {itype}")
101 return postgresql.HSTORE()
102
103
104 @dt.dtype.register(PGDialect, postgresql.DOUBLE_PRECISION)
105 def sa_double(_, satype, nullable=True):
106 return dt.Float64(nullable=nullable)
107
108
109 @dt.dtype.register(PGDialect, postgresql.UUID)
110 def sa_uuid(_, satype, nullable=True):
111 return dt.UUID(nullable=nullable)
112
113
114 @dt.dtype.register(PGDialect, postgresql.MACADDR)
115 def sa_macaddr(_, satype, nullable=True):
116 return dt.MACADDR(nullable=nullable)
117
118
119 @dt.dtype.register(PGDialect, postgresql.HSTORE)
120 def sa_hstore(_, satype, nullable=True):
121 return dt.Map(dt.string, dt.string, nullable=nullable)
122
123
124 @dt.dtype.register(PGDialect, postgresql.INET)
125 def sa_inet(_, satype, nullable=True):
126 return dt.INET(nullable=nullable)
127
128
129 @dt.dtype.register(PGDialect, postgresql.JSONB)
130 def sa_json(_, satype, nullable=True):
131 return dt.JSON(nullable=nullable)
132
133
134 _POSTGRES_FIELD_TO_IBIS_UNIT = {
135 "YEAR": "Y",
136 "MONTH": "M",
137 "DAY": "D",
138 "HOUR": "h",
139 "MINUTE": "m",
140 "SECOND": "s",
141 "YEAR TO MONTH": "M",
142 "DAY TO HOUR": "h",
143 "DAY TO MINUTE": "m",
144 "DAY TO SECOND": "s",
145 "HOUR TO MINUTE": "m",
146 "HOUR TO SECOND": "s",
147 "MINUTE TO SECOND": "s",
148 }
149
150
151 @dt.dtype.register(PGDialect, postgresql.INTERVAL)
152 def sa_postgres_interval(_, satype, nullable=True):
153 field = satype.fields.upper()
154 if (unit := _POSTGRES_FIELD_TO_IBIS_UNIT.get(field, None)) is None:
155 raise ValueError(f"Unknown PostgreSQL interval field {field!r}")
156 elif unit in {"Y", "M"}:
157 raise ValueError(
158 "Variable length intervals are not yet supported with PostgreSQL"
159 )
160 return dt.Interval(unit=unit, nullable=nullable)
161
162
163 @dt.dtype.register(PGDialect, sa.ARRAY)
164 def sa_pg_array(dialect, satype, nullable=True):
165 dimensions = satype.dimensions
166 if dimensions is not None and dimensions != 1:
167 raise NotImplementedError(
168 f"Nested array types not yet supported for {dialect.name} dialect"
169 )
170
171 value_dtype = dt.dtype(dialect, satype.item_type)
172 return dt.Array(value_dtype, nullable=nullable)
173
[end of ibis/backends/postgres/datatypes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/ibis/backends/postgres/datatypes.py b/ibis/backends/postgres/datatypes.py
--- a/ibis/backends/postgres/datatypes.py
+++ b/ibis/backends/postgres/datatypes.py
@@ -170,3 +170,8 @@
value_dtype = dt.dtype(dialect, satype.item_type)
return dt.Array(value_dtype, nullable=nullable)
+
+
[email protected](PGDialect, postgresql.TSVECTOR)
+def sa_postgres_tsvector(_, satype, nullable=True):
+ return dt.String(nullable=nullable)
verification_info:
{"golden_diff": "diff --git a/ibis/backends/postgres/datatypes.py b/ibis/backends/postgres/datatypes.py\n--- a/ibis/backends/postgres/datatypes.py\n+++ b/ibis/backends/postgres/datatypes.py\n@@ -170,3 +170,8 @@\n \n value_dtype = dt.dtype(dialect, satype.item_type)\n return dt.Array(value_dtype, nullable=nullable)\n+\n+\[email protected](PGDialect, postgresql.TSVECTOR)\n+def sa_postgres_tsvector(_, satype, nullable=True):\n+ return dt.String(nullable=nullable)\n", "issue": "bug(postgres): cannot connect to postgres table with `tsvector` column\n### What happened?\n\nReposting from https://stackoverflow.com/questions/74520302/why-cant-i-connect-ibis-to-a-postgres-table-with-a-tsvector-column\r\n\r\nImplementing whatever postgres functionality exists around tsvector may not be something we want to do, but I think we should at least allow connecting to the table.\n\n### What version of ibis are you using?\n\n3.2.0\n\n### What backend(s) are you using, if any?\n\npostgres\n\n### Relevant log output\n\n```sh\nKeyError Traceback (most recent call last)\r\nFile ~/anaconda3/envs/ec_extract/lib/python3.9/site-packages/multipledispatch/dispatcher.py:269, in Dispatcher.__call__(self, *args, **kwargs)\r\n 268 try:\r\n--> 269 func = self._cache[types]\r\n 270 except KeyError:\r\n\r\nKeyError: (<class 'sqlalchemy.dialects.postgresql.psycopg2.PGDialect_psycopg2'>, <class 'sqlalchemy.dialects.postgresql.base.TSVECTOR'>)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nNotImplementedError Traceback (most recent call last)\r\n*** \r\n----> 29 main_table = con.table(table_name)[columns['column_list']]\r\n\r\nFile ~/anaconda3/envs/ec_extract/lib/python3.9/site-packages/ibis/backends/base/sql/alchemy/__init__.py:438, in BaseAlchemyBackend.table(self, name, database, schema)\r\n 428 return self.database(database=database).table(\r\n 429 name=name,\r\n 430 database=database,\r\n 431 schema=schema,\r\n 432 )\r\n 433 sqla_table = self._get_sqla_table(\r\n 434 name,\r\n...\r\n 275 (self.name, str_signature(types)))\r\n 276 self._cache[types] = func\r\n 277 try:\r\n\r\nNotImplementedError: Could not find signature for dtype: <PGDialect_psycopg2, TSVECTOR>\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "from __future__ import annotations\n\nimport parsy\nimport sqlalchemy as sa\nimport toolz\nfrom sqlalchemy.dialects import postgresql\nfrom sqlalchemy.dialects.postgresql.base import PGDialect\n\nimport ibis.expr.datatypes as dt\nfrom ibis.backends.base.sql.alchemy import to_sqla_type\nfrom ibis.common.parsing import (\n COMMA,\n LBRACKET,\n LPAREN,\n PRECISION,\n RBRACKET,\n RPAREN,\n SCALE,\n spaceless,\n spaceless_string,\n)\n\n_BRACKETS = \"[]\"\n\n\ndef _parse_numeric(\n text: str, default_decimal_parameters: tuple[int | None, int | None] = (None, None)\n) -> dt.DataType:\n decimal = spaceless_string(\"decimal\", \"numeric\").then(\n parsy.seq(LPAREN.then(PRECISION.skip(COMMA)), SCALE.skip(RPAREN))\n .optional(default_decimal_parameters)\n .combine(dt.Decimal)\n )\n\n brackets = spaceless(LBRACKET).then(spaceless(RBRACKET))\n\n pg_array = parsy.seq(decimal, brackets.at_least(1).map(len)).combine(\n lambda value_type, n: toolz.nth(n, toolz.iterate(dt.Array, value_type))\n )\n\n ty = pg_array | decimal\n return ty.parse(text)\n\n\ndef _get_type(typestr: str) -> dt.DataType:\n is_array = typestr.endswith(_BRACKETS)\n if (typ := _type_mapping.get(typestr.replace(_BRACKETS, \"\"))) is not None:\n return 
dt.Array(typ) if is_array else typ\n return _parse_numeric(typestr)\n\n\n_type_mapping = {\n \"bigint\": dt.int64,\n \"boolean\": dt.bool,\n \"bytea\": dt.binary,\n \"character varying\": dt.string,\n \"character\": dt.string,\n \"character(1)\": dt.string,\n \"date\": dt.date,\n \"double precision\": dt.float64,\n \"geography\": dt.geography,\n \"geometry\": dt.geometry,\n \"inet\": dt.inet,\n \"integer\": dt.int32,\n \"interval\": dt.interval,\n \"json\": dt.json,\n \"jsonb\": dt.json,\n \"line\": dt.linestring,\n \"macaddr\": dt.macaddr,\n \"macaddr8\": dt.macaddr,\n \"numeric\": dt.decimal,\n \"point\": dt.point,\n \"polygon\": dt.polygon,\n \"real\": dt.float32,\n \"smallint\": dt.int16,\n \"text\": dt.string,\n # NB: this isn't correct because we're losing the \"with time zone\"\n # information (ibis doesn't have time type that is time-zone aware), but we\n # try to do _something_ here instead of failing\n \"time with time zone\": dt.time,\n \"time without time zone\": dt.time,\n \"timestamp with time zone\": dt.Timestamp(\"UTC\"),\n \"timestamp without time zone\": dt.timestamp,\n \"uuid\": dt.uuid,\n}\n\n\n@to_sqla_type.register(PGDialect, dt.Array)\ndef _pg_array(dialect, itype):\n # Unwrap the array element type because sqlalchemy doesn't allow arrays of\n # arrays. This doesn't affect the underlying data.\n while itype.is_array():\n itype = itype.value_type\n return sa.ARRAY(to_sqla_type(dialect, itype))\n\n\n@to_sqla_type.register(PGDialect, dt.Map)\ndef _pg_map(dialect, itype):\n if not (itype.key_type.is_string() and itype.value_type.is_string()):\n raise TypeError(f\"PostgreSQL only supports map<string, string>, got: {itype}\")\n return postgresql.HSTORE()\n\n\[email protected](PGDialect, postgresql.DOUBLE_PRECISION)\ndef sa_double(_, satype, nullable=True):\n return dt.Float64(nullable=nullable)\n\n\[email protected](PGDialect, postgresql.UUID)\ndef sa_uuid(_, satype, nullable=True):\n return dt.UUID(nullable=nullable)\n\n\[email protected](PGDialect, postgresql.MACADDR)\ndef sa_macaddr(_, satype, nullable=True):\n return dt.MACADDR(nullable=nullable)\n\n\[email protected](PGDialect, postgresql.HSTORE)\ndef sa_hstore(_, satype, nullable=True):\n return dt.Map(dt.string, dt.string, nullable=nullable)\n\n\[email protected](PGDialect, postgresql.INET)\ndef sa_inet(_, satype, nullable=True):\n return dt.INET(nullable=nullable)\n\n\[email protected](PGDialect, postgresql.JSONB)\ndef sa_json(_, satype, nullable=True):\n return dt.JSON(nullable=nullable)\n\n\n_POSTGRES_FIELD_TO_IBIS_UNIT = {\n \"YEAR\": \"Y\",\n \"MONTH\": \"M\",\n \"DAY\": \"D\",\n \"HOUR\": \"h\",\n \"MINUTE\": \"m\",\n \"SECOND\": \"s\",\n \"YEAR TO MONTH\": \"M\",\n \"DAY TO HOUR\": \"h\",\n \"DAY TO MINUTE\": \"m\",\n \"DAY TO SECOND\": \"s\",\n \"HOUR TO MINUTE\": \"m\",\n \"HOUR TO SECOND\": \"s\",\n \"MINUTE TO SECOND\": \"s\",\n}\n\n\[email protected](PGDialect, postgresql.INTERVAL)\ndef sa_postgres_interval(_, satype, nullable=True):\n field = satype.fields.upper()\n if (unit := _POSTGRES_FIELD_TO_IBIS_UNIT.get(field, None)) is None:\n raise ValueError(f\"Unknown PostgreSQL interval field {field!r}\")\n elif unit in {\"Y\", \"M\"}:\n raise ValueError(\n \"Variable length intervals are not yet supported with PostgreSQL\"\n )\n return dt.Interval(unit=unit, nullable=nullable)\n\n\[email protected](PGDialect, sa.ARRAY)\ndef sa_pg_array(dialect, satype, nullable=True):\n dimensions = satype.dimensions\n if dimensions is not None and dimensions != 1:\n raise NotImplementedError(\n f\"Nested array types not 
yet supported for {dialect.name} dialect\"\n )\n\n value_dtype = dt.dtype(dialect, satype.item_type)\n return dt.Array(value_dtype, nullable=nullable)\n", "path": "ibis/backends/postgres/datatypes.py"}]}
| 2,810 | 132 |
gh_patches_debug_6175 | rasdani/github-patches | git_diff | google__fuzzbench-148 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[reports] Data.csv.gz don't need to contain id column
It has these columns because data.csv.gz contains data from a join query of snapshots on trials.
time_started and time_ended are from trials but they are probably not useful for the analysis people want to do so they just take up space at this point.
</issue>
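For illustration only (not part of the original report): a minimal pandas sketch of dropping the join bookkeeping columns after reading the snapshots-on-trials query. The column names are taken from the issue text above, not from the real schema.

```python
import pandas as pd

# Toy stand-in for the snapshots-join-trials result; column names come from
# the issue text, not from the actual database schema.
data = pd.DataFrame({
    "id": [1, 2],
    "trial_id": [10, 11],
    "time": [900, 900],
    "edges_covered": [1500, 1432],
    "time_started": ["2020-01-01", "2020-01-01"],
    "time_ended": ["2020-01-02", "2020-01-02"],
})

# Dropping the redundant columns shrinks data.csv.gz while trial_id still
# identifies each trial.
slim = data.drop(columns=["id", "time_started", "time_ended"])
print(list(slim.columns))  # ['trial_id', 'time', 'edges_covered']
```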
<code>
[start of analysis/queries.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Database queries for acquiring experiment data."""
15
16 import pandas as pd
17 import sqlalchemy
18
19 from database import models
20 from database import utils as db_utils
21
22
23 def get_experiment_data(experiment_names):
24 """Get measurements (such as coverage) on experiments from the database."""
25 snapshots_query = db_utils.query(models.Snapshot).options(
26 sqlalchemy.orm.joinedload('trial')).filter(
27 models.Snapshot.trial.has(
28 models.Trial.experiment.in_(experiment_names)))
29 return pd.read_sql_query(snapshots_query.statement, db_utils.engine)
30
[end of analysis/queries.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/analysis/queries.py b/analysis/queries.py
--- a/analysis/queries.py
+++ b/analysis/queries.py
@@ -26,4 +26,8 @@
sqlalchemy.orm.joinedload('trial')).filter(
models.Snapshot.trial.has(
models.Trial.experiment.in_(experiment_names)))
- return pd.read_sql_query(snapshots_query.statement, db_utils.engine)
+
+ # id must be loaded to do the join but get rid of it now since
+ # trial_id provides the same info.
+ data = pd.read_sql_query(snapshots_query.statement, db_utils.engine)
+ return data.drop(columns=['id'])
|
{"golden_diff": "diff --git a/analysis/queries.py b/analysis/queries.py\n--- a/analysis/queries.py\n+++ b/analysis/queries.py\n@@ -26,4 +26,8 @@\n sqlalchemy.orm.joinedload('trial')).filter(\n models.Snapshot.trial.has(\n models.Trial.experiment.in_(experiment_names)))\n- return pd.read_sql_query(snapshots_query.statement, db_utils.engine)\n+\n+ # id must be loaded to do the join but get rid of it now since\n+ # trial_id provides the same info.\n+ data = pd.read_sql_query(snapshots_query.statement, db_utils.engine)\n+ return data.drop(columns=['id'])\n", "issue": "[reports] Data.csv.gz don't need to contain id column\nIt has these columns because data.csv.gz contains data from a join query of snapshots on trials.\r\ntime_started and time_ended are from trials but they are probably not useful for the analysis people want to do so they just take up space at this point.\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Database queries for acquiring experiment data.\"\"\"\n\nimport pandas as pd\nimport sqlalchemy\n\nfrom database import models\nfrom database import utils as db_utils\n\n\ndef get_experiment_data(experiment_names):\n \"\"\"Get measurements (such as coverage) on experiments from the database.\"\"\"\n snapshots_query = db_utils.query(models.Snapshot).options(\n sqlalchemy.orm.joinedload('trial')).filter(\n models.Snapshot.trial.has(\n models.Trial.experiment.in_(experiment_names)))\n return pd.read_sql_query(snapshots_query.statement, db_utils.engine)\n", "path": "analysis/queries.py"}]}
| 895 | 150 |
gh_patches_debug_24520 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4945 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
measure.label speed
This is triggered by [this Stackoverflow question](https://stackoverflow.com/questions/62804953/performance-differences-between-bwlabeln-on-matlab-and-skimage-measure-label-on/62842582#62842582). When I have large binary arrays to label and performance issues, I usually resort to calling the ndimage version. Could we imagine having a `fast_binary` flag which would call the ndimage function? A factor of 3-4 (from a few tests I just ran) is not bad...
</issue>
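As a rough, hedged sketch of the comparison described above (the array shape is illustrative and the 3-4x factor will vary with the input):

```python
import numpy as np
from scipy import ndimage
from skimage import measure

rng = np.random.default_rng(0)
mask = rng.random((256, 256, 64)) > 0.5  # a largish binary volume

# scikit-image's general-purpose labelling with 1-connectivity
sk_labels, sk_num = measure.label(mask, return_num=True, connectivity=1)

# SciPy's ndimage labelling (its default structure is also 1-connectivity),
# which the issue reports as noticeably faster on boolean input
nd_labels, nd_num = ndimage.label(mask)

assert sk_num == nd_num  # same connected regions, different implementations
```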
<code>
[start of skimage/measure/_label.py]
1 from ._ccomp import label_cython as clabel
2
3
4 def label(input, background=None, return_num=False, connectivity=None):
5 r"""Label connected regions of an integer array.
6
7 Two pixels are connected when they are neighbors and have the same value.
8 In 2D, they can be neighbors either in a 1- or 2-connected sense.
9 The value refers to the maximum number of orthogonal hops to consider a
10 pixel/voxel a neighbor::
11
12 1-connectivity 2-connectivity diagonal connection close-up
13
14 [ ] [ ] [ ] [ ] [ ]
15 | \ | / | <- hop 2
16 [ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
17 | / | \ hop 1
18 [ ] [ ] [ ] [ ]
19
20 Parameters
21 ----------
22 input : ndarray of dtype int
23 Image to label.
24 background : int, optional
25 Consider all pixels with this value as background pixels, and label
26 them as 0. By default, 0-valued pixels are considered as background
27 pixels.
28 return_num : bool, optional
29 Whether to return the number of assigned labels.
30 connectivity : int, optional
31 Maximum number of orthogonal hops to consider a pixel/voxel
32 as a neighbor.
33 Accepted values are ranging from 1 to input.ndim. If ``None``, a full
34 connectivity of ``input.ndim`` is used.
35
36 Returns
37 -------
38 labels : ndarray of dtype int
39 Labeled array, where all connected regions are assigned the
40 same integer value.
41 num : int, optional
42 Number of labels, which equals the maximum label index and is only
43 returned if return_num is `True`.
44
45 See Also
46 --------
47 regionprops
48
49 References
50 ----------
51 .. [1] Christophe Fiorio and Jens Gustedt, "Two linear time Union-Find
52 strategies for image processing", Theoretical Computer Science
53 154 (1996), pp. 165-181.
54 .. [2] Kensheng Wu, Ekow Otoo and Arie Shoshani, "Optimizing connected
55 component labeling algorithms", Paper LBNL-56864, 2005,
56 Lawrence Berkeley National Laboratory (University of California),
57 http://repositories.cdlib.org/lbnl/LBNL-56864
58
59 Examples
60 --------
61 >>> import numpy as np
62 >>> x = np.eye(3).astype(int)
63 >>> print(x)
64 [[1 0 0]
65 [0 1 0]
66 [0 0 1]]
67 >>> print(label(x, connectivity=1))
68 [[1 0 0]
69 [0 2 0]
70 [0 0 3]]
71 >>> print(label(x, connectivity=2))
72 [[1 0 0]
73 [0 1 0]
74 [0 0 1]]
75 >>> print(label(x, background=-1))
76 [[1 2 2]
77 [2 1 2]
78 [2 2 1]]
79 >>> x = np.array([[1, 0, 0],
80 ... [1, 1, 5],
81 ... [0, 0, 0]])
82 >>> print(label(x))
83 [[1 0 0]
84 [1 1 2]
85 [0 0 0]]
86 """
87 return clabel(input, background, return_num, connectivity)
88
[end of skimage/measure/_label.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/measure/_label.py b/skimage/measure/_label.py
--- a/skimage/measure/_label.py
+++ b/skimage/measure/_label.py
@@ -1,6 +1,34 @@
+from scipy import ndimage
from ._ccomp import label_cython as clabel
+def _label_bool(image, background=None, return_num=False, connectivity=None):
+ """Faster implementation of clabel for boolean input.
+
+ See context: https://github.com/scikit-image/scikit-image/issues/4833
+ """
+ from ..morphology._util import _resolve_neighborhood
+ if background == 1:
+ image = ~image
+
+ if connectivity is None:
+ connectivity = image.ndim
+
+ if not 1 <= connectivity <= image.ndim:
+ raise ValueError(
+ f'Connectivity for {image.ndim}D image should '
+ f'be in [1, ..., {image.ndim}]. Got {connectivity}.'
+ )
+
+ selem = _resolve_neighborhood(None, connectivity, image.ndim)
+ result = ndimage.label(image, structure=selem)
+
+ if return_num:
+ return result
+ else:
+ return result[0]
+
+
def label(input, background=None, return_num=False, connectivity=None):
r"""Label connected regions of an integer array.
@@ -84,4 +112,8 @@
[1 1 2]
[0 0 0]]
"""
- return clabel(input, background, return_num, connectivity)
+ if input.dtype == bool:
+ return _label_bool(input, background=background,
+ return_num=return_num, connectivity=connectivity)
+ else:
+ return clabel(input, background, return_num, connectivity)
|
{"golden_diff": "diff --git a/skimage/measure/_label.py b/skimage/measure/_label.py\n--- a/skimage/measure/_label.py\n+++ b/skimage/measure/_label.py\n@@ -1,6 +1,34 @@\n+from scipy import ndimage\n from ._ccomp import label_cython as clabel\n \n \n+def _label_bool(image, background=None, return_num=False, connectivity=None):\n+ \"\"\"Faster implementation of clabel for boolean input.\n+\n+ See context: https://github.com/scikit-image/scikit-image/issues/4833\n+ \"\"\"\n+ from ..morphology._util import _resolve_neighborhood\n+ if background == 1:\n+ image = ~image\n+\n+ if connectivity is None:\n+ connectivity = image.ndim\n+\n+ if not 1 <= connectivity <= image.ndim:\n+ raise ValueError(\n+ f'Connectivity for {image.ndim}D image should '\n+ f'be in [1, ..., {image.ndim}]. Got {connectivity}.'\n+ )\n+\n+ selem = _resolve_neighborhood(None, connectivity, image.ndim)\n+ result = ndimage.label(image, structure=selem)\n+\n+ if return_num:\n+ return result\n+ else:\n+ return result[0]\n+\n+\n def label(input, background=None, return_num=False, connectivity=None):\n r\"\"\"Label connected regions of an integer array.\n \n@@ -84,4 +112,8 @@\n [1 1 2]\n [0 0 0]]\n \"\"\"\n- return clabel(input, background, return_num, connectivity)\n+ if input.dtype == bool:\n+ return _label_bool(input, background=background,\n+ return_num=return_num, connectivity=connectivity)\n+ else:\n+ return clabel(input, background, return_num, connectivity)\n", "issue": "measure.label speed\nThis is triggered by [this Stackoverflow question](https://stackoverflow.com/questions/62804953/performance-differences-between-bwlabeln-on-matlab-and-skimage-measure-label-on/62842582#62842582). When I have large binary arrays to label and performance issues, I usually resort to calling the ndimage version. Could we imagine having a `fast_binary` flag which would call the ndimage function? A factor of 3-4 (from a few tests I just ran) is not bad...\n", "before_files": [{"content": "from ._ccomp import label_cython as clabel\n\n\ndef label(input, background=None, return_num=False, connectivity=None):\n r\"\"\"Label connected regions of an integer array.\n\n Two pixels are connected when they are neighbors and have the same value.\n In 2D, they can be neighbors either in a 1- or 2-connected sense.\n The value refers to the maximum number of orthogonal hops to consider a\n pixel/voxel a neighbor::\n\n 1-connectivity 2-connectivity diagonal connection close-up\n\n [ ] [ ] [ ] [ ] [ ]\n | \\ | / | <- hop 2\n [ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]\n | / | \\ hop 1\n [ ] [ ] [ ] [ ]\n\n Parameters\n ----------\n input : ndarray of dtype int\n Image to label.\n background : int, optional\n Consider all pixels with this value as background pixels, and label\n them as 0. By default, 0-valued pixels are considered as background\n pixels.\n return_num : bool, optional\n Whether to return the number of assigned labels.\n connectivity : int, optional\n Maximum number of orthogonal hops to consider a pixel/voxel\n as a neighbor.\n Accepted values are ranging from 1 to input.ndim. If ``None``, a full\n connectivity of ``input.ndim`` is used.\n\n Returns\n -------\n labels : ndarray of dtype int\n Labeled array, where all connected regions are assigned the\n same integer value.\n num : int, optional\n Number of labels, which equals the maximum label index and is only\n returned if return_num is `True`.\n\n See Also\n --------\n regionprops\n\n References\n ----------\n .. 
[1] Christophe Fiorio and Jens Gustedt, \"Two linear time Union-Find\n strategies for image processing\", Theoretical Computer Science\n 154 (1996), pp. 165-181.\n .. [2] Kensheng Wu, Ekow Otoo and Arie Shoshani, \"Optimizing connected\n component labeling algorithms\", Paper LBNL-56864, 2005,\n Lawrence Berkeley National Laboratory (University of California),\n http://repositories.cdlib.org/lbnl/LBNL-56864\n\n Examples\n --------\n >>> import numpy as np\n >>> x = np.eye(3).astype(int)\n >>> print(x)\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n >>> print(label(x, connectivity=1))\n [[1 0 0]\n [0 2 0]\n [0 0 3]]\n >>> print(label(x, connectivity=2))\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n >>> print(label(x, background=-1))\n [[1 2 2]\n [2 1 2]\n [2 2 1]]\n >>> x = np.array([[1, 0, 0],\n ... [1, 1, 5],\n ... [0, 0, 0]])\n >>> print(label(x))\n [[1 0 0]\n [1 1 2]\n [0 0 0]]\n \"\"\"\n return clabel(input, background, return_num, connectivity)\n", "path": "skimage/measure/_label.py"}]}
| 1,618 | 410 |
gh_patches_debug_26316 | rasdani/github-patches | git_diff | scikit-hep__pyhf-424 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pin optional dependencies at the minor release level
# Description
To avoid having our prior releases break like `v0.0.15` did in Issue #396 it would be good to pin our optional dependencies at the minor release level for each release. This should safeguard us from old releases getting broken by API changes in the dependencies that we use as applications.
To be clear, I don't think we should limit the dependencies in `install_requires` beyond placing _lower_ bounds, but I do think that we should now be placing upper bounds on all of the optional dependencies as we are really more using those as **applications** in our library.
</issue>
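A hedged sketch of what upper-bounded optional dependencies could look like with compatible-release specifiers; the package names come from this project's extras, but the versions shown are placeholders, not the ones chosen for the release.

```python
# "pkg~=1.13" is shorthand for ">=1.13, <2.0"; the three-component form
# "pkg~=1.13.0" (">=1.13.0, <1.14.0") is what strict minor-level pinning
# would use instead.
extras_require = {
    'tensorflow': [
        'tensorflow~=1.13',
        'tensorflow-probability~=0.5',
    ],
    'torch': ['torch~=1.0'],
}

# install_requires keeps lower bounds only, as argued above.
install_requires = [
    'scipy',
    'click>=6.0',
]
```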
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4 from os import path
5 import sys
6
7 this_directory = path.abspath(path.dirname(__file__))
8 if sys.version_info.major < 3:
9 from io import open
10 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:
11 long_description = readme_md.read()
12
13 extras_require = {
14 'tensorflow': [
15 'tensorflow>=1.12.0',
16 'tensorflow-probability>=0.5.0',
17 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
18 'setuptools<=39.1.0',
19 ],
20 'torch': ['torch>=1.0.0'],
21 'mxnet': [
22 'mxnet>=1.0.0',
23 'requests<2.19.0,>=2.18.4',
24 'numpy<1.15.0,>=1.8.2',
25 'requests<2.19.0,>=2.18.4',
26 ],
27 # 'dask': [
28 # 'dask[array]'
29 # ],
30 'xmlimport': ['uproot'],
31 'minuit': ['iminuit'],
32 'develop': [
33 'pyflakes',
34 'pytest<4.0.0,>=3.5.1',
35 'pytest-cov>=2.5.1',
36 'pytest-mock',
37 'pytest-benchmark[histogram]',
38 'pytest-console-scripts',
39 'python-coveralls',
40 'coverage>=4.0', # coveralls
41 'matplotlib',
42 'jupyter',
43 'nbdime',
44 'uproot>=3.3.0',
45 'papermill>=0.16.0',
46 'graphviz',
47 'bumpversion',
48 'sphinx',
49 'sphinxcontrib-bibtex',
50 'sphinxcontrib-napoleon',
51 'sphinx_rtd_theme',
52 'nbsphinx',
53 'sphinx-issues',
54 'm2r',
55 'jsonpatch',
56 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now
57 'pre-commit',
58 'black;python_version>="3.6"', # Black is Python3 only
59 'twine',
60 ],
61 }
62 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
63
64 setup(
65 name='pyhf',
66 version='0.0.16',
67 description='(partial) pure python histfactory implementation',
68 long_description=long_description,
69 long_description_content_type='text/markdown',
70 url='https://github.com/diana-hep/pyhf',
71 author='Lukas Heinrich',
72 author_email='[email protected]',
73 license='Apache',
74 keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',
75 classifiers=[
76 "Programming Language :: Python :: 2",
77 "Programming Language :: Python :: 2.7",
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.6",
80 "Programming Language :: Python :: 3.7",
81 ],
82 packages=find_packages(),
83 include_package_data=True,
84 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
85 install_requires=[
86 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
87 'click>=6.0', # for console scripts,
88 'tqdm', # for readxml
89 'six', # for modifiers
90 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
91 'jsonpatch',
92 ],
93 extras_require=extras_require,
94 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
95 dependency_links=[],
96 )
97
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,18 +12,13 @@
extras_require = {
'tensorflow': [
- 'tensorflow>=1.12.0',
- 'tensorflow-probability>=0.5.0',
+ 'tensorflow~=1.13',
+ 'tensorflow-probability~=0.5',
'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
'setuptools<=39.1.0',
],
- 'torch': ['torch>=1.0.0'],
- 'mxnet': [
- 'mxnet>=1.0.0',
- 'requests<2.19.0,>=2.18.4',
- 'numpy<1.15.0,>=1.8.2',
- 'requests<2.19.0,>=2.18.4',
- ],
+ 'torch': ['torch~=1.0'],
+ 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],
# 'dask': [
# 'dask[array]'
# ],
@@ -31,7 +26,7 @@
'minuit': ['iminuit'],
'develop': [
'pyflakes',
- 'pytest<4.0.0,>=3.5.1',
+ 'pytest~=3.5',
'pytest-cov>=2.5.1',
'pytest-mock',
'pytest-benchmark[histogram]',
@@ -41,8 +36,8 @@
'matplotlib',
'jupyter',
'nbdime',
- 'uproot>=3.3.0',
- 'papermill>=0.16.0',
+ 'uproot~=3.3',
+ 'papermill~=0.16',
'graphviz',
'bumpversion',
'sphinx',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,18 +12,13 @@\n \n extras_require = {\n 'tensorflow': [\n- 'tensorflow>=1.12.0',\n- 'tensorflow-probability>=0.5.0',\n+ 'tensorflow~=1.13',\n+ 'tensorflow-probability~=0.5',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n- 'torch': ['torch>=1.0.0'],\n- 'mxnet': [\n- 'mxnet>=1.0.0',\n- 'requests<2.19.0,>=2.18.4',\n- 'numpy<1.15.0,>=1.8.2',\n- 'requests<2.19.0,>=2.18.4',\n- ],\n+ 'torch': ['torch~=1.0'],\n+ 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],\n # 'dask': [\n # 'dask[array]'\n # ],\n@@ -31,7 +26,7 @@\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n- 'pytest<4.0.0,>=3.5.1',\n+ 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n@@ -41,8 +36,8 @@\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n- 'uproot>=3.3.0',\n- 'papermill>=0.16.0',\n+ 'uproot~=3.3',\n+ 'papermill~=0.16',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n", "issue": "Pin optional dependencies at the minor release level\n# Description\r\n\r\nTo avoid having our prior releases break like `v0.0.15` did in Issue #396 it would be good to pin our optional dependencies at the minor release level for each release. This should safeguard us from old releases getting broken by API changes in the dependencies that we use as applications.\r\n\r\nTo be clear, I don't think we should limit the dependencies in `install_requires` beyond placing _lower_ bounds, but I do think that we should now be placing upper bounds on all of the optional dependencies as we are really more using those as **applications** in our library.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow>=1.12.0',\n 'tensorflow-probability>=0.5.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch>=1.0.0'],\n 'mxnet': [\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlimport': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest<4.0.0,>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot>=3.3.0',\n 'papermill>=0.16.0',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.16',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas 
Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}]}
| 1,746 | 492 |
gh_patches_debug_17442 | rasdani/github-patches | git_diff | spacetelescope__jwql-857 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add code to manage.py to create necessesary symlinks to run web app locally
In order to run the JWQL web app locally, one must create symbolic links to the `outputs`, `thumbnails`, `preview_images`, and `filesystem` directories. We can add some code in `website.manage.py` in order to do this automatically. Something like this:
```python
from jwql.utils.utils import get_config()
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "jwql_proj.settings")
# Create symbolic links here (if they don't already exist)
for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnails_filesystem']:
path = get_config()[directory]
# code to create symlink
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
```
Credit @york-stsci for the suggestion!
</issue>
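One possible filling-in of the `# code to create symlink` placeholder above, as a sketch only; the static-directory layout and the config keys are assumptions carried over from the snippet, not verified against the repository.

```python
import os

from jwql.utils.utils import get_config


def create_missing_symlinks():
    """Create symlinks into the web app's static directory if absent (hypothetical helper)."""
    static_dir = os.path.join(os.path.dirname(__file__), 'apps', 'jwql', 'static')
    for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnails_filesystem']:
        target = get_config()[directory]
        link = os.path.join(static_dir, directory)
        if not os.path.exists(link):
            os.symlink(target, link)
```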
<code>
[start of jwql/website/manage.py]
1 #! /usr/bin/env python
2
3 """Utility module for administrative tasks.
4
5 A python script version of Django's command-line utility for
6 administrative tasks (``django-admin``). Additionally, puts the project
7 package on ``sys.path`` and defines the ``DJANGO_SETTINGS_MODULE``
8 variable to point to the jwql ``settings.py`` file.
9
10 Generated by ``django-admin startproject`` using Django 2.0.1.
11
12 Use
13 ---
14
15 To run the web app server:
16 ::
17
18 python manage.py runserver
19
20 To start the interactive shellL:
21 ::
22
23 python manage.py shell
24
25 To run tests for all installed apps:
26 ::
27
28 python manage.py test
29
30 References
31 ----------
32 For more information please see:
33 ``https://docs.djangoproject.com/en/2.0/ref/django-admin/``
34 """
35
36 import os
37 import sys
38
39 if __name__ == "__main__":
40
41 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "jwql_proj.settings")
42
43 try:
44 from django.core.management import execute_from_command_line
45 except ImportError as exc:
46 raise ImportError(
47 "Couldn't import Django. Are you sure it's installed and "
48 "available on your PYTHONPATH environment variable? Did you "
49 "forget to activate a virtual environment?"
50 ) from exc
51 execute_from_command_line(sys.argv)
52
[end of jwql/website/manage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jwql/website/manage.py b/jwql/website/manage.py
--- a/jwql/website/manage.py
+++ b/jwql/website/manage.py
@@ -36,10 +36,25 @@
import os
import sys
+from jwql.utils.utils import get_config
+
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "jwql_proj.settings")
+ directory_mapping = {
+ 'filesystem': 'filesystem',
+ 'outputs': 'outputs',
+ 'preview_image_filesystem': 'preview_images',
+ 'thumbnail_filesystem': 'thumbnails'
+ }
+
+ for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnail_filesystem']:
+ symlink_location = os.path.join(os.path.dirname(__file__), 'apps', 'jwql', 'static', directory_mapping[directory])
+ if not os.path.exists(symlink_location):
+ symlink_path = get_config()[directory]
+ os.symlink(symlink_path, symlink_location)
+
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
|
{"golden_diff": "diff --git a/jwql/website/manage.py b/jwql/website/manage.py\n--- a/jwql/website/manage.py\n+++ b/jwql/website/manage.py\n@@ -36,10 +36,25 @@\n import os\n import sys\n \n+from jwql.utils.utils import get_config\n+\n if __name__ == \"__main__\":\n \n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"jwql_proj.settings\")\n \n+ directory_mapping = {\n+ 'filesystem': 'filesystem',\n+ 'outputs': 'outputs',\n+ 'preview_image_filesystem': 'preview_images',\n+ 'thumbnail_filesystem': 'thumbnails'\n+ }\n+\n+ for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnail_filesystem']:\n+ symlink_location = os.path.join(os.path.dirname(__file__), 'apps', 'jwql', 'static', directory_mapping[directory])\n+ if not os.path.exists(symlink_location):\n+ symlink_path = get_config()[directory]\n+ os.symlink(symlink_path, symlink_location)\n+\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n", "issue": "Add code to manage.py to create necessesary symlinks to run web app locally \nIn order to run the JWQL web app locally, one must create symbolic links to the `outputs`, `thumbnails`, `preview_images`, and `filesystem` directories. We can add some code in `website.manage.py` in order to do this automatically. Something like this:\r\n\r\n\r\n```python\r\nfrom jwql.utils.utils import get_config()\r\n\r\nif __name__ == \"__main__\":\r\n\r\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"jwql_proj.settings\")\r\n\r\n # Create symbolic links here (if they don't already exist)\r\n for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnails_filesystem']:\r\n path = get_config()[directory]\r\n # code to create symlink\r\n\r\n try:\r\n from django.core.management import execute_from_command_line\r\n except ImportError as exc:\r\n raise ImportError(\r\n \"Couldn't import Django. Are you sure it's installed and \"\r\n \"available on your PYTHONPATH environment variable? Did you \"\r\n \"forget to activate a virtual environment?\"\r\n ) from exc\r\n execute_from_command_line(sys.argv)\r\n```\r\n\r\nCredit @york-stsci for the suggestion!\n", "before_files": [{"content": "#! /usr/bin/env python\n\n\"\"\"Utility module for administrative tasks.\n\nA python script version of Django's command-line utility for\nadministrative tasks (``django-admin``). Additionally, puts the project\npackage on ``sys.path`` and defines the ``DJANGO_SETTINGS_MODULE``\nvariable to point to the jwql ``settings.py`` file.\n\nGenerated by ``django-admin startproject`` using Django 2.0.1.\n\nUse\n---\n\n To run the web app server:\n ::\n\n python manage.py runserver\n\n To start the interactive shellL:\n ::\n\n python manage.py shell\n\n To run tests for all installed apps:\n ::\n\n python manage.py test\n\nReferences\n----------\nFor more information please see:\n ``https://docs.djangoproject.com/en/2.0/ref/django-admin/``\n\"\"\"\n\nimport os\nimport sys\n\nif __name__ == \"__main__\":\n\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"jwql_proj.settings\")\n\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n raise ImportError(\n \"Couldn't import Django. Are you sure it's installed and \"\n \"available on your PYTHONPATH environment variable? Did you \"\n \"forget to activate a virtual environment?\"\n ) from exc\n execute_from_command_line(sys.argv)\n", "path": "jwql/website/manage.py"}]}
| 1,178 | 255 |
gh_patches_debug_20691 | rasdani/github-patches | git_diff | ephios-dev__ephios-525 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Select2 on disposition view shows error alert
Closing the select2 field in the disposition view without selecting an entry (e.g. by typing something into the field an then clicking somewhere outside the field) also triggers the form submission. This fails because no valid user has been selected and consequently shows an ugly alert to the user.
</issue>
<code>
[start of ephios/core/context.py]
1 import importlib
2
3 from django.conf import settings
4 from django.templatetags.static import static
5 from django.utils.translation import get_language
6
7 from ephios.core.models import AbstractParticipation
8 from ephios.core.signals import footer_link
9
10 # suggested in https://github.com/python-poetry/poetry/issues/273
11 EPHIOS_VERSION = "v" + importlib.metadata.version("ephios")
12
13
14 def ephios_base_context(request):
15 footer = {}
16 for _, result in footer_link.send(None, request=request):
17 for label, url in result.items():
18 footer[label] = url
19
20 datatables_translation_url = None
21 if get_language() == "de-de":
22 datatables_translation_url = static("datatables/german.json")
23
24 return {
25 "ParticipationStates": AbstractParticipation.States,
26 "footer": footer,
27 "datatables_translation_url": datatables_translation_url,
28 "ephios_version": EPHIOS_VERSION,
29 "SITE_URL": settings.SITE_URL,
30 "PWA_APP_ICONS": settings.PWA_APP_ICONS,
31 }
32
[end of ephios/core/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ephios/core/context.py b/ephios/core/context.py
--- a/ephios/core/context.py
+++ b/ephios/core/context.py
@@ -1,7 +1,6 @@
import importlib
from django.conf import settings
-from django.templatetags.static import static
from django.utils.translation import get_language
from ephios.core.models import AbstractParticipation
@@ -17,14 +16,10 @@
for label, url in result.items():
footer[label] = url
- datatables_translation_url = None
- if get_language() == "de-de":
- datatables_translation_url = static("datatables/german.json")
-
return {
"ParticipationStates": AbstractParticipation.States,
"footer": footer,
- "datatables_translation_url": datatables_translation_url,
+ "LANGUAGE_CODE": get_language(),
"ephios_version": EPHIOS_VERSION,
"SITE_URL": settings.SITE_URL,
"PWA_APP_ICONS": settings.PWA_APP_ICONS,
|
{"golden_diff": "diff --git a/ephios/core/context.py b/ephios/core/context.py\n--- a/ephios/core/context.py\n+++ b/ephios/core/context.py\n@@ -1,7 +1,6 @@\n import importlib\n \n from django.conf import settings\n-from django.templatetags.static import static\n from django.utils.translation import get_language\n \n from ephios.core.models import AbstractParticipation\n@@ -17,14 +16,10 @@\n for label, url in result.items():\n footer[label] = url\n \n- datatables_translation_url = None\n- if get_language() == \"de-de\":\n- datatables_translation_url = static(\"datatables/german.json\")\n-\n return {\n \"ParticipationStates\": AbstractParticipation.States,\n \"footer\": footer,\n- \"datatables_translation_url\": datatables_translation_url,\n+ \"LANGUAGE_CODE\": get_language(),\n \"ephios_version\": EPHIOS_VERSION,\n \"SITE_URL\": settings.SITE_URL,\n \"PWA_APP_ICONS\": settings.PWA_APP_ICONS,\n", "issue": "Select2 on disposition view shows error alert\nClosing the select2 field in the disposition view without selecting an entry (e.g. by typing something into the field an then clicking somewhere outside the field) also triggers the form submission. This fails because no valid user has been selected and consequently shows an ugly alert to the user.\n", "before_files": [{"content": "import importlib\n\nfrom django.conf import settings\nfrom django.templatetags.static import static\nfrom django.utils.translation import get_language\n\nfrom ephios.core.models import AbstractParticipation\nfrom ephios.core.signals import footer_link\n\n# suggested in https://github.com/python-poetry/poetry/issues/273\nEPHIOS_VERSION = \"v\" + importlib.metadata.version(\"ephios\")\n\n\ndef ephios_base_context(request):\n footer = {}\n for _, result in footer_link.send(None, request=request):\n for label, url in result.items():\n footer[label] = url\n\n datatables_translation_url = None\n if get_language() == \"de-de\":\n datatables_translation_url = static(\"datatables/german.json\")\n\n return {\n \"ParticipationStates\": AbstractParticipation.States,\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n \"SITE_URL\": settings.SITE_URL,\n \"PWA_APP_ICONS\": settings.PWA_APP_ICONS,\n }\n", "path": "ephios/core/context.py"}]}
| 895 | 232 |
gh_patches_debug_2477 | rasdani/github-patches | git_diff | evennia__evennia-2708 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG - Develop] Global scripts are not being started, only restarted.
#### Describe the bug
When creating a new game, global scripts are not automatically started.
#### To Reproduce
Steps to reproduce the behavior:
1. Create a new game dir.
2. Create a script with `at_repeat` and add it to your server conf.
3. `evennia migrate` and `evennia start`
4. Check global scripts status.
5. See error
#### Expected behavior
Global scripts are supposed to start automatically.
#### Develop-branch commit
f093c8bcb
#### Additional context
I used the following dict in my settings:
```python
GLOBAL_SCRIPTS = {
"test_script": {
"typeclass": "typeclasses.scripts.TestScript",
"persistent": True,
"interval": 20,
"desc": "Testing script starting and iteration."
},
}
```
And the following script class:
```python
class TestScript(Script):
def at_repeat(self):
print("hey, listen")
```
The script loaded correctly, and I was able to manually start it. After fully stopping and then starting the server, the script was automatically started again.
However, if I create a new game, start the server, and I _don't_ manually start the script, then fully stopping and then starting the server again does _not_ automatically start the script.
</issue>
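A paraphrased sketch of the control flow the reporter seems to expect (this is not the actual evennia source): `start()` should run for scripts that already exist in the database as well as for freshly created ones, otherwise only scripts that were started manually at least once come back after a restart.

```python
def load_global_script(typeclass, key, **kwargs):
    """Illustrative only: fetch-or-create a global script, then start it."""
    script = typeclass.objects.filter(db_key=key).first()
    if not script:
        script, errors = typeclass.create(key=key, **kwargs)
        if errors:
            return None
    script.start()  # outside the creation branch, so existing scripts start too
    return script
```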
<code>
[start of evennia/utils/containers.py]
1 """
2 Containers
3
4 Containers are storage classes usually initialized from a setting. They
5 represent Singletons and acts as a convenient place to find resources (
6 available as properties on the singleton)
7
8 evennia.GLOBAL_SCRIPTS
9 evennia.OPTION_CLASSES
10
11 """
12
13
14 from pickle import dumps
15 from django.conf import settings
16 from evennia.utils.utils import class_from_module, callables_from_module
17 from evennia.utils import logger
18
19
20 SCRIPTDB = None
21
22
23 class Container:
24 """
25 Base container class. A container is simply a storage object whose
26 properties can be acquired as a property on it. This is generally
27 considered a read-only affair.
28
29 The container is initialized by a list of modules containing callables.
30
31 """
32
33 storage_modules = []
34
35 def __init__(self):
36 """
37 Read data from module.
38
39 """
40 self.loaded_data = None
41
42 def load_data(self):
43 """
44 Delayed import to avoid eventual circular imports from inside
45 the storage modules.
46
47 """
48 if self.loaded_data is None:
49 self.loaded_data = {}
50 for module in self.storage_modules:
51 self.loaded_data.update(callables_from_module(module))
52
53 def __getattr__(self, key):
54 return self.get(key)
55
56 def get(self, key, default=None):
57 """
58 Retrive data by key (in case of not knowing it beforehand).
59
60 Args:
61 key (str): The name of the script.
62 default (any, optional): Value to return if key is not found.
63
64 Returns:
65 any (any): The data loaded on this container.
66
67 """
68 self.load_data()
69 return self.loaded_data.get(key, default)
70
71 def all(self):
72 """
73 Get all stored data
74
75 Returns:
76 scripts (list): All global script objects stored on the container.
77
78 """
79 self.load_data()
80 return list(self.loaded_data.values())
81
82
83 class OptionContainer(Container):
84 """
85 Loads and stores the final list of OPTION CLASSES.
86
87 Can access these as properties or dictionary-contents.
88 """
89
90 storage_modules = settings.OPTION_CLASS_MODULES
91
92
93 class GlobalScriptContainer(Container):
94 """
95 Simple Handler object loaded by the Evennia API to contain and manage a
96 game's Global Scripts. This will list global Scripts created on their own
97 but will also auto-(re)create scripts defined in `settings.GLOBAL_SCRIPTS`.
98
99 Example:
100 import evennia
101 evennia.GLOBAL_SCRIPTS.scriptname
102
103 Note:
104 This does not use much of the BaseContainer since it's not loading
105 callables from settings but a custom dict of tuples.
106
107 """
108
109 def __init__(self):
110 """
111 Note: We must delay loading of typeclasses since this module may get
112 initialized before Scripts are actually initialized.
113
114 """
115 self.typeclass_storage = None
116 self.loaded_data = {
117 key: {} if data is None else data for key, data in settings.GLOBAL_SCRIPTS.items()
118 }
119
120 def _get_scripts(self, key=None, default=None):
121 global SCRIPTDB
122 if not SCRIPTDB:
123 from evennia.scripts.models import ScriptDB as SCRIPTDB
124 if key:
125 try:
126 return SCRIPTDB.objects.get(db_key__exact=key, db_obj__isnull=True)
127 except SCRIPTDB.DoesNotExist:
128 return default
129 else:
130 return SCRIPTDB.objects.filter(db_obj__isnull=True)
131
132 def _load_script(self, key):
133 self.load_data()
134
135 typeclass = self.typeclass_storage[key]
136 script = typeclass.objects.filter(
137 db_key=key, db_account__isnull=True, db_obj__isnull=True
138 ).first()
139
140 kwargs = {**self.loaded_data[key]}
141 kwargs["key"] = key
142 kwargs["persistent"] = kwargs.get("persistent", True)
143
144 compare_hash = str(dumps(kwargs, protocol=4))
145
146 if script:
147 script_hash = script.attributes.get("global_script_settings", category="settings_hash")
148 if script_hash is None:
149 # legacy - store the hash anew and assume no change
150 script.attributes.add(
151 "global_script_settings", compare_hash, category="settings_hash"
152 )
153 elif script_hash != compare_hash:
154 # wipe the old version and create anew
155 logger.log_info(f"GLOBAL_SCRIPTS: Settings changed for {key} ({typeclass}).")
156 script.stop()
157 script.delete()
158 script = None
159
160 if not script:
161 logger.log_info(f"GLOBAL_SCRIPTS: (Re)creating {key} ({typeclass}).")
162
163 script, errors = typeclass.create(**kwargs)
164 if errors:
165 logger.log_err("\n".join(errors))
166 return None
167
168 # store a hash representation of the setup
169 script.attributes.add("_global_script_settings", compare_hash, category="settings_hash")
170 script.start()
171
172 return script
173
174 def start(self):
175 """
176 Called last in evennia.__init__ to initialize the container late
177 (after script typeclasses have finished loading).
178
179 We include all global scripts in the handler and
180 make sure to auto-load time-based scripts.
181
182 """
183 # populate self.typeclass_storage
184 self.load_data()
185
186 # start registered scripts
187 for key in self.loaded_data:
188 self._load_script(key)
189
190 def load_data(self):
191 """
192 This delayed import avoids trying to load Scripts before they are
193 initialized.
194
195 """
196 if self.typeclass_storage is None:
197 self.typeclass_storage = {}
198 for key, data in self.loaded_data.items():
199 try:
200 typeclass = data.get("typeclass", settings.BASE_SCRIPT_TYPECLASS)
201 self.typeclass_storage[key] = class_from_module(typeclass)
202 except Exception:
203 logger.log_trace(
204 f"GlobalScriptContainer could not start import global script {key}."
205 )
206
207 def get(self, key, default=None):
208 """
209 Retrive data by key (in case of not knowing it beforehand). Any
210 scripts that are in settings.GLOBAL_SCRIPTS that are not found
211 will be recreated on-demand.
212
213 Args:
214 key (str): The name of the script.
215 default (any, optional): Value to return if key is not found
216 at all on this container (i.e it cannot be loaded at all).
217
218 Returns:
219 any (any): The data loaded on this container.
220 """
221 res = self._get_scripts(key)
222 if not res:
223 if key in self.loaded_data:
224 # recreate if we have the info
225 return self._load_script(key) or default
226 return default
227 return res
228
229 def all(self):
230 """
231 Get all global scripts. Note that this will not auto-start
232 scripts defined in settings.
233
234 Returns:
235 scripts (list): All global script objects stored on the container.
236
237 """
238 self.typeclass_storage = None
239 self.load_data()
240 for key in self.loaded_data:
241 self._load_script(key)
242 return self._get_scripts(None)
243
244
245 # Create all singletons
246
247 GLOBAL_SCRIPTS = GlobalScriptContainer()
248 OPTION_CLASSES = OptionContainer()
249
[end of evennia/utils/containers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evennia/utils/containers.py b/evennia/utils/containers.py
--- a/evennia/utils/containers.py
+++ b/evennia/utils/containers.py
@@ -167,7 +167,7 @@
# store a hash representation of the setup
script.attributes.add("_global_script_settings", compare_hash, category="settings_hash")
- script.start()
+ script.start()
return script
|
{"golden_diff": "diff --git a/evennia/utils/containers.py b/evennia/utils/containers.py\n--- a/evennia/utils/containers.py\n+++ b/evennia/utils/containers.py\n@@ -167,7 +167,7 @@\n \n # store a hash representation of the setup\n script.attributes.add(\"_global_script_settings\", compare_hash, category=\"settings_hash\")\n- script.start()\n+ script.start()\n \n return script\n", "issue": "[BUG - Develop] Global scripts are not being started, only restarted.\n#### Describe the bug\r\nWhen creating a new game, global scripts are not automatically started.\r\n\r\n#### To Reproduce\r\nSteps to reproduce the behavior:\r\n1. Create a new game dir.\r\n2. Create a script with `at_repeat` and add it to your server conf.\r\n3. `evennia migrate` and `evennia start`\r\n4. Check global scripts status.\r\n5. See error\r\n\r\n#### Expected behavior\r\nGlobal scripts are supposed to start automatically.\r\n\r\n#### Develop-branch commit\r\nf093c8bcb\r\n\r\n#### Additional context\r\nI used the following dict in my settings:\r\n```python\r\nGLOBAL_SCRIPTS = {\r\n \"test_script\": {\r\n \"typeclass\": \"typeclasses.scripts.TestScript\",\r\n \"persistent\": True,\r\n \"interval\": 20,\r\n \"desc\": \"Testing script starting and iteration.\"\r\n },\r\n\r\n }\r\n```\r\nAnd the following script class:\r\n```python\r\nclass TestScript(Script):\r\n\r\n def at_repeat(self):\r\n print(\"hey, listen\")\r\n```\r\n\r\nThe script loaded correctly, and I was able to manually start it. After fully stopping and then starting the server, the script was automatically started again.\r\n\r\nHowever, if I create a new game, start the server, and I _don't_ manually start the script, then fully stopping and then starting the server again does _not_ automatically start the script.\n", "before_files": [{"content": "\"\"\"\nContainers\n\nContainers are storage classes usually initialized from a setting. They\nrepresent Singletons and acts as a convenient place to find resources (\navailable as properties on the singleton)\n\nevennia.GLOBAL_SCRIPTS\nevennia.OPTION_CLASSES\n\n\"\"\"\n\n\nfrom pickle import dumps\nfrom django.conf import settings\nfrom evennia.utils.utils import class_from_module, callables_from_module\nfrom evennia.utils import logger\n\n\nSCRIPTDB = None\n\n\nclass Container:\n \"\"\"\n Base container class. A container is simply a storage object whose\n properties can be acquired as a property on it. 
This is generally\n considered a read-only affair.\n\n The container is initialized by a list of modules containing callables.\n\n \"\"\"\n\n storage_modules = []\n\n def __init__(self):\n \"\"\"\n Read data from module.\n\n \"\"\"\n self.loaded_data = None\n\n def load_data(self):\n \"\"\"\n Delayed import to avoid eventual circular imports from inside\n the storage modules.\n\n \"\"\"\n if self.loaded_data is None:\n self.loaded_data = {}\n for module in self.storage_modules:\n self.loaded_data.update(callables_from_module(module))\n\n def __getattr__(self, key):\n return self.get(key)\n\n def get(self, key, default=None):\n \"\"\"\n Retrive data by key (in case of not knowing it beforehand).\n\n Args:\n key (str): The name of the script.\n default (any, optional): Value to return if key is not found.\n\n Returns:\n any (any): The data loaded on this container.\n\n \"\"\"\n self.load_data()\n return self.loaded_data.get(key, default)\n\n def all(self):\n \"\"\"\n Get all stored data\n\n Returns:\n scripts (list): All global script objects stored on the container.\n\n \"\"\"\n self.load_data()\n return list(self.loaded_data.values())\n\n\nclass OptionContainer(Container):\n \"\"\"\n Loads and stores the final list of OPTION CLASSES.\n\n Can access these as properties or dictionary-contents.\n \"\"\"\n\n storage_modules = settings.OPTION_CLASS_MODULES\n\n\nclass GlobalScriptContainer(Container):\n \"\"\"\n Simple Handler object loaded by the Evennia API to contain and manage a\n game's Global Scripts. This will list global Scripts created on their own\n but will also auto-(re)create scripts defined in `settings.GLOBAL_SCRIPTS`.\n\n Example:\n import evennia\n evennia.GLOBAL_SCRIPTS.scriptname\n\n Note:\n This does not use much of the BaseContainer since it's not loading\n callables from settings but a custom dict of tuples.\n\n \"\"\"\n\n def __init__(self):\n \"\"\"\n Note: We must delay loading of typeclasses since this module may get\n initialized before Scripts are actually initialized.\n\n \"\"\"\n self.typeclass_storage = None\n self.loaded_data = {\n key: {} if data is None else data for key, data in settings.GLOBAL_SCRIPTS.items()\n }\n\n def _get_scripts(self, key=None, default=None):\n global SCRIPTDB\n if not SCRIPTDB:\n from evennia.scripts.models import ScriptDB as SCRIPTDB\n if key:\n try:\n return SCRIPTDB.objects.get(db_key__exact=key, db_obj__isnull=True)\n except SCRIPTDB.DoesNotExist:\n return default\n else:\n return SCRIPTDB.objects.filter(db_obj__isnull=True)\n\n def _load_script(self, key):\n self.load_data()\n\n typeclass = self.typeclass_storage[key]\n script = typeclass.objects.filter(\n db_key=key, db_account__isnull=True, db_obj__isnull=True\n ).first()\n\n kwargs = {**self.loaded_data[key]}\n kwargs[\"key\"] = key\n kwargs[\"persistent\"] = kwargs.get(\"persistent\", True)\n\n compare_hash = str(dumps(kwargs, protocol=4))\n\n if script:\n script_hash = script.attributes.get(\"global_script_settings\", category=\"settings_hash\")\n if script_hash is None:\n # legacy - store the hash anew and assume no change\n script.attributes.add(\n \"global_script_settings\", compare_hash, category=\"settings_hash\"\n )\n elif script_hash != compare_hash:\n # wipe the old version and create anew\n logger.log_info(f\"GLOBAL_SCRIPTS: Settings changed for {key} ({typeclass}).\")\n script.stop()\n script.delete()\n script = None\n\n if not script:\n logger.log_info(f\"GLOBAL_SCRIPTS: (Re)creating {key} ({typeclass}).\")\n\n script, errors = typeclass.create(**kwargs)\n if 
errors:\n logger.log_err(\"\\n\".join(errors))\n return None\n\n # store a hash representation of the setup\n script.attributes.add(\"_global_script_settings\", compare_hash, category=\"settings_hash\")\n script.start()\n\n return script\n\n def start(self):\n \"\"\"\n Called last in evennia.__init__ to initialize the container late\n (after script typeclasses have finished loading).\n\n We include all global scripts in the handler and\n make sure to auto-load time-based scripts.\n\n \"\"\"\n # populate self.typeclass_storage\n self.load_data()\n\n # start registered scripts\n for key in self.loaded_data:\n self._load_script(key)\n\n def load_data(self):\n \"\"\"\n This delayed import avoids trying to load Scripts before they are\n initialized.\n\n \"\"\"\n if self.typeclass_storage is None:\n self.typeclass_storage = {}\n for key, data in self.loaded_data.items():\n try:\n typeclass = data.get(\"typeclass\", settings.BASE_SCRIPT_TYPECLASS)\n self.typeclass_storage[key] = class_from_module(typeclass)\n except Exception:\n logger.log_trace(\n f\"GlobalScriptContainer could not start import global script {key}.\"\n )\n\n def get(self, key, default=None):\n \"\"\"\n Retrive data by key (in case of not knowing it beforehand). Any\n scripts that are in settings.GLOBAL_SCRIPTS that are not found\n will be recreated on-demand.\n\n Args:\n key (str): The name of the script.\n default (any, optional): Value to return if key is not found\n at all on this container (i.e it cannot be loaded at all).\n\n Returns:\n any (any): The data loaded on this container.\n \"\"\"\n res = self._get_scripts(key)\n if not res:\n if key in self.loaded_data:\n # recreate if we have the info\n return self._load_script(key) or default\n return default\n return res\n\n def all(self):\n \"\"\"\n Get all global scripts. Note that this will not auto-start\n scripts defined in settings.\n\n Returns:\n scripts (list): All global script objects stored on the container.\n\n \"\"\"\n self.typeclass_storage = None\n self.load_data()\n for key in self.loaded_data:\n self._load_script(key)\n return self._get_scripts(None)\n\n\n# Create all singletons\n\nGLOBAL_SCRIPTS = GlobalScriptContainer()\nOPTION_CLASSES = OptionContainer()\n", "path": "evennia/utils/containers.py"}]}
| 3,000 | 97 |
gh_patches_debug_7077
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-21291
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cholesky_solve
</issue>
<code>
[start of ivy/functional/frontends/paddle/tensor/linalg.py]
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle import promote_types_of_paddle_inputs
5 from ivy.functional.frontends.paddle.func_wrapper import (
6 to_ivy_arrays_and_back,
7 )
8
9
10 @with_supported_dtypes(
11 {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
12 )
13 @to_ivy_arrays_and_back
14 def cross(x, y, /, *, axis=9, name=None):
15 x, y = promote_types_of_paddle_inputs(x, y)
16 return ivy.cross(x, y, axis=axis)
17
18
19 # matmul
20 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
21 @to_ivy_arrays_and_back
22 def matmul(x, y, transpose_x=False, transpose_y=False, name=None):
23 x, y = promote_types_of_paddle_inputs(x, y)
24 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
25
26
27 # norm
28 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
29 @to_ivy_arrays_and_back
30 def norm(x, p="fro", axis=None, keepdim=False, name=None):
31 if axis is None and p is not None:
32 if p == "fro":
33 p = 2
34 ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)
35 if keepdim:
36 ret = ret.reshape([1] * len(x.shape))
37 if len(ret.shape) == 0:
38 return ivy.array([ret])
39 return ret
40
41 if isinstance(axis, tuple):
42 axis = list(axis)
43 if isinstance(axis, list) and len(axis) == 1:
44 axis = axis[0]
45
46 if isinstance(axis, int):
47 if p == "fro":
48 p = 2
49 if p in [0, 1, 2, ivy.inf, -ivy.inf]:
50 ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)
51 elif isinstance(p, (int, float)):
52 ret = ivy.pow(
53 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
54 float(1.0 / p),
55 )
56
57 elif isinstance(axis, list) and len(axis) == 2:
58 if p == 0:
59 raise ValueError
60 elif p == 1:
61 ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)
62 elif p == 2 or p == "fro":
63 ret = ivy.matrix_norm(x, ord="fro", axis=axis, keepdims=keepdim)
64 elif p == ivy.inf:
65 ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)
66 elif p == -ivy.inf:
67 ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)
68 elif isinstance(p, (int, float)) and p > 0:
69 ret = ivy.pow(
70 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
71 float(1.0 / p),
72 )
73 else:
74 raise ValueError
75
76 else:
77 raise ValueError
78
79 if len(ret.shape) == 0:
80 ret = ivy.array(
81 [ret]
82 ) # this is done so as to match shape of output from paddle
83 return ret
84
85
86 # eig
87 @to_ivy_arrays_and_back
88 def eig(x, name=None):
89 return ivy.eig(x)
90
91
92 # eigvals
93 @to_ivy_arrays_and_back
94 def eigvals(x, name=None):
95 return ivy.eigvals(x)
96
97
98 # eigvalsh
99 @to_ivy_arrays_and_back
100 def eigvalsh(x, UPLO="L", name=None):
101 return ivy.eigvalsh(x, UPLO=UPLO)
102
103
104 # eigh
105 @to_ivy_arrays_and_back
106 def eigh(x, UPLO="L", name=None):
107 return ivy.eigh(x, UPLO=UPLO)
108
109
110 # pinv
111 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
112 @to_ivy_arrays_and_back
113 def pinv(x, rcond=1e-15, hermitian=False, name=None):
114 # TODO: Add hermitian functionality
115 return ivy.pinv(x, rtol=rcond)
116
117
118 # solve
119 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
120 @to_ivy_arrays_and_back
121 def solve(x1, x2, name=None):
122 return ivy.solve(x1, x2)
123
124
125 # cholesky
126 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
127 @to_ivy_arrays_and_back
128 def cholesky(x, /, *, upper=False, name=None):
129 return ivy.cholesky(x, upper=upper)
130
131
132 # bmm
133 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
134 @to_ivy_arrays_and_back
135 def bmm(x, y, transpose_x=False, transpose_y=False, name=None):
136 if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:
137 raise RuntimeError("input must be 3D matrices")
138 x, y = promote_types_of_paddle_inputs(x, y)
139 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
140
141
142 # matrix_power
143 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
144 @to_ivy_arrays_and_back
145 def matrix_power(x, n, name=None):
146 return ivy.matrix_power(x, n)
147
148
149 # cond
150 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
151 @to_ivy_arrays_and_back
152 def cond(x, p=None, name=None):
153 ret = ivy.cond(x, p=p, out=name)
154 if ret.shape == ():
155 ret = ret.reshape((1,))
156 return ret
157
158
159 # dot
160 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
161 @to_ivy_arrays_and_back
162 def dot(x, y, name=None):
163 x, y = promote_types_of_paddle_inputs(x, y)
164 out = ivy.multiply(x, y)
165 return ivy.sum(out, axis=ivy.get_num_dims(x) - 1, keepdims=False)
166
167
168 # transpose
169 @with_unsupported_dtypes({"2.5.1 and below": ("uint8", "int8", "int16")}, "paddle")
170 @to_ivy_arrays_and_back
171 def transpose(x, perm, name=None):
172 return ivy.permute_dims(x, axes=perm)
173
174
175 @with_supported_dtypes({"2.4.1 and above": ("int64",)}, "paddle")
176 @to_ivy_arrays_and_back
177 def bincount(x, weights=None, minlength=0, name=None):
178 return ivy.bincount(x, weights=weights, minlength=minlength)
179
180
181 @with_supported_dtypes({"2.4.1 and above": ("float64", "float32")}, "paddle")
182 @to_ivy_arrays_and_back
183 def dist(x, y, p=2):
184 ret = ivy.vector_norm(ivy.subtract(x, y), ord=p)
185 return ivy.reshape(ret, (1,))
186
[end of ivy/functional/frontends/paddle/tensor/linalg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py
--- a/ivy/functional/frontends/paddle/tensor/linalg.py
+++ b/ivy/functional/frontends/paddle/tensor/linalg.py
@@ -122,6 +122,16 @@
return ivy.solve(x1, x2)
+# cholesky_solve
+@with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
+@to_ivy_arrays_and_back
+def cholesky_solve(x, y, /, *, upper=False, name=None):
+ if upper:
+ y = ivy.matrix_transpose(y)
+ Y = ivy.solve(y, x)
+ return ivy.solve(ivy.matrix_transpose(y), Y)
+
+
# cholesky
@with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
|
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py\n--- a/ivy/functional/frontends/paddle/tensor/linalg.py\n+++ b/ivy/functional/frontends/paddle/tensor/linalg.py\n@@ -122,6 +122,16 @@\n return ivy.solve(x1, x2)\n \n \n+# cholesky_solve\n+@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def cholesky_solve(x, y, /, *, upper=False, name=None):\n+ if upper:\n+ y = ivy.matrix_transpose(y)\n+ Y = ivy.solve(y, x)\n+ return ivy.solve(ivy.matrix_transpose(y), Y)\n+\n+\n # cholesky\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n @to_ivy_arrays_and_back\n", "issue": " cholesky_solve\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# norm\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n if len(ret.shape) == 0:\n return ivy.array([ret])\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n if len(ret.shape) == 0:\n ret = ivy.array(\n [ret]\n ) # this is done so as to match shape of output from paddle\n return ret\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# 
eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# pinv\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# solve\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef solve(x1, x2, name=None):\n return ivy.solve(x1, x2)\n\n\n# cholesky\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# matrix_power\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matrix_power(x, n, name=None):\n return ivy.matrix_power(x, n)\n\n\n# cond\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cond(x, p=None, name=None):\n ret = ivy.cond(x, p=p, out=name)\n if ret.shape == ():\n ret = ret.reshape((1,))\n return ret\n\n\n# dot\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef dot(x, y, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n out = ivy.multiply(x, y)\n return ivy.sum(out, axis=ivy.get_num_dims(x) - 1, keepdims=False)\n\n\n# transpose\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"uint8\", \"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef transpose(x, perm, name=None):\n return ivy.permute_dims(x, axes=perm)\n\n\n@with_supported_dtypes({\"2.4.1 and above\": (\"int64\",)}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bincount(x, weights=None, minlength=0, name=None):\n return ivy.bincount(x, weights=weights, minlength=minlength)\n\n\n@with_supported_dtypes({\"2.4.1 and above\": (\"float64\", \"float32\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef dist(x, y, p=2):\n ret = ivy.vector_norm(ivy.subtract(x, y), ord=p)\n return ivy.reshape(ret, (1,))\n", "path": "ivy/functional/frontends/paddle/tensor/linalg.py"}]}
| 2,789 | 241 |
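For context on the entry above: `cholesky_solve` solves the linear system A·x = b when only a Cholesky factor of A is supplied, which is exactly the two-triangular-solve composition the golden diff builds out of `ivy.solve` and `ivy.matrix_transpose`. A minimal NumPy sketch of the same idea (the function name, the `upper` convention, and the use of a general solver instead of a dedicated triangular solver are assumptions for illustration, not the Paddle frontend API):

```python
import numpy as np

def cholesky_solve(b, factor, upper=False):
    # Solve A @ x = b where A = L @ L.T (factor lower-triangular) or
    # A = U.T @ U (factor upper-triangular, upper=True), without forming A.
    L = factor.T if upper else factor
    y = np.linalg.solve(L, b)        # forward substitution: L @ y = b
    return np.linalg.solve(L.T, y)   # back substitution:    L.T @ x = y

# usage sketch on a small symmetric positive-definite matrix
A = np.array([[4.0, 2.0], [2.0, 3.0]])
b = np.array([[1.0], [2.0]])
L = np.linalg.cholesky(A)            # lower-triangular factor
x = cholesky_solve(b, L)
assert np.allclose(A @ x, b)
```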
gh_patches_debug_26734
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-4268
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
kivy/examples/android/takepicture/ fails on Android
Example cloned from Git, built with:
buildozer android debug
deployed to Android 4.4.4, it crashes; from the adb logcat output the following lines seem to be relevant:
I/python (25790): /data/data/org.test.takepicture/files/lib/python2.7/site-packages/kivy/core/image/img_pygame.py:13: RuntimeWarning: import cdrom: No module named cdrom
I/python (25790): Traceback (most recent call last):
I/python (25790): File "/home/jb/python/mread/.buildozer/android/app/main.py", line 32, in <module>
I/python (25790): ImportError: No module named PIL
I/python (25790): Python for android ended.
The second line indicates a problem with the image library; unfortunately I have no clue how to fix it.
</issue>
<code>
[start of examples/android/takepicture/main.py]
1 '''
2 Take picture
3 ============
4
5 .. author:: Mathieu Virbel <[email protected]>
6
7 Little example to demonstrate how to start an Intent, and get the result.
8 When you use the Android.startActivityForResult(), the result will be dispatched
9 into onActivityResult. You can catch the event with the android.activity API
10 from python-for-android project.
11
12 If you want to compile it, don't forget to add the CAMERA permission::
13
14 ./build.py --name 'TakePicture' --package org.test.takepicture \
15 --permission CAMERA --version 1 \
16 --private ~/code/kivy/examples/android/takepicture \
17 debug installd
18
19 '''
20
21 __version__ = '0.1'
22
23 from kivy.app import App
24 from os.path import exists
25 from jnius import autoclass, cast
26 from android import activity
27 from functools import partial
28 from kivy.clock import Clock
29 from kivy.uix.scatter import Scatter
30 from kivy.properties import StringProperty
31
32 from PIL import Image
33
34 Intent = autoclass('android.content.Intent')
35 PythonActivity = autoclass('org.renpy.android.PythonActivity')
36 MediaStore = autoclass('android.provider.MediaStore')
37 Uri = autoclass('android.net.Uri')
38 Environment = autoclass('android.os.Environment')
39
40
41 class Picture(Scatter):
42 source = StringProperty(None)
43
44
45 class TakePictureApp(App):
46 def build(self):
47 self.index = 0
48 activity.bind(on_activity_result=self.on_activity_result)
49
50 def get_filename(self):
51 while True:
52 self.index += 1
53 fn = (Environment.getExternalStorageDirectory().getPath() +
54 '/takepicture{}.jpg'.format(self.index))
55 if not exists(fn):
56 return fn
57
58 def take_picture(self):
59 intent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
60 self.last_fn = self.get_filename()
61 self.uri = Uri.parse('file://' + self.last_fn)
62 self.uri = cast('android.os.Parcelable', self.uri)
63 intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)
64 PythonActivity.mActivity.startActivityForResult(intent, 0x123)
65
66 def on_activity_result(self, requestCode, resultCode, intent):
67 if requestCode == 0x123:
68 Clock.schedule_once(partial(self.add_picture, self.last_fn), 0)
69
70 def add_picture(self, fn, *args):
71 im = Image.open(fn)
72 width, height = im.size
73 im.thumbnail((width / 4, height / 4), Image.ANTIALIAS)
74 im.save(fn, quality=95)
75 self.root.add_widget(Picture(source=fn, center=self.root.center))
76
77 def on_pause(self):
78 return True
79
80 TakePictureApp().run()
81
[end of examples/android/takepicture/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/android/takepicture/main.py b/examples/android/takepicture/main.py
--- a/examples/android/takepicture/main.py
+++ b/examples/android/takepicture/main.py
@@ -23,7 +23,7 @@
from kivy.app import App
from os.path import exists
from jnius import autoclass, cast
-from android import activity
+from android import activity, mActivity
from functools import partial
from kivy.clock import Clock
from kivy.uix.scatter import Scatter
@@ -32,7 +32,6 @@
from PIL import Image
Intent = autoclass('android.content.Intent')
-PythonActivity = autoclass('org.renpy.android.PythonActivity')
MediaStore = autoclass('android.provider.MediaStore')
Uri = autoclass('android.net.Uri')
Environment = autoclass('android.os.Environment')
@@ -61,7 +60,7 @@
self.uri = Uri.parse('file://' + self.last_fn)
self.uri = cast('android.os.Parcelable', self.uri)
intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)
- PythonActivity.mActivity.startActivityForResult(intent, 0x123)
+ mActivity.startActivityForResult(intent, 0x123)
def on_activity_result(self, requestCode, resultCode, intent):
if requestCode == 0x123:
|
{"golden_diff": "diff --git a/examples/android/takepicture/main.py b/examples/android/takepicture/main.py\n--- a/examples/android/takepicture/main.py\n+++ b/examples/android/takepicture/main.py\n@@ -23,7 +23,7 @@\n from kivy.app import App\n from os.path import exists\n from jnius import autoclass, cast\n-from android import activity\n+from android import activity, mActivity\n from functools import partial\n from kivy.clock import Clock\n from kivy.uix.scatter import Scatter\n@@ -32,7 +32,6 @@\n from PIL import Image\n \n Intent = autoclass('android.content.Intent')\n-PythonActivity = autoclass('org.renpy.android.PythonActivity')\n MediaStore = autoclass('android.provider.MediaStore')\n Uri = autoclass('android.net.Uri')\n Environment = autoclass('android.os.Environment')\n@@ -61,7 +60,7 @@\n self.uri = Uri.parse('file://' + self.last_fn)\n self.uri = cast('android.os.Parcelable', self.uri)\n intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)\n- PythonActivity.mActivity.startActivityForResult(intent, 0x123)\n+ mActivity.startActivityForResult(intent, 0x123)\n \n def on_activity_result(self, requestCode, resultCode, intent):\n if requestCode == 0x123:\n", "issue": "kivy/examples/android/takepicture/ fails on Android\nExample cloned form GIT build with:\nbuildozer android debug\ndeployed to Android 4.4.4 crashes, from adb logcat output teh following lines seem to be relevant:\n\nI/python (25790): /data/data/org.test.takepicture/files/lib/python2.7/site-packages/kivy/core/image/img_pygame.py:13: RuntimeWarning: import cdrom: No module named cdrom\n\nI/python (25790): Traceback (most recent call last):\nI/python (25790): File \"/home/jb/python/mread/.buildozer/android/app/main.py\", line 32, in <module>\nI/python (25790): ImportError: No module named PIL\nI/python (25790): Python for android ended.\n\nSecond line indicates problem with image library, unfortunately I have no clue how to fix it.\n\n", "before_files": [{"content": "'''\nTake picture\n============\n\n.. author:: Mathieu Virbel <[email protected]>\n\nLittle example to demonstrate how to start an Intent, and get the result.\nWhen you use the Android.startActivityForResult(), the result will be dispatched\ninto onActivityResult. 
You can catch the event with the android.activity API\nfrom python-for-android project.\n\nIf you want to compile it, don't forget to add the CAMERA permission::\n\n ./build.py --name 'TakePicture' --package org.test.takepicture \\\n --permission CAMERA --version 1 \\\n --private ~/code/kivy/examples/android/takepicture \\\n debug installd\n\n'''\n\n__version__ = '0.1'\n\nfrom kivy.app import App\nfrom os.path import exists\nfrom jnius import autoclass, cast\nfrom android import activity\nfrom functools import partial\nfrom kivy.clock import Clock\nfrom kivy.uix.scatter import Scatter\nfrom kivy.properties import StringProperty\n\nfrom PIL import Image\n\nIntent = autoclass('android.content.Intent')\nPythonActivity = autoclass('org.renpy.android.PythonActivity')\nMediaStore = autoclass('android.provider.MediaStore')\nUri = autoclass('android.net.Uri')\nEnvironment = autoclass('android.os.Environment')\n\n\nclass Picture(Scatter):\n source = StringProperty(None)\n\n\nclass TakePictureApp(App):\n def build(self):\n self.index = 0\n activity.bind(on_activity_result=self.on_activity_result)\n\n def get_filename(self):\n while True:\n self.index += 1\n fn = (Environment.getExternalStorageDirectory().getPath() +\n '/takepicture{}.jpg'.format(self.index))\n if not exists(fn):\n return fn\n\n def take_picture(self):\n intent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)\n self.last_fn = self.get_filename()\n self.uri = Uri.parse('file://' + self.last_fn)\n self.uri = cast('android.os.Parcelable', self.uri)\n intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)\n PythonActivity.mActivity.startActivityForResult(intent, 0x123)\n\n def on_activity_result(self, requestCode, resultCode, intent):\n if requestCode == 0x123:\n Clock.schedule_once(partial(self.add_picture, self.last_fn), 0)\n\n def add_picture(self, fn, *args):\n im = Image.open(fn)\n width, height = im.size\n im.thumbnail((width / 4, height / 4), Image.ANTIALIAS)\n im.save(fn, quality=95)\n self.root.add_widget(Picture(source=fn, center=self.root.center))\n\n def on_pause(self):\n return True\n\nTakePictureApp().run()\n", "path": "examples/android/takepicture/main.py"}]}
| 1,472 | 286 |
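On the runtime error in the entry above: the `ImportError: No module named PIL` comes from the Android build itself, because the Pillow package was never bundled into the APK, so no change to `main.py` alone can make the import succeed. A small defensive-import sketch that surfaces a clearer message when that happens (the hint text, including the reference to buildozer's `requirements` list, is an assumption about the reader's build setup rather than part of the example):

```python
# Guarded import so the failure explains itself instead of crashing at line 32
# of main.py (illustrative only; the real fix is to bundle Pillow with the APK).
try:
    from PIL import Image
except ImportError:
    raise ImportError(
        "Pillow is not available inside the APK. Add it to the Android build "
        "requirements (for buildozer, the 'requirements' list in buildozer.spec) "
        "and rebuild."
    )
```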
gh_patches_debug_6843
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-211
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Encountered an exception when running tools/compute_mean_std.py
```
Traceback (most recent call last):
File "tools/compute_mean_std.py", line 64, in <module>
main()
File "tools/compute_mean_std.py", line 59, in main
num_samples=args.num_samples)
File "/home/disk1/yangyaming/workspace/paddle/ds2/mandarin_unicom/default_conf/tools/../data_utils/normalizer.py", line 46, in __init__
self._compute_mean_std(manifest_path, featurize_func, num_samples)
File "/home/disk1/yangyaming/workspace/paddle/ds2/mandarin_unicom/default_conf/tools/../data_utils/normalizer.py", line 84, in _compute_mean_std
AudioSegment.from_file(instance["audio_filepath"])))
File "tools/compute_mean_std.py", line 53, in augment_and_featurize
return audio_featurizer.featurize(audio_segment)
File "/home/disk1/yangyaming/workspace/paddle/ds2/mandarin_unicom/default_conf/tools/../data_utils/featurizer/audio_featurizer.py", line 79, in featurize
allow_upsampling)):
NameError: global name 'allow_upsampling' is not defined
```
Looks like a typo problem.
</issue>
<code>
[start of deep_speech_2/data_utils/featurizer/audio_featurizer.py]
1 """Contains the audio featurizer class."""
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 import numpy as np
7 from data_utils import utils
8 from data_utils.audio import AudioSegment
9 from python_speech_features import mfcc
10 from python_speech_features import delta
11
12
13 class AudioFeaturizer(object):
14 """Audio featurizer, for extracting features from audio contents of
15 AudioSegment or SpeechSegment.
16
17 Currently, it supports feature types of linear spectrogram and mfcc.
18
19 :param specgram_type: Specgram feature type. Options: 'linear'.
20 :type specgram_type: str
21 :param stride_ms: Striding size (in milliseconds) for generating frames.
22 :type stride_ms: float
23 :param window_ms: Window size (in milliseconds) for generating frames.
24 :type window_ms: float
25 :param max_freq: When specgram_type is 'linear', only FFT bins
26 corresponding to frequencies between [0, max_freq] are
27 returned; when specgram_type is 'mfcc', max_feq is the
28 highest band edge of mel filters.
29 :types max_freq: None|float
30 :param target_sample_rate: Audio are resampled (if upsampling or
31 downsampling is allowed) to this before
32 extracting spectrogram features.
33 :type target_sample_rate: float
34 :param use_dB_normalization: Whether to normalize the audio to a certain
35 decibels before extracting the features.
36 :type use_dB_normalization: bool
37 :param target_dB: Target audio decibels for normalization.
38 :type target_dB: float
39 """
40
41 def __init__(self,
42 specgram_type='linear',
43 stride_ms=10.0,
44 window_ms=20.0,
45 max_freq=None,
46 target_sample_rate=16000,
47 use_dB_normalization=True,
48 target_dB=-20):
49 self._specgram_type = specgram_type
50 self._stride_ms = stride_ms
51 self._window_ms = window_ms
52 self._max_freq = max_freq
53 self._target_sample_rate = target_sample_rate
54 self._use_dB_normalization = use_dB_normalization
55 self._target_dB = target_dB
56
57 def featurize(self,
58 audio_segment,
59 allow_downsampling=True,
60 allow_upsamplling=True):
61 """Extract audio features from AudioSegment or SpeechSegment.
62
63 :param audio_segment: Audio/speech segment to extract features from.
64 :type audio_segment: AudioSegment|SpeechSegment
65 :param allow_downsampling: Whether to allow audio downsampling before
66 featurizing.
67 :type allow_downsampling: bool
68 :param allow_upsampling: Whether to allow audio upsampling before
69 featurizing.
70 :type allow_upsampling: bool
71 :return: Spectrogram audio feature in 2darray.
72 :rtype: ndarray
73 :raises ValueError: If audio sample rate is not supported.
74 """
75 # upsampling or downsampling
76 if ((audio_segment.sample_rate > self._target_sample_rate and
77 allow_downsampling) or
78 (audio_segment.sample_rate < self._target_sample_rate and
79 allow_upsampling)):
80 audio_segment.resample(self._target_sample_rate)
81 if audio_segment.sample_rate != self._target_sample_rate:
82 raise ValueError("Audio sample rate is not supported. "
83 "Turn allow_downsampling or allow up_sampling on.")
84 # decibel normalization
85 if self._use_dB_normalization:
86 audio_segment.normalize(target_db=self._target_dB)
87 # extract spectrogram
88 return self._compute_specgram(audio_segment.samples,
89 audio_segment.sample_rate)
90
91 def _compute_specgram(self, samples, sample_rate):
92 """Extract various audio features."""
93 if self._specgram_type == 'linear':
94 return self._compute_linear_specgram(
95 samples, sample_rate, self._stride_ms, self._window_ms,
96 self._max_freq)
97 elif self._specgram_type == 'mfcc':
98 return self._compute_mfcc(samples, sample_rate, self._stride_ms,
99 self._window_ms, self._max_freq)
100 else:
101 raise ValueError("Unknown specgram_type %s. "
102 "Supported values: linear." % self._specgram_type)
103
104 def _compute_linear_specgram(self,
105 samples,
106 sample_rate,
107 stride_ms=10.0,
108 window_ms=20.0,
109 max_freq=None,
110 eps=1e-14):
111 """Compute the linear spectrogram from FFT energy."""
112 if max_freq is None:
113 max_freq = sample_rate / 2
114 if max_freq > sample_rate / 2:
115 raise ValueError("max_freq must be greater than half of "
116 "sample rate.")
117 if stride_ms > window_ms:
118 raise ValueError("Stride size must not be greater than "
119 "window size.")
120 stride_size = int(0.001 * sample_rate * stride_ms)
121 window_size = int(0.001 * sample_rate * window_ms)
122 specgram, freqs = self._specgram_real(
123 samples,
124 window_size=window_size,
125 stride_size=stride_size,
126 sample_rate=sample_rate)
127 ind = np.where(freqs <= max_freq)[0][-1] + 1
128 return np.log(specgram[:ind, :] + eps)
129
130 def _specgram_real(self, samples, window_size, stride_size, sample_rate):
131 """Compute the spectrogram for samples from a real signal."""
132 # extract strided windows
133 truncate_size = (len(samples) - window_size) % stride_size
134 samples = samples[:len(samples) - truncate_size]
135 nshape = (window_size, (len(samples) - window_size) // stride_size + 1)
136 nstrides = (samples.strides[0], samples.strides[0] * stride_size)
137 windows = np.lib.stride_tricks.as_strided(
138 samples, shape=nshape, strides=nstrides)
139 assert np.all(
140 windows[:, 1] == samples[stride_size:(stride_size + window_size)])
141 # window weighting, squared Fast Fourier Transform (fft), scaling
142 weighting = np.hanning(window_size)[:, None]
143 fft = np.fft.rfft(windows * weighting, axis=0)
144 fft = np.absolute(fft)**2
145 scale = np.sum(weighting**2) * sample_rate
146 fft[1:-1, :] *= (2.0 / scale)
147 fft[(0, -1), :] /= scale
148 # prepare fft frequency list
149 freqs = float(sample_rate) / window_size * np.arange(fft.shape[0])
150 return fft, freqs
151
152 def _compute_mfcc(self,
153 samples,
154 sample_rate,
155 stride_ms=10.0,
156 window_ms=20.0,
157 max_freq=None):
158 """Compute mfcc from samples."""
159 if max_freq is None:
160 max_freq = sample_rate / 2
161 if max_freq > sample_rate / 2:
162 raise ValueError("max_freq must not be greater than half of "
163 "sample rate.")
164 if stride_ms > window_ms:
165 raise ValueError("Stride size must not be greater than "
166 "window size.")
167 # compute the 13 cepstral coefficients, and the first one is replaced
168 # by log(frame energy)
169 mfcc_feat = mfcc(
170 signal=samples,
171 samplerate=sample_rate,
172 winlen=0.001 * window_ms,
173 winstep=0.001 * stride_ms,
174 highfreq=max_freq)
175 # Deltas
176 d_mfcc_feat = delta(mfcc_feat, 2)
177 # Deltas-Deltas
178 dd_mfcc_feat = delta(d_mfcc_feat, 2)
179 # transpose
180 mfcc_feat = np.transpose(mfcc_feat)
181 d_mfcc_feat = np.transpose(d_mfcc_feat)
182 dd_mfcc_feat = np.transpose(dd_mfcc_feat)
183 # concat above three features
184 concat_mfcc_feat = np.concatenate(
185 (mfcc_feat, d_mfcc_feat, dd_mfcc_feat))
186 return concat_mfcc_feat
187
[end of deep_speech_2/data_utils/featurizer/audio_featurizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deep_speech_2/data_utils/featurizer/audio_featurizer.py b/deep_speech_2/data_utils/featurizer/audio_featurizer.py
--- a/deep_speech_2/data_utils/featurizer/audio_featurizer.py
+++ b/deep_speech_2/data_utils/featurizer/audio_featurizer.py
@@ -57,7 +57,7 @@
def featurize(self,
audio_segment,
allow_downsampling=True,
- allow_upsamplling=True):
+ allow_upsampling=True):
"""Extract audio features from AudioSegment or SpeechSegment.
:param audio_segment: Audio/speech segment to extract features from.
|
{"golden_diff": "diff --git a/deep_speech_2/data_utils/featurizer/audio_featurizer.py b/deep_speech_2/data_utils/featurizer/audio_featurizer.py\n--- a/deep_speech_2/data_utils/featurizer/audio_featurizer.py\n+++ b/deep_speech_2/data_utils/featurizer/audio_featurizer.py\n@@ -57,7 +57,7 @@\n def featurize(self,\n audio_segment,\n allow_downsampling=True,\n- allow_upsamplling=True):\n+ allow_upsampling=True):\n \"\"\"Extract audio features from AudioSegment or SpeechSegment.\n \n :param audio_segment: Audio/speech segment to extract features from.\n", "issue": "Encounter exception when running tools/compute_mean_std.py\n```\r\nTraceback (most recent call last):\r\n File \"tools/compute_mean_std.py\", line 64, in <module>\r\n main()\r\n File \"tools/compute_mean_std.py\", line 59, in main\r\n num_samples=args.num_samples)\r\n File \"/home/disk1/yangyaming/workspace/paddle/ds2/mandarin_unicom/default_conf/tools/../data_utils/normalizer.py\", line 46, in __init__\r\n self._compute_mean_std(manifest_path, featurize_func, num_samples)\r\n File \"/home/disk1/yangyaming/workspace/paddle/ds2/mandarin_unicom/default_conf/tools/../data_utils/normalizer.py\", line 84, in _compute_mean_std\r\n AudioSegment.from_file(instance[\"audio_filepath\"])))\r\n File \"tools/compute_mean_std.py\", line 53, in augment_and_featurize\r\n return audio_featurizer.featurize(audio_segment)\r\n File \"/home/disk1/yangyaming/workspace/paddle/ds2/mandarin_unicom/default_conf/tools/../data_utils/featurizer/audio_featurizer.py\", line 79, in featurize\r\n allow_upsampling)):\r\nNameError: global name 'allow_upsampling' is not defined\r\n```\r\nLook like a typo problem.\n", "before_files": [{"content": "\"\"\"Contains the audio featurizer class.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom data_utils import utils\nfrom data_utils.audio import AudioSegment\nfrom python_speech_features import mfcc\nfrom python_speech_features import delta\n\n\nclass AudioFeaturizer(object):\n \"\"\"Audio featurizer, for extracting features from audio contents of\n AudioSegment or SpeechSegment.\n\n Currently, it supports feature types of linear spectrogram and mfcc.\n\n :param specgram_type: Specgram feature type. 
Options: 'linear'.\n :type specgram_type: str\n :param stride_ms: Striding size (in milliseconds) for generating frames.\n :type stride_ms: float\n :param window_ms: Window size (in milliseconds) for generating frames.\n :type window_ms: float\n :param max_freq: When specgram_type is 'linear', only FFT bins\n corresponding to frequencies between [0, max_freq] are\n returned; when specgram_type is 'mfcc', max_feq is the\n highest band edge of mel filters.\n :types max_freq: None|float\n :param target_sample_rate: Audio are resampled (if upsampling or\n downsampling is allowed) to this before\n extracting spectrogram features.\n :type target_sample_rate: float\n :param use_dB_normalization: Whether to normalize the audio to a certain\n decibels before extracting the features.\n :type use_dB_normalization: bool\n :param target_dB: Target audio decibels for normalization.\n :type target_dB: float\n \"\"\"\n\n def __init__(self,\n specgram_type='linear',\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None,\n target_sample_rate=16000,\n use_dB_normalization=True,\n target_dB=-20):\n self._specgram_type = specgram_type\n self._stride_ms = stride_ms\n self._window_ms = window_ms\n self._max_freq = max_freq\n self._target_sample_rate = target_sample_rate\n self._use_dB_normalization = use_dB_normalization\n self._target_dB = target_dB\n\n def featurize(self,\n audio_segment,\n allow_downsampling=True,\n allow_upsamplling=True):\n \"\"\"Extract audio features from AudioSegment or SpeechSegment.\n\n :param audio_segment: Audio/speech segment to extract features from.\n :type audio_segment: AudioSegment|SpeechSegment\n :param allow_downsampling: Whether to allow audio downsampling before\n featurizing.\n :type allow_downsampling: bool\n :param allow_upsampling: Whether to allow audio upsampling before\n featurizing.\n :type allow_upsampling: bool\n :return: Spectrogram audio feature in 2darray.\n :rtype: ndarray\n :raises ValueError: If audio sample rate is not supported.\n \"\"\"\n # upsampling or downsampling\n if ((audio_segment.sample_rate > self._target_sample_rate and\n allow_downsampling) or\n (audio_segment.sample_rate < self._target_sample_rate and\n allow_upsampling)):\n audio_segment.resample(self._target_sample_rate)\n if audio_segment.sample_rate != self._target_sample_rate:\n raise ValueError(\"Audio sample rate is not supported. \"\n \"Turn allow_downsampling or allow up_sampling on.\")\n # decibel normalization\n if self._use_dB_normalization:\n audio_segment.normalize(target_db=self._target_dB)\n # extract spectrogram\n return self._compute_specgram(audio_segment.samples,\n audio_segment.sample_rate)\n\n def _compute_specgram(self, samples, sample_rate):\n \"\"\"Extract various audio features.\"\"\"\n if self._specgram_type == 'linear':\n return self._compute_linear_specgram(\n samples, sample_rate, self._stride_ms, self._window_ms,\n self._max_freq)\n elif self._specgram_type == 'mfcc':\n return self._compute_mfcc(samples, sample_rate, self._stride_ms,\n self._window_ms, self._max_freq)\n else:\n raise ValueError(\"Unknown specgram_type %s. 
\"\n \"Supported values: linear.\" % self._specgram_type)\n\n def _compute_linear_specgram(self,\n samples,\n sample_rate,\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None,\n eps=1e-14):\n \"\"\"Compute the linear spectrogram from FFT energy.\"\"\"\n if max_freq is None:\n max_freq = sample_rate / 2\n if max_freq > sample_rate / 2:\n raise ValueError(\"max_freq must be greater than half of \"\n \"sample rate.\")\n if stride_ms > window_ms:\n raise ValueError(\"Stride size must not be greater than \"\n \"window size.\")\n stride_size = int(0.001 * sample_rate * stride_ms)\n window_size = int(0.001 * sample_rate * window_ms)\n specgram, freqs = self._specgram_real(\n samples,\n window_size=window_size,\n stride_size=stride_size,\n sample_rate=sample_rate)\n ind = np.where(freqs <= max_freq)[0][-1] + 1\n return np.log(specgram[:ind, :] + eps)\n\n def _specgram_real(self, samples, window_size, stride_size, sample_rate):\n \"\"\"Compute the spectrogram for samples from a real signal.\"\"\"\n # extract strided windows\n truncate_size = (len(samples) - window_size) % stride_size\n samples = samples[:len(samples) - truncate_size]\n nshape = (window_size, (len(samples) - window_size) // stride_size + 1)\n nstrides = (samples.strides[0], samples.strides[0] * stride_size)\n windows = np.lib.stride_tricks.as_strided(\n samples, shape=nshape, strides=nstrides)\n assert np.all(\n windows[:, 1] == samples[stride_size:(stride_size + window_size)])\n # window weighting, squared Fast Fourier Transform (fft), scaling\n weighting = np.hanning(window_size)[:, None]\n fft = np.fft.rfft(windows * weighting, axis=0)\n fft = np.absolute(fft)**2\n scale = np.sum(weighting**2) * sample_rate\n fft[1:-1, :] *= (2.0 / scale)\n fft[(0, -1), :] /= scale\n # prepare fft frequency list\n freqs = float(sample_rate) / window_size * np.arange(fft.shape[0])\n return fft, freqs\n\n def _compute_mfcc(self,\n samples,\n sample_rate,\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None):\n \"\"\"Compute mfcc from samples.\"\"\"\n if max_freq is None:\n max_freq = sample_rate / 2\n if max_freq > sample_rate / 2:\n raise ValueError(\"max_freq must not be greater than half of \"\n \"sample rate.\")\n if stride_ms > window_ms:\n raise ValueError(\"Stride size must not be greater than \"\n \"window size.\")\n # compute the 13 cepstral coefficients, and the first one is replaced\n # by log(frame energy)\n mfcc_feat = mfcc(\n signal=samples,\n samplerate=sample_rate,\n winlen=0.001 * window_ms,\n winstep=0.001 * stride_ms,\n highfreq=max_freq)\n # Deltas\n d_mfcc_feat = delta(mfcc_feat, 2)\n # Deltas-Deltas\n dd_mfcc_feat = delta(d_mfcc_feat, 2)\n # transpose\n mfcc_feat = np.transpose(mfcc_feat)\n d_mfcc_feat = np.transpose(d_mfcc_feat)\n dd_mfcc_feat = np.transpose(dd_mfcc_feat)\n # concat above three features\n concat_mfcc_feat = np.concatenate(\n (mfcc_feat, d_mfcc_feat, dd_mfcc_feat))\n return concat_mfcc_feat\n", "path": "deep_speech_2/data_utils/featurizer/audio_featurizer.py"}]}
| 3,091 | 153 |
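The crash in the entry above reduces to a misspelled keyword parameter: the signature declares `allow_upsamplling=True`, so the later reference to `allow_upsampling` inside the method body is an unbound name. A self-contained reproduction of the same failure mode (names shortened for illustration):

```python
def featurize(segment, allow_upsamplling=True):   # note the misspelled parameter name
    if allow_upsampling:                           # NameError: name 'allow_upsampling' is not defined
        return segment

featurize("audio")   # raises NameError at call time, mirroring the traceback in the issue
```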
gh_patches_debug_15821
|
rasdani/github-patches
|
git_diff
|
crytic__slither-387
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ABIEncoderV2 flagged as solidity version
The following Solidity code is flagged as using different Solidity versions:
```sol
pragma solidity 0.5.12;
pragma experimental ABIEncoderV2;
```
Outputs:
```
INFO:Detectors:
Different versions of Solidity is used in :
- Version used: ['0.5.12', 'ABIEncoderV2']
- 0.5.12 (Contract.sol#1)
- ABIEncoderV2 (Contract.sol#2)
```
</issue>
<code>
[start of slither/core/declarations/pragma_directive.py]
1 from slither.core.source_mapping.source_mapping import SourceMapping
2
3 class Pragma(SourceMapping):
4
5 def __init__(self, directive):
6 super(Pragma, self).__init__()
7 self._directive = directive
8
9 @property
10 def directive(self):
11 '''
12 list(str)
13 '''
14 return self._directive
15
16 @property
17 def version(self):
18 return ''.join(self.directive[1:])
19
20 @property
21 def name(self):
22 return self.version
23
24 def __str__(self):
25 return 'pragma '+''.join(self.directive)
26
[end of slither/core/declarations/pragma_directive.py]
[start of slither/detectors/attributes/constant_pragma.py]
1 """
2 Check that the same pragma is used in all the files
3 """
4
5 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
6 from slither.formatters.attributes.constant_pragma import format
7
8
9 class ConstantPragma(AbstractDetector):
10 """
11 Check that the same pragma is used in all the files
12 """
13
14 ARGUMENT = 'pragma'
15 HELP = 'If different pragma directives are used'
16 IMPACT = DetectorClassification.INFORMATIONAL
17 CONFIDENCE = DetectorClassification.HIGH
18
19 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#different-pragma-directives-are-used'
20
21
22 WIKI_TITLE = 'Different pragma directives are used'
23 WIKI_DESCRIPTION = 'Detect if different Solidity versions are used.'
24 WIKI_RECOMMENDATION = 'Use one Solidity version.'
25
26 def _detect(self):
27 results = []
28 pragma = self.slither.pragma_directives
29 versions = [p.version for p in pragma]
30 versions = sorted(list(set(versions)))
31
32 if len(versions) > 1:
33 info = [f"Different versions of Solidity is used in {self.filename}:\n"]
34 info += [f"\t- Version used: {[str(v) for v in versions]}\n"]
35
36 for p in pragma:
37 info += ["\t- ", p, "\n"]
38
39 res = self.generate_result(info)
40
41 results.append(res)
42
43 return results
44
45 @staticmethod
46 def _format(slither, result):
47 format(slither, result)
48
[end of slither/detectors/attributes/constant_pragma.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/slither/core/declarations/pragma_directive.py b/slither/core/declarations/pragma_directive.py
--- a/slither/core/declarations/pragma_directive.py
+++ b/slither/core/declarations/pragma_directive.py
@@ -21,5 +21,11 @@
def name(self):
return self.version
+ @property
+ def is_solidity_version(self):
+ if len(self._directive) > 0:
+ return self._directive[0].lower() == 'solidity'
+ return False
+
def __str__(self):
return 'pragma '+''.join(self.directive)
diff --git a/slither/detectors/attributes/constant_pragma.py b/slither/detectors/attributes/constant_pragma.py
--- a/slither/detectors/attributes/constant_pragma.py
+++ b/slither/detectors/attributes/constant_pragma.py
@@ -26,7 +26,7 @@
def _detect(self):
results = []
pragma = self.slither.pragma_directives
- versions = [p.version for p in pragma]
+ versions = [p.version for p in pragma if p.is_solidity_version]
versions = sorted(list(set(versions)))
if len(versions) > 1:
|
{"golden_diff": "diff --git a/slither/core/declarations/pragma_directive.py b/slither/core/declarations/pragma_directive.py\n--- a/slither/core/declarations/pragma_directive.py\n+++ b/slither/core/declarations/pragma_directive.py\n@@ -21,5 +21,11 @@\n def name(self):\n return self.version\n \n+ @property\n+ def is_solidity_version(self):\n+ if len(self._directive) > 0:\n+ return self._directive[0].lower() == 'solidity'\n+ return False\n+\n def __str__(self):\n return 'pragma '+''.join(self.directive)\ndiff --git a/slither/detectors/attributes/constant_pragma.py b/slither/detectors/attributes/constant_pragma.py\n--- a/slither/detectors/attributes/constant_pragma.py\n+++ b/slither/detectors/attributes/constant_pragma.py\n@@ -26,7 +26,7 @@\n def _detect(self):\n results = []\n pragma = self.slither.pragma_directives\n- versions = [p.version for p in pragma]\n+ versions = [p.version for p in pragma if p.is_solidity_version]\n versions = sorted(list(set(versions)))\n \n if len(versions) > 1:\n", "issue": "ABIEncoderV2 flagged as solidity version\nThe following Solidity code is flagged as being different solidity versions:\r\n\r\n```sol\r\npragma solidity 0.5.12;\r\npragma experimental ABIEncoderV2;\r\n```\r\n\r\nOutputs:\r\n\r\n```\r\nINFO:Detectors:\r\nDifferent versions of Solidity is used in :\r\n\t- Version used: ['0.5.12', 'ABIEncoderV2']\r\n\t- 0.5.12 (Contract.sol#1)\r\n\t- ABIEncoderV2 (Contract.sol#2)\r\n```\n", "before_files": [{"content": "from slither.core.source_mapping.source_mapping import SourceMapping\n\nclass Pragma(SourceMapping):\n\n def __init__(self, directive):\n super(Pragma, self).__init__()\n self._directive = directive\n\n @property\n def directive(self):\n '''\n list(str)\n '''\n return self._directive\n\n @property\n def version(self):\n return ''.join(self.directive[1:])\n\n @property\n def name(self):\n return self.version\n\n def __str__(self):\n return 'pragma '+''.join(self.directive)\n", "path": "slither/core/declarations/pragma_directive.py"}, {"content": "\"\"\"\n Check that the same pragma is used in all the files\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.attributes.constant_pragma import format\n\n\nclass ConstantPragma(AbstractDetector):\n \"\"\"\n Check that the same pragma is used in all the files\n \"\"\"\n\n ARGUMENT = 'pragma'\n HELP = 'If different pragma directives are used'\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#different-pragma-directives-are-used'\n\n\n WIKI_TITLE = 'Different pragma directives are used'\n WIKI_DESCRIPTION = 'Detect if different Solidity versions are used.'\n WIKI_RECOMMENDATION = 'Use one Solidity version.'\n\n def _detect(self):\n results = []\n pragma = self.slither.pragma_directives\n versions = [p.version for p in pragma]\n versions = sorted(list(set(versions)))\n\n if len(versions) > 1:\n info = [f\"Different versions of Solidity is used in {self.filename}:\\n\"]\n info += [f\"\\t- Version used: {[str(v) for v in versions]}\\n\"]\n\n for p in pragma:\n info += [\"\\t- \", p, \"\\n\"]\n\n res = self.generate_result(info)\n\n results.append(res)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n format(slither, result)\n", "path": "slither/detectors/attributes/constant_pragma.py"}]}
| 1,290 | 294 |
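To make the detector behaviour in the entry above concrete: slither stores each pragma as a list of tokens and builds the reported "version" by joining everything after the first token, so `pragma experimental ABIEncoderV2;` contributes the pseudo-version `ABIEncoderV2`. A small sketch of that logic and of the guard the golden diff introduces (the exact token lists are assumptions about how solc splits the directive, shown only for illustration):

```python
# Token lists as the Pragma objects might carry them (illustrative values):
solidity_directive = ["solidity", "0.5.12"]
experimental_directive = ["experimental", "ABIEncoderV2"]

def version(directive):
    return "".join(directive[1:])            # what the detector collects today

def is_solidity_version(directive):
    return len(directive) > 0 and directive[0].lower() == "solidity"   # the added guard

print(version(solidity_directive))           # 0.5.12
print(version(experimental_directive))       # ABIEncoderV2, the spurious "version"
print([d for d in (solidity_directive, experimental_directive) if is_solidity_version(d)])
```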
gh_patches_debug_24379
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-11585
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User invite accept screen leaks 'project names' of projects that the new user does not necessarily have access rights to
<!--
Do you want to ask a question? Are you looking for support? The Sentry message
board is the best place for getting support: https://forum.sentry.io
-->
## Important Details
How are you running Sentry?
* [ ] On-Premise docker [Version xyz]
* [x] Saas (sentry.io)
* [ ] Other [briefly describe your environment]
## Description
We provide Sentry as a service with our products to our customers; as a result we have internal teams, but we would also like to invite external users (customers) to specific projects. We are able to utilise the team construct for this and set up special customer teams with limited access.
However, when we invite customers into our org to become members of this restricted team, the customer receives a generic org invite email and is redirected to a generic org invite screen:
https://sentry.io/accept/xyz/xyz
This invite accept screen provides a list of 'active' projects with their full project names, even though the user does not necessarily have access to these projects.
We would classify this as a security concern and leak of 'restricted' information.
## Steps to Reproduce
1. open org
2. create 2 new teams
- team A
- team B
2. create 2 new projects
- project A, assign access to team A
- project B, assign access to team B
3. invite new user to org (as member level) and assign access to team B only
4. user receives invite by email and opens the accept link
5. on the accept webpage notice that both project A and B are listed to be part of this org
### What you expected to happen
The recommendation is to hide the project names completely in this view, or to restrict the list to the projects the new user has access to based on their team assignment.
</issue>
<code>
[start of src/sentry/web/frontend/accept_organization_invite.py]
1 from __future__ import absolute_import
2
3 from django import forms
4 from django.contrib import messages
5 from django.core.urlresolvers import reverse
6 from django.utils.crypto import constant_time_compare
7 from django.utils.translation import ugettext_lazy as _
8
9 from sentry.models import AuditLogEntryEvent, Authenticator, OrganizationMember, Project
10 from sentry.signals import member_joined
11 from sentry.utils import auth
12 from sentry.web.frontend.base import BaseView
13
14 ERR_INVITE_INVALID = _('The invite link you followed is not valid, or has expired.')
15 PENDING_INVITE = 'pending-invite'
16 COOKIE_MAX_AGE = 60 * 60 * 24 * 7 # 7 days
17
18
19 class AcceptInviteForm(forms.Form):
20 pass
21
22
23 class AcceptOrganizationInviteView(BaseView):
24 auth_required = False
25
26 def get_form(self, request):
27 if request.method == 'POST':
28 return AcceptInviteForm(request.POST)
29 return AcceptInviteForm()
30
31 def redirect_with_err_message(self, request):
32 messages.add_message(
33 request,
34 messages.ERROR,
35 ERR_INVITE_INVALID,
36 )
37
38 return self.redirect(reverse('sentry'))
39
40 def handle(self, request, member_id, token):
41 assert request.method in ('POST', 'GET')
42
43 try:
44 helper = WebInviteHelper(
45 instance=self,
46 request=request,
47 member_id=member_id,
48 token=token
49 )
50 except OrganizationMember.DoesNotExist:
51 return self.redirect_with_err_message(request)
52
53 if not helper.member_pending or not helper.valid_token:
54 return self.redirect_with_err_message(request)
55
56 om = helper.om
57 organization = om.organization
58
59 qs = Project.objects.filter(
60 organization=organization,
61 )
62 project_list = list(qs[:25])
63 project_count = qs.count()
64
65 context = {
66 'org_name': organization.name,
67 'project_list': project_list,
68 'project_count': project_count,
69 'needs_authentication': not helper.user_authenticated,
70 'needs_2fa': helper.needs_2fa,
71 'logout_url': u'{}?next={}'.format(
72 reverse('sentry-logout'),
73 request.path,
74 ),
75 'login_url': u'{}?next={}'.format(
76 reverse('sentry-login'),
77 request.path,
78 ),
79 'register_url': u'{}?next={}'.format(
80 reverse('sentry-register'),
81 request.path,
82 ),
83 }
84
85 if not helper.user_authenticated:
86 # Show login or register form
87 auth.initiate_login(request, next_url=request.get_full_path())
88 request.session['can_register'] = True
89 request.session['invite_email'] = om.email
90
91 return self.respond('sentry/accept-organization-invite.html', context)
92
93 if helper.needs_2fa:
94 # redirect to setup 2fa
95 response = self.respond('sentry/accept-organization-invite.html', context)
96 response.set_cookie(PENDING_INVITE, request.path, max_age=COOKIE_MAX_AGE)
97 return response
98
99 # if they're already a member of the organization its likely they're
100 # using a shared account and either previewing this invite or
101 # are incorrectly expecting this to create a new account for them
102 context['existing_member'] = helper.member_already_exists
103
104 form = self.get_form(request)
105 if form.is_valid():
106 helper.accept_invite()
107
108 request.session.pop('can_register', None)
109 response = self.redirect(reverse('sentry-organization-home', args=[organization.slug]))
110 return helper.remove_invite_cookie(response)
111
112 context['form'] = form
113 return self.respond('sentry/accept-organization-invite.html', context)
114
115
116 class BaseInviteHelper(object):
117 def __init__(self, instance, request, member_id, token, logger=None):
118 self.request = request
119 self.instance = instance
120 self.member_id = member_id
121 self.token = token
122 self.logger = logger
123 self.om = self.get_organization_member()
124
125 def handle_success(self):
126 pass
127
128 def handle_member_already_exists(self):
129 pass
130
131 def get_organization_member(self):
132 return OrganizationMember.objects.select_related('organization').get(pk=self.member_id)
133
134 @property
135 def member_pending(self):
136 return self.om.is_pending
137
138 @property
139 def valid_token(self):
140 if self.om.token_expired:
141 return False
142 return constant_time_compare(self.om.token or self.om.legacy_token, self.token)
143
144 @property
145 def user_authenticated(self):
146 return self.request.user.is_authenticated()
147
148 @property
149 def needs_2fa(self):
150 org_requires_2fa = self.om.organization.flags.require_2fa.is_set
151 user_has_2fa = Authenticator.objects.user_has_2fa(self.request.user.id)
152 return org_requires_2fa and not user_has_2fa
153
154 @property
155 def member_already_exists(self):
156 return OrganizationMember.objects.filter(
157 organization=self.om.organization, user=self.request.user
158 ).exists()
159
160 def accept_invite(self):
161 om = self.om
162
163 if self.member_already_exists:
164 self.handle_member_already_exists()
165 om.delete()
166 else:
167 om.set_user(self.request.user)
168 om.save()
169
170 self.instance.create_audit_entry(
171 self.request,
172 organization=om.organization,
173 target_object=om.id,
174 target_user=self.request.user,
175 event=AuditLogEntryEvent.MEMBER_ACCEPT,
176 data=om.get_audit_log_data(),
177 )
178
179 self.handle_success()
180
181 def remove_invite_cookie(self, response):
182 if PENDING_INVITE in self.request.COOKIES:
183 response.delete_cookie(PENDING_INVITE)
184 return response
185
186
187 class WebInviteHelper(BaseInviteHelper):
188 def handle_success(self):
189 messages.add_message(
190 self.request, messages.SUCCESS,
191 _('You have been added to the %r organization.') %
192 (self.om.organization.name.encode('utf-8'), )
193 )
194
195 member_joined.send_robust(
196 member=self.om, organization=self.om.organization, sender=self.instance)
197
198 def handle_member_already_exists(self):
199 messages.add_message(
200 self.request, messages.SUCCESS,
201 _('You are already a member of the %r organization.') %
202 (self.om.organization.name.encode('utf-8'), )
203 )
204
205
206 class ApiInviteHelper(BaseInviteHelper):
207 def handle_member_already_exists(self):
208 self.logger.info(
209 'Pending org invite not accepted - User already org member',
210 extra={
211 'organization_id': self.om.organization.id,
212 'user_id': self.request.user.id,
213 }
214 )
215
216 def valid_request(self):
217 if (not self.member_pending or
218 not self.valid_token or
219 not self.user_authenticated or
220 self.needs_2fa):
221 return False
222 return True
223
[end of src/sentry/web/frontend/accept_organization_invite.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/sentry/web/frontend/accept_organization_invite.py b/src/sentry/web/frontend/accept_organization_invite.py
--- a/src/sentry/web/frontend/accept_organization_invite.py
+++ b/src/sentry/web/frontend/accept_organization_invite.py
@@ -6,7 +6,7 @@
from django.utils.crypto import constant_time_compare
from django.utils.translation import ugettext_lazy as _
-from sentry.models import AuditLogEntryEvent, Authenticator, OrganizationMember, Project
+from sentry.models import AuditLogEntryEvent, Authenticator, OrganizationMember
from sentry.signals import member_joined
from sentry.utils import auth
from sentry.web.frontend.base import BaseView
@@ -56,16 +56,8 @@
om = helper.om
organization = om.organization
- qs = Project.objects.filter(
- organization=organization,
- )
- project_list = list(qs[:25])
- project_count = qs.count()
-
context = {
'org_name': organization.name,
- 'project_list': project_list,
- 'project_count': project_count,
'needs_authentication': not helper.user_authenticated,
'needs_2fa': helper.needs_2fa,
'logout_url': u'{}?next={}'.format(
|
{"golden_diff": "diff --git a/src/sentry/web/frontend/accept_organization_invite.py b/src/sentry/web/frontend/accept_organization_invite.py\n--- a/src/sentry/web/frontend/accept_organization_invite.py\n+++ b/src/sentry/web/frontend/accept_organization_invite.py\n@@ -6,7 +6,7 @@\n from django.utils.crypto import constant_time_compare\n from django.utils.translation import ugettext_lazy as _\n \n-from sentry.models import AuditLogEntryEvent, Authenticator, OrganizationMember, Project\n+from sentry.models import AuditLogEntryEvent, Authenticator, OrganizationMember\n from sentry.signals import member_joined\n from sentry.utils import auth\n from sentry.web.frontend.base import BaseView\n@@ -56,16 +56,8 @@\n om = helper.om\n organization = om.organization\n \n- qs = Project.objects.filter(\n- organization=organization,\n- )\n- project_list = list(qs[:25])\n- project_count = qs.count()\n-\n context = {\n 'org_name': organization.name,\n- 'project_list': project_list,\n- 'project_count': project_count,\n 'needs_authentication': not helper.user_authenticated,\n 'needs_2fa': helper.needs_2fa,\n 'logout_url': u'{}?next={}'.format(\n", "issue": "User invite accept screen leaks 'project names' of projects that the new user does not necessarily have access rights to\n<!--\r\n\r\nDo you want to ask a question? Are you looking for support? The Sentry message\r\nboard is the best place for getting support: https://forum.sentry.io\r\n-->\r\n\r\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n* [ ] On-Premise docker [Version xyz]\r\n* [x] Saas (sentry.io)\r\n* [ ] Other [briefly describe your environment]\r\n\r\n## Description\r\n\r\nWe are providing sentry as a service with our products to our customer, as a result we have internal teams but would also like to invite external users (customers) to specific projects. We are able to utilise the team construct for this and setup special customer teams with limited access.\r\n\r\nWhen we however invite customers into our org to become member of this restricted team the customer received a generic org invite email and is redirected to a generic org invite screen:\r\nhttps://sentry.io/accept/xyz/xyz\r\n\r\nThis invite accept screen provides a list of 'active' projects with full project name, however the user does not necessary have access to these projects. \r\n\r\nWe would classify this as a security concern and leak of 'restricted' information. \r\n\r\n## Steps to Reproduce\r\n\r\n1. open org\r\n2. create 2 new teams\r\n - team A\r\n - team B\r\n2. create 2 new projects\r\n - project A, assign access to team A\r\n - project B, assign access to team B\r\n3. invite new user to org (as member level) and assign acces to team B only\r\n4. user receives invite by email and opens the accept link\r\n5. 
on the accept webpage notice that both project A and B are listed to be part of this org\r\n\r\n### What you expected to happen\r\n\r\nThe recommendation is to hide the project names completely in this view, or restrict it to the projects the new user has access to based on his team assignment.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom django import forms\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.utils.crypto import constant_time_compare\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom sentry.models import AuditLogEntryEvent, Authenticator, OrganizationMember, Project\nfrom sentry.signals import member_joined\nfrom sentry.utils import auth\nfrom sentry.web.frontend.base import BaseView\n\nERR_INVITE_INVALID = _('The invite link you followed is not valid, or has expired.')\nPENDING_INVITE = 'pending-invite'\nCOOKIE_MAX_AGE = 60 * 60 * 24 * 7 # 7 days\n\n\nclass AcceptInviteForm(forms.Form):\n pass\n\n\nclass AcceptOrganizationInviteView(BaseView):\n auth_required = False\n\n def get_form(self, request):\n if request.method == 'POST':\n return AcceptInviteForm(request.POST)\n return AcceptInviteForm()\n\n def redirect_with_err_message(self, request):\n messages.add_message(\n request,\n messages.ERROR,\n ERR_INVITE_INVALID,\n )\n\n return self.redirect(reverse('sentry'))\n\n def handle(self, request, member_id, token):\n assert request.method in ('POST', 'GET')\n\n try:\n helper = WebInviteHelper(\n instance=self,\n request=request,\n member_id=member_id,\n token=token\n )\n except OrganizationMember.DoesNotExist:\n return self.redirect_with_err_message(request)\n\n if not helper.member_pending or not helper.valid_token:\n return self.redirect_with_err_message(request)\n\n om = helper.om\n organization = om.organization\n\n qs = Project.objects.filter(\n organization=organization,\n )\n project_list = list(qs[:25])\n project_count = qs.count()\n\n context = {\n 'org_name': organization.name,\n 'project_list': project_list,\n 'project_count': project_count,\n 'needs_authentication': not helper.user_authenticated,\n 'needs_2fa': helper.needs_2fa,\n 'logout_url': u'{}?next={}'.format(\n reverse('sentry-logout'),\n request.path,\n ),\n 'login_url': u'{}?next={}'.format(\n reverse('sentry-login'),\n request.path,\n ),\n 'register_url': u'{}?next={}'.format(\n reverse('sentry-register'),\n request.path,\n ),\n }\n\n if not helper.user_authenticated:\n # Show login or register form\n auth.initiate_login(request, next_url=request.get_full_path())\n request.session['can_register'] = True\n request.session['invite_email'] = om.email\n\n return self.respond('sentry/accept-organization-invite.html', context)\n\n if helper.needs_2fa:\n # redirect to setup 2fa\n response = self.respond('sentry/accept-organization-invite.html', context)\n response.set_cookie(PENDING_INVITE, request.path, max_age=COOKIE_MAX_AGE)\n return response\n\n # if they're already a member of the organization its likely they're\n # using a shared account and either previewing this invite or\n # are incorrectly expecting this to create a new account for them\n context['existing_member'] = helper.member_already_exists\n\n form = self.get_form(request)\n if form.is_valid():\n helper.accept_invite()\n\n request.session.pop('can_register', None)\n response = self.redirect(reverse('sentry-organization-home', args=[organization.slug]))\n return helper.remove_invite_cookie(response)\n\n context['form'] = form\n return 
self.respond('sentry/accept-organization-invite.html', context)\n\n\nclass BaseInviteHelper(object):\n def __init__(self, instance, request, member_id, token, logger=None):\n self.request = request\n self.instance = instance\n self.member_id = member_id\n self.token = token\n self.logger = logger\n self.om = self.get_organization_member()\n\n def handle_success(self):\n pass\n\n def handle_member_already_exists(self):\n pass\n\n def get_organization_member(self):\n return OrganizationMember.objects.select_related('organization').get(pk=self.member_id)\n\n @property\n def member_pending(self):\n return self.om.is_pending\n\n @property\n def valid_token(self):\n if self.om.token_expired:\n return False\n return constant_time_compare(self.om.token or self.om.legacy_token, self.token)\n\n @property\n def user_authenticated(self):\n return self.request.user.is_authenticated()\n\n @property\n def needs_2fa(self):\n org_requires_2fa = self.om.organization.flags.require_2fa.is_set\n user_has_2fa = Authenticator.objects.user_has_2fa(self.request.user.id)\n return org_requires_2fa and not user_has_2fa\n\n @property\n def member_already_exists(self):\n return OrganizationMember.objects.filter(\n organization=self.om.organization, user=self.request.user\n ).exists()\n\n def accept_invite(self):\n om = self.om\n\n if self.member_already_exists:\n self.handle_member_already_exists()\n om.delete()\n else:\n om.set_user(self.request.user)\n om.save()\n\n self.instance.create_audit_entry(\n self.request,\n organization=om.organization,\n target_object=om.id,\n target_user=self.request.user,\n event=AuditLogEntryEvent.MEMBER_ACCEPT,\n data=om.get_audit_log_data(),\n )\n\n self.handle_success()\n\n def remove_invite_cookie(self, response):\n if PENDING_INVITE in self.request.COOKIES:\n response.delete_cookie(PENDING_INVITE)\n return response\n\n\nclass WebInviteHelper(BaseInviteHelper):\n def handle_success(self):\n messages.add_message(\n self.request, messages.SUCCESS,\n _('You have been added to the %r organization.') %\n (self.om.organization.name.encode('utf-8'), )\n )\n\n member_joined.send_robust(\n member=self.om, organization=self.om.organization, sender=self.instance)\n\n def handle_member_already_exists(self):\n messages.add_message(\n self.request, messages.SUCCESS,\n _('You are already a member of the %r organization.') %\n (self.om.organization.name.encode('utf-8'), )\n )\n\n\nclass ApiInviteHelper(BaseInviteHelper):\n def handle_member_already_exists(self):\n self.logger.info(\n 'Pending org invite not accepted - User already org member',\n extra={\n 'organization_id': self.om.organization.id,\n 'user_id': self.request.user.id,\n }\n )\n\n def valid_request(self):\n if (not self.member_pending or\n not self.valid_token or\n not self.user_authenticated or\n self.needs_2fa):\n return False\n return True\n", "path": "src/sentry/web/frontend/accept_organization_invite.py"}]}
| 2,986 | 275 |
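The Sentry patch above closes the leak by simply not querying `Project` in the invite view, so the page an invitee sees before authenticating only ever receives the organization name. A rough, standalone illustration of that least-exposure idea, using stand-in objects rather than Sentry's models or actual view code:

```python
from types import SimpleNamespace

def build_invite_context(organization, helper):
    # Expose only what a not-yet-authenticated invitee may see: the org
    # name and the flags the template needs. Project names are left out
    # on purpose so the page cannot reveal them to someone who is not on
    # the relevant teams yet.
    return {
        'org_name': organization.name,
        'needs_authentication': not helper.user_authenticated,
        'needs_2fa': helper.needs_2fa,
    }

# Toy usage with throwaway stand-ins (not Sentry objects):
org = SimpleNamespace(name='acme')
helper = SimpleNamespace(user_authenticated=False, needs_2fa=False)
print(build_invite_context(org, helper))
```

Anything the template never receives cannot be rendered to a user who has not yet proven membership, which is the whole point of the fix.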
gh_patches_debug_6765
|
rasdani/github-patches
|
git_diff
|
conda-forge__conda-smithy-971
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix linter error on missing target_platform
Invoking `conda smithy recipe-lint` on the [conda-forge/go1.4-bootstrap-feedstock/meta.yaml](
https://github.com/conda-forge/go1.4-bootstrap-feedstock/blob/master/recipe/meta.yaml) file yields the following exception:
```
± conda smithy recipe-lint
Traceback (most recent call last):
File "/opt/conda/bin/conda-smithy", line 10, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py", line 279, in main
args.subcommand_func(args)
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py", line 203, in __call__
return_hints=True)
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/lint_recipe.py", line 428, in main
content = render_meta_yaml(''.join(fh))
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/utils.py", line 49, in render_meta_yaml
content = env.from_string(text).render(os=mockos, environ=mockos.environ)
File "/opt/conda/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/opt/conda/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/opt/conda/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "<template>", line 29, in top-level template code
jinja2.exceptions.UndefinedError: 'target_platform' is undefined
```
</issue>
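The traceback comes down to jinja2 evaluating the undefined name `target_platform` while rendering the recipe. Below is a standalone sketch of the failure mode and of the stub-value workaround, using plain jinja2; the template string and the `linux-64` placeholder are only illustrative, not taken from the feedstock or mandated by conda-smithy.

```python
import jinja2

env = jinja2.Environment()  # default Undefined behaviour

# Any operation on the unknown name forces jinja2 to evaluate it.
template = '{{ target_platform + "-build" }}'

try:
    env.from_string(template).render()
except jinja2.exceptions.UndefinedError as exc:
    print('without a stub:', exc)   # 'target_platform' is undefined

# Giving the environment a throwaway value avoids the error entirely.
env.globals['target_platform'] = 'linux-64'   # placeholder, good enough for linting
print('with a stub:', env.from_string(template).render())   # linux-64-build
```

For linting, any throwaway value is enough: the rendered text is only inspected, never used to build a package.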
<code>
[start of conda_smithy/utils.py]
1 import shutil
2 import tempfile
3 import jinja2
4 import six
5 import datetime
6 import time
7 from collections import defaultdict
8 from contextlib import contextmanager
9
10 @contextmanager
11 def tmp_directory():
12 tmp_dir = tempfile.mkdtemp('_recipe')
13 yield tmp_dir
14 shutil.rmtree(tmp_dir)
15
16
17 class NullUndefined(jinja2.Undefined):
18 def __unicode__(self):
19 return self._undefined_name
20
21 def __getattr__(self, name):
22 return '{}.{}'.format(self, name)
23
24 def __getitem__(self, name):
25 return '{}["{}"]'.format(self, name)
26
27
28 class MockOS(dict):
29 def __init__(self):
30 self.environ = defaultdict(lambda: '')
31
32
33 def render_meta_yaml(text):
34 env = jinja2.Environment(undefined=NullUndefined)
35
36 # stub out cb3 jinja2 functions - they are not important for linting
37 # if we don't stub them out, the ruamel.yaml load fails to interpret them
38 # we can't just use conda-build's api.render functionality, because it would apply selectors
39 env.globals.update(dict(compiler=lambda x: x + '_compiler_stub',
40 pin_subpackage=lambda *args, **kwargs: 'subpackage_stub',
41 pin_compatible=lambda *args, **kwargs: 'compatible_pin_stub',
42 cdt=lambda *args, **kwargs: 'cdt_stub',
43 load_file_regex=lambda *args, **kwargs: \
44 defaultdict(lambda : ''),
45 datetime=datetime,
46 time=time,
47 ))
48 mockos = MockOS()
49 content = env.from_string(text).render(os=mockos, environ=mockos.environ)
50 return content
51
[end of conda_smithy/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda_smithy/utils.py b/conda_smithy/utils.py
--- a/conda_smithy/utils.py
+++ b/conda_smithy/utils.py
@@ -44,6 +44,7 @@
defaultdict(lambda : ''),
datetime=datetime,
time=time,
+ target_platform="linux-64",
))
mockos = MockOS()
content = env.from_string(text).render(os=mockos, environ=mockos.environ)
|
{"golden_diff": "diff --git a/conda_smithy/utils.py b/conda_smithy/utils.py\n--- a/conda_smithy/utils.py\n+++ b/conda_smithy/utils.py\n@@ -44,6 +44,7 @@\n defaultdict(lambda : ''),\n datetime=datetime,\n time=time,\n+ target_platform=\"linux-64\",\n ))\n mockos = MockOS()\n content = env.from_string(text).render(os=mockos, environ=mockos.environ)\n", "issue": "Fix linter error on missing target_platform\nInvoking `conda smithy recipe-lint` on the [conda-forge/go1.4-feedstock/meta.yaml](\r\nhttps://github.com/conda-forge/go1.4-bootstrap-feedstock/blob/master/recipe/meta.yaml) file yields the following exception:\r\n\r\n```\r\n\u00b1 conda smithy recipe-lint\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/conda-smithy\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py\", line 279, in main\r\n args.subcommand_func(args)\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py\", line 203, in __call__\r\n return_hints=True)\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/lint_recipe.py\", line 428, in main\r\n content = render_meta_yaml(''.join(fh))\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/utils.py\", line 49, in render_meta_yaml\r\n content = env.from_string(text).render(os=mockos, environ=mockos.environ)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/asyncsupport.py\", line 76, in render\r\n return original_render(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/environment.py\", line 1008, in render\r\n return self.environment.handle_exception(exc_info, True)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/environment.py\", line 780, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/_compat.py\", line 37, in reraise\r\n raise value.with_traceback(tb)\r\n File \"<template>\", line 29, in top-level template code\r\njinja2.exceptions.UndefinedError: 'target_platform' is undefined\r\n```\n", "before_files": [{"content": "import shutil\nimport tempfile\nimport jinja2\nimport six\nimport datetime\nimport time\nfrom collections import defaultdict\nfrom contextlib import contextmanager\n\n@contextmanager\ndef tmp_directory():\n tmp_dir = tempfile.mkdtemp('_recipe')\n yield tmp_dir\n shutil.rmtree(tmp_dir)\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return self._undefined_name\n\n def __getattr__(self, name):\n return '{}.{}'.format(self, name)\n\n def __getitem__(self, name):\n return '{}[\"{}\"]'.format(self, name)\n\n\nclass MockOS(dict):\n def __init__(self):\n self.environ = defaultdict(lambda: '')\n\n\ndef render_meta_yaml(text):\n env = jinja2.Environment(undefined=NullUndefined)\n\n # stub out cb3 jinja2 functions - they are not important for linting\n # if we don't stub them out, the ruamel.yaml load fails to interpret them\n # we can't just use conda-build's api.render functionality, because it would apply selectors\n env.globals.update(dict(compiler=lambda x: x + '_compiler_stub',\n pin_subpackage=lambda *args, **kwargs: 'subpackage_stub',\n pin_compatible=lambda *args, **kwargs: 'compatible_pin_stub',\n cdt=lambda *args, **kwargs: 'cdt_stub',\n load_file_regex=lambda *args, **kwargs: \\\n defaultdict(lambda : ''),\n datetime=datetime,\n time=time,\n ))\n mockos = MockOS()\n content = env.from_string(text).render(os=mockos, environ=mockos.environ)\n return content\n", "path": "conda_smithy/utils.py"}]}
| 1,460 | 104 |
gh_patches_debug_2928
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-3621
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[modin] Importing Modin before Ray can sometimes cause ImportError
### Describe the problem
<!-- Describe the problem clearly here. -->
When running Modin with Ray installed from source, I sometimes run into `ImportError` and `ModuleNotFoundError`, which occur when I am running a modified version of Modin. This forces me to modify Ray's source so that it does not try to use the Modin that is bundled with Ray.
I will work on a solution for this.
### Source code / logs
`import modin.pandas as pd`
```
Traceback (most recent call last):
File "/home/ubuntu/ray/python/ray/function_manager.py", line 165, in fetch_and_register_remote_function
function = pickle.loads(serialized_function)
ModuleNotFoundError: No module named 'modin.data_management.utils'
```
</issue>
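The underlying mechanism is plain `sys.path` precedence: `python/ray/__init__.py` inserts its vendored modin directory at position 0, so it shadows whichever modin the user actually installed or modified. A self-contained sketch of that ordering rule, using throwaway temp directories and a made-up module name rather than Ray's real paths:

```python
import importlib
import os
import sys
import tempfile

def make_copy(tag):
    # Write a throwaway module that records which copy got imported.
    d = tempfile.mkdtemp(prefix=tag + '_')
    with open(os.path.join(d, 'shadow_demo.py'), 'w') as f:
        f.write('origin = %r\n' % tag)
    return d

user_dir = make_copy('user_install')        # stands in for the user's modin
vendored_dir = make_copy('vendored_copy')   # stands in for ray/modin

sys.path.append(user_dir)

# What ray/__init__.py used to do: the vendored copy wins.
sys.path.insert(0, vendored_dir)
import shadow_demo
print(shadow_demo.origin)                   # -> 'vendored_copy'

# Appending instead lets the earlier (user) entry keep precedence.
sys.path.remove(vendored_dir)
sys.path.append(vendored_dir)
del sys.modules['shadow_demo']
print(importlib.import_module('shadow_demo').origin)   # -> 'user_install'
```

Appending the bundled copy instead of inserting it first lets an existing user installation win, which is exactly the behaviour the patch below switches to.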
<code>
[start of python/ray/__init__.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6 import sys
7
8 if "pyarrow" in sys.modules:
9 raise ImportError("Ray must be imported before pyarrow because Ray "
10 "requires a specific version of pyarrow (which is "
11 "packaged along with Ray).")
12
13 # Add the directory containing pyarrow to the Python path so that we find the
14 # pyarrow version packaged with ray and not a pre-existing pyarrow.
15 pyarrow_path = os.path.join(
16 os.path.abspath(os.path.dirname(__file__)), "pyarrow_files")
17 sys.path.insert(0, pyarrow_path)
18
19 # See https://github.com/ray-project/ray/issues/131.
20 helpful_message = """
21
22 If you are using Anaconda, try fixing this problem by running:
23
24 conda install libgcc
25 """
26
27 try:
28 import pyarrow # noqa: F401
29 except ImportError as e:
30 if ((hasattr(e, "msg") and isinstance(e.msg, str)
31 and ("libstdc++" in e.msg or "CXX" in e.msg))):
32 # This code path should be taken with Python 3.
33 e.msg += helpful_message
34 elif (hasattr(e, "message") and isinstance(e.message, str)
35 and ("libstdc++" in e.message or "CXX" in e.message)):
36 # This code path should be taken with Python 2.
37 condition = (hasattr(e, "args") and isinstance(e.args, tuple)
38 and len(e.args) == 1 and isinstance(e.args[0], str))
39 if condition:
40 e.args = (e.args[0] + helpful_message, )
41 else:
42 if not hasattr(e, "args"):
43 e.args = ()
44 elif not isinstance(e.args, tuple):
45 e.args = (e.args, )
46 e.args += (helpful_message, )
47 raise
48
49 modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), "modin")
50 sys.path.insert(0, modin_path)
51
52 from ray.raylet import ObjectID, _config # noqa: E402
53 from ray.profiling import profile # noqa: E402
54 from ray.worker import (error_info, init, connect, disconnect, get, put, wait,
55 remote, get_gpu_ids, get_resource_ids, get_webui_url,
56 register_custom_serializer, shutdown,
57 is_initialized) # noqa: E402
58 from ray.worker import (SCRIPT_MODE, WORKER_MODE, LOCAL_MODE,
59 PYTHON_MODE) # noqa: E402
60 from ray.worker import global_state # noqa: E402
61 import ray.internal # noqa: E402
62 # We import ray.actor because some code is run in actor.py which initializes
63 # some functions in the worker.
64 import ray.actor # noqa: F401
65 from ray.actor import method # noqa: E402
66
67 # Ray version string.
68 __version__ = "0.6.0"
69
70 __all__ = [
71 "error_info", "init", "connect", "disconnect", "get", "put", "wait",
72 "remote", "profile", "actor", "method", "get_gpu_ids", "get_resource_ids",
73 "get_webui_url", "register_custom_serializer", "shutdown",
74 "is_initialized", "SCRIPT_MODE", "WORKER_MODE", "LOCAL_MODE",
75 "PYTHON_MODE", "global_state", "ObjectID", "_config", "__version__",
76 "internal"
77 ]
78
79 import ctypes # noqa: E402
80 # Windows only
81 if hasattr(ctypes, "windll"):
82 # Makes sure that all child processes die when we die. Also makes sure that
83 # fatal crashes result in process termination rather than an error dialog
84 # (the latter is annoying since we have a lot of processes). This is done
85 # by associating all child processes with a "job" object that imposes this
86 # behavior.
87 (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, "\0" * 17 + chr(0x8 | 0x4 | 0x20) + "\0" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501
88
[end of python/ray/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/ray/__init__.py b/python/ray/__init__.py
--- a/python/ray/__init__.py
+++ b/python/ray/__init__.py
@@ -47,7 +47,7 @@
raise
modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), "modin")
-sys.path.insert(0, modin_path)
+sys.path.append(modin_path)
from ray.raylet import ObjectID, _config # noqa: E402
from ray.profiling import profile # noqa: E402
|
{"golden_diff": "diff --git a/python/ray/__init__.py b/python/ray/__init__.py\n--- a/python/ray/__init__.py\n+++ b/python/ray/__init__.py\n@@ -47,7 +47,7 @@\n raise\n \n modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"modin\")\n-sys.path.insert(0, modin_path)\n+sys.path.append(modin_path)\n \n from ray.raylet import ObjectID, _config # noqa: E402\n from ray.profiling import profile # noqa: E402\n", "issue": "[modin] Importing Modin before Ray can sometimes cause ImportError\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\nWhen running Modin with Ray installed from source, I am sometimes running into `ImportError` and `ModuleNotFoundError` which is occurring when I am running a modified version of Modin. This forces me to modify Ray's source such that it does not try to use the Modin that is bundled with Ray.\r\n\r\nI will work on a solution for this.\r\n\r\n### Source code / logs\r\n\r\n`import modin.pandas as pd`\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/ray/python/ray/function_manager.py\", line 165, in fetch_and_register_remote_function\r\n function = pickle.loads(serialized_function)\r\nModuleNotFoundError: No module named 'modin.data_management.utils'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nif \"pyarrow\" in sys.modules:\n raise ImportError(\"Ray must be imported before pyarrow because Ray \"\n \"requires a specific version of pyarrow (which is \"\n \"packaged along with Ray).\")\n\n# Add the directory containing pyarrow to the Python path so that we find the\n# pyarrow version packaged with ray and not a pre-existing pyarrow.\npyarrow_path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)), \"pyarrow_files\")\nsys.path.insert(0, pyarrow_path)\n\n# See https://github.com/ray-project/ray/issues/131.\nhelpful_message = \"\"\"\n\nIf you are using Anaconda, try fixing this problem by running:\n\n conda install libgcc\n\"\"\"\n\ntry:\n import pyarrow # noqa: F401\nexcept ImportError as e:\n if ((hasattr(e, \"msg\") and isinstance(e.msg, str)\n and (\"libstdc++\" in e.msg or \"CXX\" in e.msg))):\n # This code path should be taken with Python 3.\n e.msg += helpful_message\n elif (hasattr(e, \"message\") and isinstance(e.message, str)\n and (\"libstdc++\" in e.message or \"CXX\" in e.message)):\n # This code path should be taken with Python 2.\n condition = (hasattr(e, \"args\") and isinstance(e.args, tuple)\n and len(e.args) == 1 and isinstance(e.args[0], str))\n if condition:\n e.args = (e.args[0] + helpful_message, )\n else:\n if not hasattr(e, \"args\"):\n e.args = ()\n elif not isinstance(e.args, tuple):\n e.args = (e.args, )\n e.args += (helpful_message, )\n raise\n\nmodin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"modin\")\nsys.path.insert(0, modin_path)\n\nfrom ray.raylet import ObjectID, _config # noqa: E402\nfrom ray.profiling import profile # noqa: E402\nfrom ray.worker import (error_info, init, connect, disconnect, get, put, wait,\n remote, get_gpu_ids, get_resource_ids, get_webui_url,\n register_custom_serializer, shutdown,\n is_initialized) # noqa: E402\nfrom ray.worker import (SCRIPT_MODE, WORKER_MODE, LOCAL_MODE,\n PYTHON_MODE) # noqa: E402\nfrom ray.worker import global_state # noqa: E402\nimport ray.internal # noqa: E402\n# We import ray.actor because some code is run in actor.py which initializes\n# some functions in the 
worker.\nimport ray.actor # noqa: F401\nfrom ray.actor import method # noqa: E402\n\n# Ray version string.\n__version__ = \"0.6.0\"\n\n__all__ = [\n \"error_info\", \"init\", \"connect\", \"disconnect\", \"get\", \"put\", \"wait\",\n \"remote\", \"profile\", \"actor\", \"method\", \"get_gpu_ids\", \"get_resource_ids\",\n \"get_webui_url\", \"register_custom_serializer\", \"shutdown\",\n \"is_initialized\", \"SCRIPT_MODE\", \"WORKER_MODE\", \"LOCAL_MODE\",\n \"PYTHON_MODE\", \"global_state\", \"ObjectID\", \"_config\", \"__version__\",\n \"internal\"\n]\n\nimport ctypes # noqa: E402\n# Windows only\nif hasattr(ctypes, \"windll\"):\n # Makes sure that all child processes die when we die. Also makes sure that\n # fatal crashes result in process termination rather than an error dialog\n # (the latter is annoying since we have a lot of processes). This is done\n # by associating all child processes with a \"job\" object that imposes this\n # behavior.\n (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, \"\\0\" * 17 + chr(0x8 | 0x4 | 0x20) + \"\\0\" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501\n", "path": "python/ray/__init__.py"}]}
| 1,917 | 131 |
gh_patches_debug_34061
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-1194
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Column API should support creating columns without names
## Problem
The `name` parameter for the column API shouldn't be required.
## Proposed solution
We should auto-generate a name if one is not specified.
## Additional context
Similar to #449. Please follow a similar naming scheme of `Column n`.
</issue>
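Concretely, when a request leaves `name` out (or sends a blank one), the backend should fall back to the next free `Column n` style name on its own. A rough, framework-free sketch of such a naming helper follows; the function name and the 'count the existing columns' starting point are assumptions for illustration, not necessarily how Mathesar ends up implementing it.

```python
COLUMN_NAME_TEMPLATE = 'Column'

def default_column_name(existing_names):
    """Return a 'Column n' name that does not collide with existing columns."""
    n = len(existing_names)
    name = '{} {}'.format(COLUMN_NAME_TEMPLATE, n)
    while name in existing_names:   # step past gaps left by renames or deletions
        n += 1
        name = '{} {}'.format(COLUMN_NAME_TEMPLATE, n)
    return name

print(default_column_name(['id', 'Column 1']))        # -> 'Column 2'
print(default_column_name(['id', 'Column 2', 'x']))   # -> 'Column 3'
```

The collision loop only matters when earlier defaults were renamed or dropped; otherwise counting the existing columns already yields a fresh suffix.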
<code>
[start of mathesar/api/serializers/columns.py]
1 from rest_framework import serializers
2 from rest_framework.exceptions import ValidationError
3 from rest_framework.fields import empty
4 from rest_framework.settings import api_settings
5
6 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
7 from mathesar.api.serializers.shared_serializers import (
8 DisplayOptionsMappingSerializer,
9 DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,
10 )
11 from mathesar.models import Column
12
13
14 class InputValueField(serializers.CharField):
15 """
16 Takes in an arbitrary value. Emulates the record creation endpoint,
17 which takes in arbitrary values (un-validated and un-processed request.data).
18 This field replicates that behavior in a serializer.
19 """
20
21 def to_internal_value(self, data):
22 return data
23
24 def to_representation(self, value):
25 return value
26
27
28 class TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):
29 length = serializers.IntegerField(required=False)
30 precision = serializers.IntegerField(required=False)
31 scale = serializers.IntegerField(required=False)
32 fields = serializers.CharField(required=False)
33
34 def run_validation(self, data=empty):
35 # Ensure that there are no unknown type options passed in.
36 if data is not empty:
37 unknown = set(data) - set(self.fields)
38 if unknown:
39 errors = ['Unknown field: {}'.format(field) for field in unknown]
40 raise serializers.ValidationError({
41 api_settings.NON_FIELD_ERRORS_KEY: errors,
42 })
43
44 return super(TypeOptionSerializer, self).run_validation(data)
45
46
47 class SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
48 class Meta:
49 model = Column
50 fields = ('id',
51 'name',
52 'type',
53 'type_options',
54 'display_options'
55 )
56 name = serializers.CharField()
57 type = serializers.CharField(source='plain_type')
58 type_options = TypeOptionSerializer(required=False, allow_null=True)
59 display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)
60
61 def to_representation(self, instance):
62 if isinstance(instance, dict):
63 instance_type = instance.get('type')
64 else:
65 instance_type = instance.plain_type
66 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = str(instance_type)
67 return super().to_representation(instance)
68
69 def to_internal_value(self, data):
70 if self.partial and 'type' not in data:
71 instance_type = getattr(self.instance, 'plain_type', None)
72 if instance_type is not None:
73 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = str(instance_type)
74 else:
75 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = data.get('type', None)
76 return super().to_internal_value(data)
77
78
79 class ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):
80 value = InputValueField()
81 is_dynamic = serializers.BooleanField(read_only=True)
82
83
84 class ColumnSerializer(SimpleColumnSerializer):
85 class Meta(SimpleColumnSerializer.Meta):
86 fields = SimpleColumnSerializer.Meta.fields + (
87 'nullable',
88 'primary_key',
89 'source_column',
90 'copy_source_data',
91 'copy_source_constraints',
92 'index',
93 'valid_target_types',
94 'default'
95 )
96 model_fields = ('display_options',)
97
98 name = serializers.CharField(required=False)
99
100 # From scratch fields
101 type = serializers.CharField(source='plain_type', required=False)
102 nullable = serializers.BooleanField(default=True)
103 primary_key = serializers.BooleanField(default=False)
104 default = ColumnDefaultSerializer(
105 source='column_default_dict', required=False, allow_null=True, default=None
106 )
107
108 # From duplication fields
109 source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)
110 copy_source_data = serializers.BooleanField(default=True, write_only=True)
111 copy_source_constraints = serializers.BooleanField(default=True, write_only=True)
112
113 # Read only fields
114 index = serializers.IntegerField(source='column_index', read_only=True)
115 valid_target_types = serializers.ListField(read_only=True)
116
117 def validate(self, data):
118 if not self.partial:
119 from_scratch_required_fields = ['name', 'type']
120 from_scratch_specific_fields = ['type', 'nullable', 'primary_key']
121 from_dupe_required_fields = ['source_column']
122 from_dupe_specific_fields = ['source_column', 'copy_source_data',
123 'copy_source_constraints']
124
125 # Note that we run validation on self.initial_data, as `data` has defaults
126 # filled in for fields that weren't specified by the request
127 from_scratch_required_all = all([
128 f in self.initial_data for f in from_scratch_required_fields
129 ])
130 from_scratch_specific_in = [
131 f for f in from_scratch_specific_fields if f in self.initial_data
132 ]
133 from_dupe_required_all = all([
134 f in self.initial_data for f in from_dupe_required_fields
135 ])
136 from_dupe_specific_in = [
137 f for f in from_dupe_specific_fields if f in self.initial_data
138 ]
139
140 if len(from_dupe_specific_in) and len(from_scratch_specific_in):
141 raise ValidationError(
142 f'{from_scratch_specific_in} cannot be passed in if '
143 f'{from_dupe_specific_in} has also been passed in.'
144 )
145 elif not from_dupe_required_all and not from_scratch_required_all:
146 # We default to from scratch required fields if no fields are passed
147 if len(from_dupe_specific_in) and not len(from_scratch_specific_in):
148 required_fields = from_dupe_required_fields
149 else:
150 required_fields = from_scratch_required_fields
151 raise ValidationError({
152 f: ['This field is required.']
153 for f in required_fields
154 if f not in self.initial_data
155 })
156 return data
157
158 @property
159 def validated_model_fields(self):
160 return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}
161
[end of mathesar/api/serializers/columns.py]
[start of db/columns/operations/create.py]
1 from alembic.migration import MigrationContext
2 from alembic.operations import Operations
3 from sqlalchemy.ext import compiler
4 from sqlalchemy.exc import DataError
5 from sqlalchemy.schema import DDLElement
6 from psycopg2.errors import InvalidTextRepresentation, InvalidParameterValue
7
8 from db.columns.base import MathesarColumn
9 from db.columns.defaults import DEFAULT, NAME, NULLABLE, TYPE
10 from db.columns.exceptions import InvalidDefaultError, InvalidTypeError, InvalidTypeOptionError
11 from db.columns.operations.alter import set_column_default, change_column_nullable
12 from db.columns.operations.select import (
13 get_column_attnum_from_name, get_column_default, get_column_name_from_attnum,
14 )
15 from db.columns.utils import get_mathesar_column_with_engine
16 from db.constraints.operations.create import copy_constraint
17 from db.constraints.operations.select import get_column_constraints
18 from db.constraints import utils as constraint_utils
19 from db.tables.operations.select import reflect_table_from_oid
20 from db.types.operations.cast import get_supported_alter_column_types
21
22
23 def create_column(engine, table_oid, column_data):
24 column_type = column_data.get(TYPE, column_data.get("type"))
25 column_type_options = column_data.get("type_options", {})
26 column_nullable = column_data.get(NULLABLE, True)
27 default_value = column_data.get(DEFAULT, {}).get('value')
28 prepared_default_value = str(default_value) if default_value is not None else None
29 supported_types = get_supported_alter_column_types(
30 engine, friendly_names=False,
31 )
32 sa_type = supported_types.get(column_type)
33 if sa_type is None:
34 # Requested type not supported. falling back to VARCHAR
35 sa_type = supported_types["VARCHAR"]
36 column_type_options = {}
37 table = reflect_table_from_oid(table_oid, engine)
38
39 try:
40 column = MathesarColumn(
41 column_data[NAME], sa_type(**column_type_options), nullable=column_nullable,
42 server_default=prepared_default_value,
43 )
44 except DataError as e:
45 if type(e.orig) == InvalidTextRepresentation:
46 raise InvalidTypeError
47 else:
48 raise e
49
50 table = reflect_table_from_oid(table_oid, engine)
51 try:
52 with engine.begin() as conn:
53 ctx = MigrationContext.configure(conn)
54 op = Operations(ctx)
55 op.add_column(table.name, column, schema=table.schema)
56 except DataError as e:
57 if type(e.orig) == InvalidTextRepresentation:
58 raise InvalidDefaultError
59 elif type(e.orig) == InvalidParameterValue:
60 raise InvalidTypeOptionError
61 else:
62 raise e
63
64 return get_mathesar_column_with_engine(
65 reflect_table_from_oid(table_oid, engine).columns[column_data[NAME]],
66 engine
67 )
68
69
70 def _gen_col_name(table, column_name):
71 num = 1
72 new_column_name = f"{column_name}_{num}"
73 while new_column_name in table.c:
74 num += 1
75 new_column_name = f"{column_name}_{num}"
76 return new_column_name
77
78
79 class CopyColumn(DDLElement):
80 def __init__(self, schema, table, to_column, from_column):
81 self.schema = schema
82 self.table = table
83 self.to_column = to_column
84 self.from_column = from_column
85
86
87 @compiler.compiles(CopyColumn, "postgresql")
88 def compile_copy_column(element, compiler, **_):
89 return 'UPDATE "%s"."%s" SET "%s" = "%s"' % (
90 element.schema,
91 element.table,
92 element.to_column,
93 element.from_column
94 )
95
96
97 def _duplicate_column_data(table_oid, from_column_attnum, to_column_attnum, engine):
98 table = reflect_table_from_oid(table_oid, engine)
99 from_column_name = get_column_name_from_attnum(table_oid, from_column_attnum, engine)
100 to_column_name = get_column_name_from_attnum(table_oid, to_column_attnum, engine)
101 copy = CopyColumn(
102 table.schema,
103 table.name,
104 to_column_name,
105 from_column_name,
106 )
107 with engine.begin() as conn:
108 conn.execute(copy)
109 from_default = get_column_default(table_oid, from_column_attnum, engine)
110 if from_default is not None:
111 with engine.begin() as conn:
112 set_column_default(table_oid, to_column_attnum, engine, conn, from_default)
113
114
115 def _duplicate_column_constraints(table_oid, from_column_attnum, to_column_attnum, engine, copy_nullable=True):
116 table = reflect_table_from_oid(table_oid, engine)
117 from_column_name = get_column_name_from_attnum(table_oid, from_column_attnum, engine)
118 if copy_nullable:
119 with engine.begin() as conn:
120 change_column_nullable(table_oid, to_column_attnum, engine, conn, table.c[from_column_name].nullable)
121 constraints = get_column_constraints(from_column_attnum, table_oid, engine)
122 for constraint in constraints:
123 constraint_type = constraint_utils.get_constraint_type_from_char(constraint.contype)
124 if constraint_type != constraint_utils.ConstraintType.UNIQUE.value:
125 # Don't allow duplication of primary keys
126 continue
127 copy_constraint(
128 table_oid, engine, constraint, from_column_attnum, to_column_attnum
129 )
130
131
132 def duplicate_column(table_oid, copy_from_attnum, engine, new_column_name=None, copy_data=True, copy_constraints=True):
133 table = reflect_table_from_oid(table_oid, engine)
134 copy_from_name = get_column_name_from_attnum(table_oid, copy_from_attnum, engine)
135 from_column = table.c[copy_from_name]
136 if new_column_name is None:
137 new_column_name = _gen_col_name(table, from_column.name)
138
139 column_data = {
140 NAME: new_column_name,
141 "type": from_column.type.compile(dialect=engine.dialect),
142 NULLABLE: True,
143 }
144 new_column = create_column(engine, table_oid, column_data)
145 new_column_attnum = get_column_attnum_from_name(table_oid, new_column.name, engine)
146 if copy_data:
147 _duplicate_column_data(
148 table_oid,
149 copy_from_attnum,
150 new_column_attnum,
151 engine
152 )
153
154 if copy_constraints:
155 _duplicate_column_constraints(
156 table_oid,
157 copy_from_attnum,
158 new_column_attnum,
159 engine,
160 copy_nullable=copy_data
161 )
162
163 table = reflect_table_from_oid(table_oid, engine)
164 column_name = get_column_name_from_attnum(table_oid, new_column_attnum, engine)
165 return get_mathesar_column_with_engine(table.c[column_name], engine)
166
[end of db/columns/operations/create.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/columns/operations/create.py b/db/columns/operations/create.py
--- a/db/columns/operations/create.py
+++ b/db/columns/operations/create.py
@@ -19,8 +19,14 @@
from db.tables.operations.select import reflect_table_from_oid
from db.types.operations.cast import get_supported_alter_column_types
+COLUMN_NAME_TEMPLATE = 'Column'
+
def create_column(engine, table_oid, column_data):
+ table = reflect_table_from_oid(table_oid, engine)
+ column_name = column_data.get(NAME, '').strip()
+ if column_name == '':
+ column_data[NAME] = gen_col_name(table)
column_type = column_data.get(TYPE, column_data.get("type"))
column_type_options = column_data.get("type_options", {})
column_nullable = column_data.get(NULLABLE, True)
@@ -67,6 +73,13 @@
)
+def gen_col_name(table):
+ base_name = COLUMN_NAME_TEMPLATE
+ col_num = len(table.c)
+ name = f'{base_name} {col_num}'
+ return name
+
+
def _gen_col_name(table, column_name):
num = 1
new_column_name = f"{column_name}_{num}"
diff --git a/mathesar/api/serializers/columns.py b/mathesar/api/serializers/columns.py
--- a/mathesar/api/serializers/columns.py
+++ b/mathesar/api/serializers/columns.py
@@ -95,7 +95,7 @@
)
model_fields = ('display_options',)
- name = serializers.CharField(required=False)
+ name = serializers.CharField(required=False, allow_blank=True)
# From scratch fields
type = serializers.CharField(source='plain_type', required=False)
@@ -116,7 +116,7 @@
def validate(self, data):
if not self.partial:
- from_scratch_required_fields = ['name', 'type']
+ from_scratch_required_fields = ['type']
from_scratch_specific_fields = ['type', 'nullable', 'primary_key']
from_dupe_required_fields = ['source_column']
from_dupe_specific_fields = ['source_column', 'copy_source_data',
|
{"golden_diff": "diff --git a/db/columns/operations/create.py b/db/columns/operations/create.py\n--- a/db/columns/operations/create.py\n+++ b/db/columns/operations/create.py\n@@ -19,8 +19,14 @@\n from db.tables.operations.select import reflect_table_from_oid\n from db.types.operations.cast import get_supported_alter_column_types\n \n+COLUMN_NAME_TEMPLATE = 'Column'\n+\n \n def create_column(engine, table_oid, column_data):\n+ table = reflect_table_from_oid(table_oid, engine)\n+ column_name = column_data.get(NAME, '').strip()\n+ if column_name == '':\n+ column_data[NAME] = gen_col_name(table)\n column_type = column_data.get(TYPE, column_data.get(\"type\"))\n column_type_options = column_data.get(\"type_options\", {})\n column_nullable = column_data.get(NULLABLE, True)\n@@ -67,6 +73,13 @@\n )\n \n \n+def gen_col_name(table):\n+ base_name = COLUMN_NAME_TEMPLATE\n+ col_num = len(table.c)\n+ name = f'{base_name} {col_num}'\n+ return name\n+\n+\n def _gen_col_name(table, column_name):\n num = 1\n new_column_name = f\"{column_name}_{num}\"\ndiff --git a/mathesar/api/serializers/columns.py b/mathesar/api/serializers/columns.py\n--- a/mathesar/api/serializers/columns.py\n+++ b/mathesar/api/serializers/columns.py\n@@ -95,7 +95,7 @@\n )\n model_fields = ('display_options',)\n \n- name = serializers.CharField(required=False)\n+ name = serializers.CharField(required=False, allow_blank=True)\n \n # From scratch fields\n type = serializers.CharField(source='plain_type', required=False)\n@@ -116,7 +116,7 @@\n \n def validate(self, data):\n if not self.partial:\n- from_scratch_required_fields = ['name', 'type']\n+ from_scratch_required_fields = ['type']\n from_scratch_specific_fields = ['type', 'nullable', 'primary_key']\n from_dupe_required_fields = ['source_column']\n from_dupe_specific_fields = ['source_column', 'copy_source_data',\n", "issue": " Column API should support creating columns without names \n## Problem\r\nThe `name` parameter for the column API shouldn't be required.\r\n\r\n## Proposed solution\r\nWe should auto-generate a name if it was not specified.\r\n\r\n## Additional context\r\nSimilar to #449. Please follow a similar naming scheme of `Column n`.\r\n\n", "before_files": [{"content": "from rest_framework import serializers\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import empty\nfrom rest_framework.settings import api_settings\n\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.serializers.shared_serializers import (\n DisplayOptionsMappingSerializer,\n DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,\n)\nfrom mathesar.models import Column\n\n\nclass InputValueField(serializers.CharField):\n \"\"\"\n Takes in an arbitrary value. 
Emulates the record creation endpoint,\n which takes in arbitrary values (un-validated and un-processed request.data).\n This field replicates that behavior in a serializer.\n \"\"\"\n\n def to_internal_value(self, data):\n return data\n\n def to_representation(self, value):\n return value\n\n\nclass TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n length = serializers.IntegerField(required=False)\n precision = serializers.IntegerField(required=False)\n scale = serializers.IntegerField(required=False)\n fields = serializers.CharField(required=False)\n\n def run_validation(self, data=empty):\n # Ensure that there are no unknown type options passed in.\n if data is not empty:\n unknown = set(data) - set(self.fields)\n if unknown:\n errors = ['Unknown field: {}'.format(field) for field in unknown]\n raise serializers.ValidationError({\n api_settings.NON_FIELD_ERRORS_KEY: errors,\n })\n\n return super(TypeOptionSerializer, self).run_validation(data)\n\n\nclass SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = Column\n fields = ('id',\n 'name',\n 'type',\n 'type_options',\n 'display_options'\n )\n name = serializers.CharField()\n type = serializers.CharField(source='plain_type')\n type_options = TypeOptionSerializer(required=False, allow_null=True)\n display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)\n\n def to_representation(self, instance):\n if isinstance(instance, dict):\n instance_type = instance.get('type')\n else:\n instance_type = instance.plain_type\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = str(instance_type)\n return super().to_representation(instance)\n\n def to_internal_value(self, data):\n if self.partial and 'type' not in data:\n instance_type = getattr(self.instance, 'plain_type', None)\n if instance_type is not None:\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = str(instance_type)\n else:\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = data.get('type', None)\n return super().to_internal_value(data)\n\n\nclass ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n value = InputValueField()\n is_dynamic = serializers.BooleanField(read_only=True)\n\n\nclass ColumnSerializer(SimpleColumnSerializer):\n class Meta(SimpleColumnSerializer.Meta):\n fields = SimpleColumnSerializer.Meta.fields + (\n 'nullable',\n 'primary_key',\n 'source_column',\n 'copy_source_data',\n 'copy_source_constraints',\n 'index',\n 'valid_target_types',\n 'default'\n )\n model_fields = ('display_options',)\n\n name = serializers.CharField(required=False)\n\n # From scratch fields\n type = serializers.CharField(source='plain_type', required=False)\n nullable = serializers.BooleanField(default=True)\n primary_key = serializers.BooleanField(default=False)\n default = ColumnDefaultSerializer(\n source='column_default_dict', required=False, allow_null=True, default=None\n )\n\n # From duplication fields\n source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)\n copy_source_data = serializers.BooleanField(default=True, write_only=True)\n copy_source_constraints = serializers.BooleanField(default=True, write_only=True)\n\n # Read only fields\n index = serializers.IntegerField(source='column_index', read_only=True)\n valid_target_types = serializers.ListField(read_only=True)\n\n def validate(self, data):\n if not self.partial:\n from_scratch_required_fields = ['name', 'type']\n 
from_scratch_specific_fields = ['type', 'nullable', 'primary_key']\n from_dupe_required_fields = ['source_column']\n from_dupe_specific_fields = ['source_column', 'copy_source_data',\n 'copy_source_constraints']\n\n # Note that we run validation on self.initial_data, as `data` has defaults\n # filled in for fields that weren't specified by the request\n from_scratch_required_all = all([\n f in self.initial_data for f in from_scratch_required_fields\n ])\n from_scratch_specific_in = [\n f for f in from_scratch_specific_fields if f in self.initial_data\n ]\n from_dupe_required_all = all([\n f in self.initial_data for f in from_dupe_required_fields\n ])\n from_dupe_specific_in = [\n f for f in from_dupe_specific_fields if f in self.initial_data\n ]\n\n if len(from_dupe_specific_in) and len(from_scratch_specific_in):\n raise ValidationError(\n f'{from_scratch_specific_in} cannot be passed in if '\n f'{from_dupe_specific_in} has also been passed in.'\n )\n elif not from_dupe_required_all and not from_scratch_required_all:\n # We default to from scratch required fields if no fields are passed\n if len(from_dupe_specific_in) and not len(from_scratch_specific_in):\n required_fields = from_dupe_required_fields\n else:\n required_fields = from_scratch_required_fields\n raise ValidationError({\n f: ['This field is required.']\n for f in required_fields\n if f not in self.initial_data\n })\n return data\n\n @property\n def validated_model_fields(self):\n return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}\n", "path": "mathesar/api/serializers/columns.py"}, {"content": "from alembic.migration import MigrationContext\nfrom alembic.operations import Operations\nfrom sqlalchemy.ext import compiler\nfrom sqlalchemy.exc import DataError\nfrom sqlalchemy.schema import DDLElement\nfrom psycopg2.errors import InvalidTextRepresentation, InvalidParameterValue\n\nfrom db.columns.base import MathesarColumn\nfrom db.columns.defaults import DEFAULT, NAME, NULLABLE, TYPE\nfrom db.columns.exceptions import InvalidDefaultError, InvalidTypeError, InvalidTypeOptionError\nfrom db.columns.operations.alter import set_column_default, change_column_nullable\nfrom db.columns.operations.select import (\n get_column_attnum_from_name, get_column_default, get_column_name_from_attnum,\n)\nfrom db.columns.utils import get_mathesar_column_with_engine\nfrom db.constraints.operations.create import copy_constraint\nfrom db.constraints.operations.select import get_column_constraints\nfrom db.constraints import utils as constraint_utils\nfrom db.tables.operations.select import reflect_table_from_oid\nfrom db.types.operations.cast import get_supported_alter_column_types\n\n\ndef create_column(engine, table_oid, column_data):\n column_type = column_data.get(TYPE, column_data.get(\"type\"))\n column_type_options = column_data.get(\"type_options\", {})\n column_nullable = column_data.get(NULLABLE, True)\n default_value = column_data.get(DEFAULT, {}).get('value')\n prepared_default_value = str(default_value) if default_value is not None else None\n supported_types = get_supported_alter_column_types(\n engine, friendly_names=False,\n )\n sa_type = supported_types.get(column_type)\n if sa_type is None:\n # Requested type not supported. 
falling back to VARCHAR\n sa_type = supported_types[\"VARCHAR\"]\n column_type_options = {}\n table = reflect_table_from_oid(table_oid, engine)\n\n try:\n column = MathesarColumn(\n column_data[NAME], sa_type(**column_type_options), nullable=column_nullable,\n server_default=prepared_default_value,\n )\n except DataError as e:\n if type(e.orig) == InvalidTextRepresentation:\n raise InvalidTypeError\n else:\n raise e\n\n table = reflect_table_from_oid(table_oid, engine)\n try:\n with engine.begin() as conn:\n ctx = MigrationContext.configure(conn)\n op = Operations(ctx)\n op.add_column(table.name, column, schema=table.schema)\n except DataError as e:\n if type(e.orig) == InvalidTextRepresentation:\n raise InvalidDefaultError\n elif type(e.orig) == InvalidParameterValue:\n raise InvalidTypeOptionError\n else:\n raise e\n\n return get_mathesar_column_with_engine(\n reflect_table_from_oid(table_oid, engine).columns[column_data[NAME]],\n engine\n )\n\n\ndef _gen_col_name(table, column_name):\n num = 1\n new_column_name = f\"{column_name}_{num}\"\n while new_column_name in table.c:\n num += 1\n new_column_name = f\"{column_name}_{num}\"\n return new_column_name\n\n\nclass CopyColumn(DDLElement):\n def __init__(self, schema, table, to_column, from_column):\n self.schema = schema\n self.table = table\n self.to_column = to_column\n self.from_column = from_column\n\n\[email protected](CopyColumn, \"postgresql\")\ndef compile_copy_column(element, compiler, **_):\n return 'UPDATE \"%s\".\"%s\" SET \"%s\" = \"%s\"' % (\n element.schema,\n element.table,\n element.to_column,\n element.from_column\n )\n\n\ndef _duplicate_column_data(table_oid, from_column_attnum, to_column_attnum, engine):\n table = reflect_table_from_oid(table_oid, engine)\n from_column_name = get_column_name_from_attnum(table_oid, from_column_attnum, engine)\n to_column_name = get_column_name_from_attnum(table_oid, to_column_attnum, engine)\n copy = CopyColumn(\n table.schema,\n table.name,\n to_column_name,\n from_column_name,\n )\n with engine.begin() as conn:\n conn.execute(copy)\n from_default = get_column_default(table_oid, from_column_attnum, engine)\n if from_default is not None:\n with engine.begin() as conn:\n set_column_default(table_oid, to_column_attnum, engine, conn, from_default)\n\n\ndef _duplicate_column_constraints(table_oid, from_column_attnum, to_column_attnum, engine, copy_nullable=True):\n table = reflect_table_from_oid(table_oid, engine)\n from_column_name = get_column_name_from_attnum(table_oid, from_column_attnum, engine)\n if copy_nullable:\n with engine.begin() as conn:\n change_column_nullable(table_oid, to_column_attnum, engine, conn, table.c[from_column_name].nullable)\n constraints = get_column_constraints(from_column_attnum, table_oid, engine)\n for constraint in constraints:\n constraint_type = constraint_utils.get_constraint_type_from_char(constraint.contype)\n if constraint_type != constraint_utils.ConstraintType.UNIQUE.value:\n # Don't allow duplication of primary keys\n continue\n copy_constraint(\n table_oid, engine, constraint, from_column_attnum, to_column_attnum\n )\n\n\ndef duplicate_column(table_oid, copy_from_attnum, engine, new_column_name=None, copy_data=True, copy_constraints=True):\n table = reflect_table_from_oid(table_oid, engine)\n copy_from_name = get_column_name_from_attnum(table_oid, copy_from_attnum, engine)\n from_column = table.c[copy_from_name]\n if new_column_name is None:\n new_column_name = _gen_col_name(table, from_column.name)\n\n column_data = {\n NAME: new_column_name,\n 
\"type\": from_column.type.compile(dialect=engine.dialect),\n NULLABLE: True,\n }\n new_column = create_column(engine, table_oid, column_data)\n new_column_attnum = get_column_attnum_from_name(table_oid, new_column.name, engine)\n if copy_data:\n _duplicate_column_data(\n table_oid,\n copy_from_attnum,\n new_column_attnum,\n engine\n )\n\n if copy_constraints:\n _duplicate_column_constraints(\n table_oid,\n copy_from_attnum,\n new_column_attnum,\n engine,\n copy_nullable=copy_data\n )\n\n table = reflect_table_from_oid(table_oid, engine)\n column_name = get_column_name_from_attnum(table_oid, new_column_attnum, engine)\n return get_mathesar_column_with_engine(table.c[column_name], engine)\n", "path": "db/columns/operations/create.py"}]}
| 4,041 | 483 |
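The serializer code quoted in the record above validates that a column request is built either from scratch (name/type fields) or by duplicating an existing column (source_column plus the copy flags), never a mix of the two groups. The sketch below restates that mutual-exclusion check as plain Python; the helper name and dict payload are illustrative only and are not part of mathesar's API.

```python
# Sketch of the "from scratch" vs. "duplicate" field-group check (hypothetical helper).
FROM_SCRATCH_SPECIFIC = {"type", "nullable", "primary_key"}
FROM_DUPE_SPECIFIC = {"source_column", "copy_source_data", "copy_source_constraints"}

def check_column_payload(payload: dict) -> None:
    scratch_in = FROM_SCRATCH_SPECIFIC & payload.keys()
    dupe_in = FROM_DUPE_SPECIFIC & payload.keys()
    if scratch_in and dupe_in:
        raise ValueError(f"{sorted(scratch_in)} cannot be combined with {sorted(dupe_in)}")
    # Default to the "from scratch" required fields unless only duplication fields appear.
    required = {"source_column"} if (dupe_in and not scratch_in) else {"name", "type"}
    missing = required - payload.keys()
    if missing:
        raise ValueError(f"Missing required fields: {sorted(missing)}")

check_column_payload({"name": "age", "type": "INTEGER"})        # accepted
# check_column_payload({"type": "TEXT", "source_column": 3})    # would raise ValueError
```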
gh_patches_debug_35781
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-5114
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update "Masking out the solar disk" example to use maputils function
Example: https://docs.sunpy.org/en/stable/generated/gallery/computer_vision_techniques/mask_disk.html
Update to use `sunpy.map.coordinate_is_on_solar_disk()`
</issue>
<code>
[start of examples/computer_vision_techniques/mask_disk.py]
1 """
2 ==========================
3 Masking out the solar disk
4 ==========================
5
6 How to mask out all emission from the solar disk.
7 """
8 import matplotlib.pyplot as plt
9 import numpy as np
10 import numpy.ma as ma
11
12 import sunpy.map
13 from sunpy.data.sample import AIA_171_IMAGE
14 from sunpy.map.maputils import all_coordinates_from_map
15
16 ###############################################################################
17 # We start with the sample data
18 aia = sunpy.map.Map(AIA_171_IMAGE)
19
20 ###############################################################################
21 # A utility function gives us access to the helioprojective coordinate of each
22 # pixels. We can use that to create a new array which
23 # contains the normalized radial position for each pixel.
24 hpc_coords = all_coordinates_from_map(aia)
25 r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs
26
27 ###############################################################################
28 # With this information, we create a mask where all values which are less then
29 # the solar radius are masked. We also make a slight change to the colormap
30 # so that masked values are shown as black instead of the default white.
31 mask = ma.masked_less_equal(r, 1)
32 palette = aia.cmap
33 palette.set_bad('black')
34
35 ###############################################################################
36 # Finally we create a new map with our new mask.
37 scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)
38
39 ###############################################################################
40 # Let's plot the results using our modified colormap
41 fig = plt.figure()
42 plt.subplot(projection=scaled_map)
43 scaled_map.plot(cmap=palette)
44 scaled_map.draw_limb()
45 plt.show()
46
[end of examples/computer_vision_techniques/mask_disk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/computer_vision_techniques/mask_disk.py b/examples/computer_vision_techniques/mask_disk.py
--- a/examples/computer_vision_techniques/mask_disk.py
+++ b/examples/computer_vision_techniques/mask_disk.py
@@ -6,12 +6,10 @@
How to mask out all emission from the solar disk.
"""
import matplotlib.pyplot as plt
-import numpy as np
-import numpy.ma as ma
import sunpy.map
from sunpy.data.sample import AIA_171_IMAGE
-from sunpy.map.maputils import all_coordinates_from_map
+from sunpy.map.maputils import all_coordinates_from_map, coordinate_is_on_solar_disk
###############################################################################
# We start with the sample data
@@ -19,22 +17,22 @@
###############################################################################
# A utility function gives us access to the helioprojective coordinate of each
-# pixels. We can use that to create a new array which
-# contains the normalized radial position for each pixel.
+# pixels. We can use that to create a new array of all the coordinates
+# that are on the solar disk.
hpc_coords = all_coordinates_from_map(aia)
-r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs
###############################################################################
-# With this information, we create a mask where all values which are less then
-# the solar radius are masked. We also make a slight change to the colormap
-# so that masked values are shown as black instead of the default white.
-mask = ma.masked_less_equal(r, 1)
+# Now, we can create a mask from the coordinates by using another utility
+# function that gives us a mask that has `True` for those coordinates that are
+# on the solar disk. We also make a slight change to the colormap so that
+# masked values are shown as black instead of the default white.
+mask = coordinate_is_on_solar_disk(hpc_coords)
palette = aia.cmap
palette.set_bad('black')
###############################################################################
# Finally we create a new map with our new mask.
-scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)
+scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask)
###############################################################################
# Let's plot the results using our modified colormap
|
{"golden_diff": "diff --git a/examples/computer_vision_techniques/mask_disk.py b/examples/computer_vision_techniques/mask_disk.py\n--- a/examples/computer_vision_techniques/mask_disk.py\n+++ b/examples/computer_vision_techniques/mask_disk.py\n@@ -6,12 +6,10 @@\n How to mask out all emission from the solar disk.\n \"\"\"\n import matplotlib.pyplot as plt\n-import numpy as np\n-import numpy.ma as ma\n \n import sunpy.map\n from sunpy.data.sample import AIA_171_IMAGE\n-from sunpy.map.maputils import all_coordinates_from_map\n+from sunpy.map.maputils import all_coordinates_from_map, coordinate_is_on_solar_disk\n \n ###############################################################################\n # We start with the sample data\n@@ -19,22 +17,22 @@\n \n ###############################################################################\n # A utility function gives us access to the helioprojective coordinate of each\n-# pixels. We can use that to create a new array which\n-# contains the normalized radial position for each pixel.\n+# pixels. We can use that to create a new array of all the coordinates\n+# that are on the solar disk.\n hpc_coords = all_coordinates_from_map(aia)\n-r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs\n \n ###############################################################################\n-# With this information, we create a mask where all values which are less then\n-# the solar radius are masked. We also make a slight change to the colormap\n-# so that masked values are shown as black instead of the default white.\n-mask = ma.masked_less_equal(r, 1)\n+# Now, we can create a mask from the coordinates by using another utility\n+# function that gives us a mask that has `True` for those coordinates that are\n+# on the solar disk. We also make a slight change to the colormap so that\n+# masked values are shown as black instead of the default white.\n+mask = coordinate_is_on_solar_disk(hpc_coords)\n palette = aia.cmap\n palette.set_bad('black')\n \n ###############################################################################\n # Finally we create a new map with our new mask.\n-scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)\n+scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask)\n \n ###############################################################################\n # Let's plot the results using our modified colormap\n", "issue": "Update \"Masking out the solar disk\" example to use maputils function\nExample: https://docs.sunpy.org/en/stable/generated/gallery/computer_vision_techniques/mask_disk.html\r\n\r\nUpdate to use `sunpy.map.coordinate_is_on_solar_disk()`\n", "before_files": [{"content": "\"\"\"\n==========================\nMasking out the solar disk\n==========================\n\nHow to mask out all emission from the solar disk.\n\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpy.ma as ma\n\nimport sunpy.map\nfrom sunpy.data.sample import AIA_171_IMAGE\nfrom sunpy.map.maputils import all_coordinates_from_map\n\n###############################################################################\n# We start with the sample data\naia = sunpy.map.Map(AIA_171_IMAGE)\n\n###############################################################################\n# A utility function gives us access to the helioprojective coordinate of each\n# pixels. 
We can use that to create a new array which\n# contains the normalized radial position for each pixel.\nhpc_coords = all_coordinates_from_map(aia)\nr = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs\n\n###############################################################################\n# With this information, we create a mask where all values which are less then\n# the solar radius are masked. We also make a slight change to the colormap\n# so that masked values are shown as black instead of the default white.\nmask = ma.masked_less_equal(r, 1)\npalette = aia.cmap\npalette.set_bad('black')\n\n###############################################################################\n# Finally we create a new map with our new mask.\nscaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)\n\n###############################################################################\n# Let's plot the results using our modified colormap\nfig = plt.figure()\nplt.subplot(projection=scaled_map)\nscaled_map.plot(cmap=palette)\nscaled_map.draw_limb()\nplt.show()\n", "path": "examples/computer_vision_techniques/mask_disk.py"}]}
| 1,027 | 513 |
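Condensed, the patched example in this record reduces to a handful of calls. The sketch below mirrors the imports and calls shown in the golden diff and assumes sunpy and its sample data are installed.

```python
import sunpy.map
from sunpy.data.sample import AIA_171_IMAGE
from sunpy.map.maputils import all_coordinates_from_map, coordinate_is_on_solar_disk

aia = sunpy.map.Map(AIA_171_IMAGE)
hpc_coords = all_coordinates_from_map(aia)         # coordinate of every pixel
on_disk = coordinate_is_on_solar_disk(hpc_coords)  # boolean mask, True on the disk
masked_map = sunpy.map.Map(aia.data, aia.meta, mask=on_disk)  # on-disk pixels are masked out
```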
gh_patches_debug_7803
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-712
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Python 3 to setup.py
We need to note in the setup.py that Python 3 is supported.
</issue>
<code>
[start of setup.py]
1 """Use the following command to install retriever: python setup.py install"""
2 from __future__ import absolute_import
3
4 from setuptools import setup
5 from pkg_resources import parse_version
6 import platform
7
8
9 current_platform = platform.system().lower()
10 extra_includes = []
11 if current_platform == "darwin":
12 try:
13 import py2app
14 except ImportError:
15 pass
16 extra_includes = []
17 elif current_platform == "windows":
18 try:
19 import py2exe
20 except ImportError:
21 pass
22 import sys
23 extra_includes = ['pyodbc', 'inspect']
24 sys.path.append(
25 "C:\\Windows\\winsxs\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91")
26
27 __version__ = 'v2.0.dev'
28 with open("_version.py", "w") as version_file:
29 version_file.write("__version__ = " + "'" + __version__ + "'\n")
30 version_file.close()
31
32
33 def clean_version(v):
34 return parse_version(v).__repr__().lstrip("<Version('").rstrip("')>")
35
36 packages = [
37 'retriever.lib',
38 'retriever.engines',
39 'retriever',
40 ]
41
42 includes = [
43 'xlrd',
44 'future'
45 'pymysql',
46 'psycopg2',
47 'sqlite3',
48 ] + extra_includes
49
50 excludes = [
51 'pyreadline',
52 'doctest',
53 'optparse',
54 'getopt',
55 'pickle',
56 'calendar',
57 'pdb',
58 'inspect',
59 'email',
60 'pywin', 'pywin.debugger',
61 'pywin.debugger.dbgcon',
62 'pywin.dialogs', 'pywin.dialogs.list',
63 'Tkconstants', 'Tkinter', 'tcl',
64 ]
65
66 setup(name='retriever',
67 version=clean_version(__version__),
68 description='Data Retriever',
69 author='Ben Morris, Ethan White, Henry Senyondo',
70 author_email='[email protected]',
71 url='https://github.com/weecology/retriever',
72 classifiers=['Intended Audience :: Science/Research',
73 'License :: OSI Approved :: MIT License',
74 'Programming Language :: Python',
75 'Programming Language :: Python :: 2', ],
76 packages=packages,
77 package_dir={
78 'retriever': ''
79 },
80 entry_points={
81 'console_scripts': [
82 'retriever = retriever.__main__:main',
83 ],
84 },
85 install_requires=[
86 'xlrd',
87 'future'
88 ],
89
90 # py2exe flags
91 console=[{'script': "__main__.py",
92 'dest_base': "retriever",
93 'icon_resources': [(1, 'icon.ico')]
94 }],
95 zipfile=None,
96
97 # py2app flags
98 app=['__main__.py'],
99 data_files=[('', ['CITATION'])],
100 setup_requires=['py2app'] if current_platform == 'darwin' else [],
101
102 # options
103 # optimize is set to 1 of py2app to avoid errors with pymysql
104 # bundle_files = 1 or 2 was causing failed builds so we moved
105 # to bundle_files = 3 and Inno Setup
106 options={'py2exe': {'bundle_files': 3,
107 'compressed': 2,
108 'optimize': 1,
109 'packages': packages,
110 'includes': includes,
111 'excludes': excludes,
112 },
113 'py2app': {'packages': ['retriever'],
114 'includes': includes,
115 'site_packages': True,
116 'resources': [],
117 'optimize': 1,
118 'argv_emulation': True,
119 'no_chdir': True,
120 'iconfile': 'osx_icon.icns',
121 },
122 },
123 )
124
125
126 try:
127 from retriever.compile import compile
128 from retriever.lib.repository import check_for_updates
129 compile()
130 check_for_updates()
131 except:
132 pass
133
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,8 @@
classifiers=['Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
- 'Programming Language :: Python :: 2', ],
+ 'Programming Language :: Python :: 2',
+ 'Programming Language :: Python :: 3',],
packages=packages,
package_dir={
'retriever': ''
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,7 +72,8 @@\n classifiers=['Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n- 'Programming Language :: Python :: 2', ],\n+ 'Programming Language :: Python :: 2',\n+ 'Programming Language :: Python :: 3',],\n packages=packages,\n package_dir={\n 'retriever': ''\n", "issue": "Add Python 3 to setup.py\nWe need to note in the setup.py that Python 3 is supported.\n", "before_files": [{"content": "\"\"\"Use the following command to install retriever: python setup.py install\"\"\"\nfrom __future__ import absolute_import\n\nfrom setuptools import setup\nfrom pkg_resources import parse_version\nimport platform\n\n\ncurrent_platform = platform.system().lower()\nextra_includes = []\nif current_platform == \"darwin\":\n try:\n import py2app\n except ImportError:\n pass\n extra_includes = []\nelif current_platform == \"windows\":\n try:\n import py2exe\n except ImportError:\n pass\n import sys\n extra_includes = ['pyodbc', 'inspect']\n sys.path.append(\n \"C:\\\\Windows\\\\winsxs\\\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\")\n\n__version__ = 'v2.0.dev'\nwith open(\"_version.py\", \"w\") as version_file:\n version_file.write(\"__version__ = \" + \"'\" + __version__ + \"'\\n\")\n version_file.close()\n\n\ndef clean_version(v):\n return parse_version(v).__repr__().lstrip(\"<Version('\").rstrip(\"')>\")\n\npackages = [\n 'retriever.lib',\n 'retriever.engines',\n 'retriever',\n]\n\nincludes = [\n 'xlrd',\n 'future'\n 'pymysql',\n 'psycopg2',\n 'sqlite3',\n] + extra_includes\n\nexcludes = [\n 'pyreadline',\n 'doctest',\n 'optparse',\n 'getopt',\n 'pickle',\n 'calendar',\n 'pdb',\n 'inspect',\n 'email',\n 'pywin', 'pywin.debugger',\n 'pywin.debugger.dbgcon',\n 'pywin.dialogs', 'pywin.dialogs.list',\n 'Tkconstants', 'Tkinter', 'tcl',\n]\n\nsetup(name='retriever',\n version=clean_version(__version__),\n description='Data Retriever',\n author='Ben Morris, Ethan White, Henry Senyondo',\n author_email='[email protected]',\n url='https://github.com/weecology/retriever',\n classifiers=['Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2', ],\n packages=packages,\n package_dir={\n 'retriever': ''\n },\n entry_points={\n 'console_scripts': [\n 'retriever = retriever.__main__:main',\n ],\n },\n install_requires=[\n 'xlrd',\n 'future'\n ],\n\n # py2exe flags\n console=[{'script': \"__main__.py\",\n 'dest_base': \"retriever\",\n 'icon_resources': [(1, 'icon.ico')]\n }],\n zipfile=None,\n\n # py2app flags\n app=['__main__.py'],\n data_files=[('', ['CITATION'])],\n setup_requires=['py2app'] if current_platform == 'darwin' else [],\n\n # options\n # optimize is set to 1 of py2app to avoid errors with pymysql\n # bundle_files = 1 or 2 was causing failed builds so we moved\n # to bundle_files = 3 and Inno Setup\n options={'py2exe': {'bundle_files': 3,\n 'compressed': 2,\n 'optimize': 1,\n 'packages': packages,\n 'includes': includes,\n 'excludes': excludes,\n },\n 'py2app': {'packages': ['retriever'],\n 'includes': includes,\n 'site_packages': True,\n 'resources': [],\n 'optimize': 1,\n 'argv_emulation': True,\n 'no_chdir': True,\n 'iconfile': 'osx_icon.icns',\n },\n },\n )\n\n\ntry:\n from retriever.compile import compile\n from retriever.lib.repository import check_for_updates\n compile()\n check_for_updates()\nexcept:\n 
pass\n", "path": "setup.py"}]}
| 1,732 | 111 |
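The fix in this record is a one-line trove classifier addition. Classifiers are purely informational; projects that also want pip to enforce supported interpreters typically pair them with `python_requires`. The snippet below is a generic setuptools sketch, not part of the retriever patch.

```python
from setuptools import setup

setup(
    name="example-package",            # placeholder project name
    version="0.1",
    classifiers=[
        "Programming Language :: Python",
        "Programming Language :: Python :: 2",
        "Programming Language :: Python :: 3",
    ],
    # Not in the patch above: reject unsupported interpreters at install time.
    python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*",
)
```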
gh_patches_debug_38826
|
rasdani/github-patches
|
git_diff
|
sparcs-kaist__otlplus-979
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CHORE] Cache the serialized results of the graduation planner's new models
## Motivation
**Is your feature request related to a problem? Please describe.**
Caching is implemented for OTL's main models, but the models newly created for the graduation planner do not have caching applied yet.
For the beta release we shipped them as-is for the time being, but caching needs to be introduced.
The track data in particular is loaded when the page is opened, and a large amount is loaded at once, so there is considerable room for performance degradation.
## Description
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
## Screenshots
(OPTIONAL) If applicable, add screenshots to help explain your feature request.
## Development environment
- OS: [e.g. macOS]
- ```python --version```:
- ```node --version```:
## Test environment
(OPTIONAL)
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Web Version: [e.g. 1.1.0]
## Additional information
(OPTIONAL) Add any other context or screenshots about the feature request here.
</issue>
<code>
[start of apps/graduation/models.py]
1 from django.db import models
2
3 from apps.subject.models import Department
4
5
6 UNBOUND_START_YEAR = 2000
7 UNBOUND_END_YEAR = 2100
8
9
10 class GeneralTrack(models.Model):
11 start_year = models.IntegerField(db_index=True)
12 end_year = models.IntegerField(db_index=True)
13 is_foreign = models.BooleanField(db_index=True)
14
15 total_credit = models.IntegerField()
16 total_au = models.IntegerField()
17 basic_required = models.IntegerField()
18 basic_elective = models.IntegerField()
19 thesis_study = models.IntegerField()
20 thesis_study_doublemajor = models.IntegerField()
21 general_required_credit = models.IntegerField()
22 general_required_au = models.IntegerField()
23 humanities = models.IntegerField()
24 humanities_doublemajor = models.IntegerField()
25
26 class Meta:
27 unique_together = [["start_year", "is_foreign"], ["end_year", "is_foreign"]]
28
29 def to_json(self):
30 result = {
31 "id": self.id,
32 "start_year": self.start_year,
33 "end_year": self.end_year,
34 "is_foreign": self.is_foreign,
35 "total_credit": self.total_credit,
36 "total_au": self.total_au,
37 "basic_required": self.basic_required,
38 "basic_elective": self.basic_elective,
39 "thesis_study": self.thesis_study,
40 "thesis_study_doublemajor": self.thesis_study_doublemajor,
41 "general_required_credit": self.general_required_credit,
42 "general_required_au": self.general_required_au,
43 "humanities": self.humanities,
44 "humanities_doublemajor": self.humanities_doublemajor,
45 }
46
47 return result
48
49
50 class MajorTrack(models.Model):
51 start_year = models.IntegerField(db_index=True)
52 end_year = models.IntegerField(db_index=True)
53 department = models.ForeignKey(Department,
54 on_delete=models.CASCADE, db_index=True)
55
56 basic_elective_doublemajor = models.IntegerField()
57 major_required = models.IntegerField()
58 major_elective = models.IntegerField()
59
60 class Meta:
61 unique_together = [["start_year", "department"], ["end_year", "department"]]
62
63 def to_json(self):
64 result = {
65 "id": self.id,
66 "start_year": self.start_year,
67 "end_year": self.end_year,
68 "department": self.department.to_json(nested=False),
69 "basic_elective_doublemajor": self.basic_elective_doublemajor,
70 "major_required": self.major_required,
71 "major_elective": self.major_elective,
72 }
73
74 return result
75
76
77 class AdditionalTrack(models.Model):
78 ADDITIONAL_TYPE_CHOICES = [
79 ('DOUBLE', 'DOUBLE'),
80 ('MINOR', 'MINOR'),
81 ('ADVANCED', 'ADVANCED'),
82 ('INTERDISCIPLINARY', 'INTERDISCIPLINARY'),
83 ]
84
85 start_year = models.IntegerField(db_index=True)
86 end_year = models.IntegerField(db_index=True)
87 type = models.CharField(db_index=True, max_length=32, choices=ADDITIONAL_TYPE_CHOICES)
88 department = models.ForeignKey(Department,
89 null=True, blank=True,
90 on_delete=models.CASCADE, db_index=True)
91
92 major_required = models.IntegerField()
93 major_elective = models.IntegerField()
94
95 class Meta:
96 unique_together = [["start_year", "type", "department"], ["end_year", "type", "department"]]
97
98 def to_json(self):
99 result = {
100 "id": self.id,
101 "start_year": self.start_year,
102 "end_year": self.end_year,
103 "type": self.type,
104 "department": self.department.to_json(nested=False) if self.department else None,
105 "major_required": self.major_required,
106 "major_elective": self.major_elective,
107 }
108
109 return result
110
[end of apps/graduation/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/graduation/models.py b/apps/graduation/models.py
--- a/apps/graduation/models.py
+++ b/apps/graduation/models.py
@@ -1,4 +1,5 @@
from django.db import models
+from django.core.cache import cache
from apps.subject.models import Department
@@ -26,7 +27,15 @@
class Meta:
unique_together = [["start_year", "is_foreign"], ["end_year", "is_foreign"]]
+ def get_cache_key(self):
+ return "generaltrack:%d-%d-%s" % (self.start_year, self.end_year, self.is_foreign)
+
def to_json(self):
+ cache_id = self.get_cache_key()
+ result_cached = cache.get(cache_id)
+ if result_cached is not None:
+ return result_cached
+
result = {
"id": self.id,
"start_year": self.start_year,
@@ -44,6 +53,8 @@
"humanities_doublemajor": self.humanities_doublemajor,
}
+ cache.set(cache_id, result, 60 * 60)
+
return result
@@ -60,7 +71,15 @@
class Meta:
unique_together = [["start_year", "department"], ["end_year", "department"]]
+ def get_cache_key(self):
+ return "majortrack:%d-%d-%d" % (self.start_year, self.end_year, self.department.id)
+
def to_json(self):
+ cache_id = self.get_cache_key()
+ result_cached = cache.get(cache_id)
+ if result_cached is not None:
+ return result_cached
+
result = {
"id": self.id,
"start_year": self.start_year,
@@ -71,6 +90,8 @@
"major_elective": self.major_elective,
}
+ cache.set(cache_id, result, 60 * 60)
+
return result
@@ -95,7 +116,15 @@
class Meta:
unique_together = [["start_year", "type", "department"], ["end_year", "type", "department"]]
+ def get_cache_key(self):
+ return "additionaltrack:%d-%d-%s-%d" % (self.start_year, self.end_year, self.type, self.department.id if self.department else 0)
+
def to_json(self):
+ cache_id = self.get_cache_key()
+ result_cached = cache.get(cache_id)
+ if result_cached is not None:
+ return result_cached
+
result = {
"id": self.id,
"start_year": self.start_year,
@@ -106,4 +135,6 @@
"major_elective": self.major_elective,
}
+ cache.set(cache_id, result, 60 * 60)
+
return result
|
{"golden_diff": "diff --git a/apps/graduation/models.py b/apps/graduation/models.py\n--- a/apps/graduation/models.py\n+++ b/apps/graduation/models.py\n@@ -1,4 +1,5 @@\n from django.db import models\n+from django.core.cache import cache\n \n from apps.subject.models import Department\n \n@@ -26,7 +27,15 @@\n class Meta:\n unique_together = [[\"start_year\", \"is_foreign\"], [\"end_year\", \"is_foreign\"]]\n \n+ def get_cache_key(self):\n+ return \"generaltrack:%d-%d-%s\" % (self.start_year, self.end_year, self.is_foreign)\n+\n def to_json(self):\n+ cache_id = self.get_cache_key()\n+ result_cached = cache.get(cache_id)\n+ if result_cached is not None:\n+ return result_cached\n+\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n@@ -44,6 +53,8 @@\n \"humanities_doublemajor\": self.humanities_doublemajor,\n }\n \n+ cache.set(cache_id, result, 60 * 60)\n+\n return result\n \n \n@@ -60,7 +71,15 @@\n class Meta:\n unique_together = [[\"start_year\", \"department\"], [\"end_year\", \"department\"]]\n \n+ def get_cache_key(self):\n+ return \"majortrack:%d-%d-%d\" % (self.start_year, self.end_year, self.department.id)\n+\n def to_json(self):\n+ cache_id = self.get_cache_key()\n+ result_cached = cache.get(cache_id)\n+ if result_cached is not None:\n+ return result_cached\n+\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n@@ -71,6 +90,8 @@\n \"major_elective\": self.major_elective,\n }\n \n+ cache.set(cache_id, result, 60 * 60)\n+\n return result\n \n \n@@ -95,7 +116,15 @@\n class Meta:\n unique_together = [[\"start_year\", \"type\", \"department\"], [\"end_year\", \"type\", \"department\"]]\n \n+ def get_cache_key(self):\n+ return \"additionaltrack:%d-%d-%s-%d\" % (self.start_year, self.end_year, self.type, self.department.id if self.department else 0)\n+\n def to_json(self):\n+ cache_id = self.get_cache_key()\n+ result_cached = cache.get(cache_id)\n+ if result_cached is not None:\n+ return result_cached\n+\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n@@ -106,4 +135,6 @@\n \"major_elective\": self.major_elective,\n }\n \n+ cache.set(cache_id, result, 60 * 60)\n+\n return result\n", "issue": "[CHORE] \uc878\uc5c5\ud50c\ub798\ub108 \uc2e0\uaddc model\uc758 serialize\uacb0\uacfc \uce90\uc2f1\n## \ub3d9\uae30\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\n\r\nOTL\uc758 \uc8fc\uc694 \ubaa8\ub378\uc5d0\ub294 \uce90\uc2dc\uac00 \uad6c\ud604\ub418\uc5b4 \uc788\uc73c\ub098 \uc878\uc5c5\ud50c\ub798\ub108\uc5d0\uc11c \uc0c8\ub85c \uc0dd\uc131\ub41c model\uc740 \uc544\uc9c1 \uce90\uc2f1\uc774 \uc801\uc6a9\ub418\uc5b4 \uc788\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\r\n\ubca0\ud0c0 \ucd9c\uc2dc \ub54c\ub294 \uc6b0\uc120 \uc784\uc2dc\ub85c \uadf8\ub300\ub85c \ucd9c\uc2dc\ud558\uc600\uc9c0\ub9cc \uce90\uc2dc \ub3c4\uc785\uc774 \ud544\uc694\ud569\ub2c8\ub2e4.\r\n\ud2b9\ud788 \ud2b8\ub799 \ubd80\ubd84\uc740 \ud398\uc774\uc9c0 \uc811\uc18d \uc2dc\uc5d0 \ub85c\ub529\ub418\uace0 \ud55c\ubc88\uc5d0 \ub9ce\uc740 \uc591\uc774 \ub85c\ub4dc\ub418\uae30 \ub54c\ubb38\uc5d0 \uc131\ub2a5\uc774 \uc0c1\ub2f9\ud788 \uc800\ud558\ub420 \uc5ec\uc9c0\uac00 \uc788\uc2b5\ub2c8\ub2e4.\r\n\r\n## \uc124\uba85\r\n\r\n**Describe the solution you'd like.**\r\n\r\nA clear and concise description of what you want to happen.\r\n\r\n## \uc2a4\ud06c\ub9b0\uc0f7\r\n\r\n(OPTIONAL) If applicable, add screenshots to help explain your feature request.\r\n\r\n## \uac1c\ubc1c \ud658\uacbd\r\n\r\n- OS: [e.g. 
macOS]\r\n- ```python --version```:\r\n- ```node --version```:\r\n\r\n## \ud14c\uc2a4\ud2b8 \ud658\uacbd\r\n\r\n(OPTIONAL)\r\n\r\n- Device: [e.g. iPhone6]\r\n- OS: [e.g. iOS8.1]\r\n- Web Version: [e.g. 1.1.0]\r\n\r\n## \ucd94\uac00 \uc815\ubcf4\r\n\r\n(OPTIONAL) Add any other context or screenshots about the feature request here.\r\n\n", "before_files": [{"content": "from django.db import models\n\nfrom apps.subject.models import Department\n\n\nUNBOUND_START_YEAR = 2000\nUNBOUND_END_YEAR = 2100\n\n\nclass GeneralTrack(models.Model):\n start_year = models.IntegerField(db_index=True)\n end_year = models.IntegerField(db_index=True)\n is_foreign = models.BooleanField(db_index=True)\n\n total_credit = models.IntegerField()\n total_au = models.IntegerField()\n basic_required = models.IntegerField()\n basic_elective = models.IntegerField()\n thesis_study = models.IntegerField()\n thesis_study_doublemajor = models.IntegerField()\n general_required_credit = models.IntegerField()\n general_required_au = models.IntegerField()\n humanities = models.IntegerField()\n humanities_doublemajor = models.IntegerField()\n\n class Meta:\n unique_together = [[\"start_year\", \"is_foreign\"], [\"end_year\", \"is_foreign\"]]\n\n def to_json(self):\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n \"end_year\": self.end_year,\n \"is_foreign\": self.is_foreign,\n \"total_credit\": self.total_credit,\n \"total_au\": self.total_au,\n \"basic_required\": self.basic_required,\n \"basic_elective\": self.basic_elective,\n \"thesis_study\": self.thesis_study,\n \"thesis_study_doublemajor\": self.thesis_study_doublemajor,\n \"general_required_credit\": self.general_required_credit,\n \"general_required_au\": self.general_required_au,\n \"humanities\": self.humanities,\n \"humanities_doublemajor\": self.humanities_doublemajor,\n }\n\n return result\n\n\nclass MajorTrack(models.Model):\n start_year = models.IntegerField(db_index=True)\n end_year = models.IntegerField(db_index=True)\n department = models.ForeignKey(Department,\n on_delete=models.CASCADE, db_index=True)\n\n basic_elective_doublemajor = models.IntegerField()\n major_required = models.IntegerField()\n major_elective = models.IntegerField()\n\n class Meta:\n unique_together = [[\"start_year\", \"department\"], [\"end_year\", \"department\"]]\n\n def to_json(self):\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n \"end_year\": self.end_year,\n \"department\": self.department.to_json(nested=False),\n \"basic_elective_doublemajor\": self.basic_elective_doublemajor,\n \"major_required\": self.major_required,\n \"major_elective\": self.major_elective,\n }\n\n return result\n\n\nclass AdditionalTrack(models.Model):\n ADDITIONAL_TYPE_CHOICES = [\n ('DOUBLE', 'DOUBLE'),\n ('MINOR', 'MINOR'),\n ('ADVANCED', 'ADVANCED'),\n ('INTERDISCIPLINARY', 'INTERDISCIPLINARY'),\n ]\n\n start_year = models.IntegerField(db_index=True)\n end_year = models.IntegerField(db_index=True)\n type = models.CharField(db_index=True, max_length=32, choices=ADDITIONAL_TYPE_CHOICES)\n department = models.ForeignKey(Department,\n null=True, blank=True,\n on_delete=models.CASCADE, db_index=True)\n\n major_required = models.IntegerField()\n major_elective = models.IntegerField()\n\n class Meta:\n unique_together = [[\"start_year\", \"type\", \"department\"], [\"end_year\", \"type\", \"department\"]]\n\n def to_json(self):\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n \"end_year\": self.end_year,\n \"type\": self.type,\n \"department\": 
self.department.to_json(nested=False) if self.department else None,\n \"major_required\": self.major_required,\n \"major_elective\": self.major_elective,\n }\n\n return result\n", "path": "apps/graduation/models.py"}]}
| 1,867 | 648 |
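The golden diff above wraps each `to_json()` in a read-through cache keyed on the fields that make the row unique. The standalone sketch below shows the same pattern with Django's low-level cache API; the stub class is hypothetical and a configured Django cache backend (`settings.CACHES`) is assumed.

```python
from dataclasses import dataclass
from django.core.cache import cache

@dataclass
class TrackStub:                                  # stand-in for the real model
    start_year: int
    end_year: int
    department_id: int

    def get_cache_key(self) -> str:
        return "majortrack:%d-%d-%d" % (self.start_year, self.end_year, self.department_id)

    def to_json(self) -> dict:
        cache_id = self.get_cache_key()
        cached = cache.get(cache_id)
        if cached is not None:                    # cache hit: skip re-serialization
            return cached
        result = {"start_year": self.start_year, "end_year": self.end_year}
        cache.set(cache_id, result, 60 * 60)      # keep for one hour, as in the diff
        return result
```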
gh_patches_debug_28860
|
rasdani/github-patches
|
git_diff
|
rotki__rotki-4382
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failed at database upgrade from version 31 to 32
## Problem Definition
Login fails with following message:
`Failed at database upgrade from version 31 to 32: UNIQUE constraint failed: history_events_copy.event_identifier, history_events_copy.sequence_index`
## Logs
Which logs do you need, if you need any?
### System Description
Operating system: Debian Buster (using the AppImage)
Rotki version: 1.24.0
</issue>
<code>
[start of rotkehlchen/db/upgrades/v31_v32.py]
1 from collections import defaultdict
2 from typing import TYPE_CHECKING, Dict, Set
3
4 from rotkehlchen.db.constants import BINANCE_MARKETS_KEY
5
6 if TYPE_CHECKING:
7 from sqlite3 import Cursor
8
9 from rotkehlchen.db.dbhandler import DBHandler
10
11
12 def _upgrade_history_events(cursor: 'Cursor') -> None:
13 cursor.execute("""
14 CREATE TABLE IF NOT EXISTS history_events_copy (
15 identifier INTEGER NOT NULL PRIMARY KEY,
16 event_identifier TEXT NOT NULL,
17 sequence_index INTEGER NOT NULL,
18 timestamp INTEGER NOT NULL,
19 location TEXT NOT NULL,
20 location_label TEXT,
21 asset TEXT NOT NULL,
22 amount TEXT NOT NULL,
23 usd_value TEXT NOT NULL,
24 notes TEXT,
25 type TEXT NOT NULL,
26 subtype TEXT,
27 counterparty TEXT,
28 extra_data TEXT,
29 UNIQUE(event_identifier, sequence_index)
30 );""")
31 cursor.execute('UPDATE history_events SET timestamp = timestamp / 10;')
32 cursor.execute('UPDATE history_events SET subtype = "deposit asset" WHERE subtype = "staking deposit asset";') # noqa: E501
33 cursor.execute('UPDATE history_events SET subtype = "receive wrapped" WHERE subtype = "staking receive asset";') # noqa: E501
34 cursor.execute('UPDATE history_events SET subtype = "remove asset", type = "staking" WHERE subtype = "staking remove asset" AND type = "unstaking";') # noqa: E501
35 cursor.execute('UPDATE history_events SET subtype = "return wrapped", type = "staking" WHERE subtype = "staking receive asset" AND type = "unstaking";') # noqa: E501
36 cursor.execute('UPDATE history_events SET type = "informational" WHERE subtype = "unknown";')
37 cursor.execute("""
38 INSERT INTO history_events_copy (event_identifier, sequence_index, timestamp, location,
39 location_label, asset, amount, usd_value, notes, type, subtype)
40 SELECT event_identifier, sequence_index, timestamp, location, location_label, asset,
41 amount, usd_value, notes, type, subtype
42 FROM history_events;
43 """)
44 cursor.execute('DROP TABLE history_events;')
45 cursor.execute('ALTER TABLE history_events_copy RENAME TO history_events;')
46 cursor.execute(
47 'UPDATE history_events SET subtype="reward" WHERE type="staking" AND subtype IS NULL;',
48 )
49
50
51 def _remove_gitcoin(cursor: 'Cursor') -> None:
52 cursor.execute('DELETE from ledger_actions WHERE identifier IN (SELECT parent_id FROM ledger_actions_gitcoin_data)') # noqa: E501
53 cursor.execute('DELETE from used_query_ranges WHERE name LIKE "gitcoingrants_%"')
54 cursor.execute('DROP TABLE IF exists gitcoin_grant_metadata')
55 cursor.execute('DROP TABLE IF exists ledger_actions_gitcoin_data')
56 cursor.execute('DROP TABLE IF exists gitcoin_tx_type')
57
58
59 def _add_new_tables(cursor: 'Cursor') -> None:
60 cursor.execute('INSERT OR IGNORE INTO location(location, seq) VALUES ("d", 36)')
61 cursor.execute("""
62 CREATE TABLE IF NOT EXISTS ethereum_internal_transactions (
63 parent_tx_hash BLOB NOT NULL,
64 trace_id INTEGER NOT NULL,
65 timestamp INTEGER NOT NULL,
66 block_number INTEGER NOT NULL,
67 from_address TEXT NOT NULL,
68 to_address TEXT,
69 value TEXT NOT NULL,
70 FOREIGN KEY(parent_tx_hash) REFERENCES ethereum_transactions(tx_hash) ON DELETE CASCADE ON UPDATE CASCADE,
71 PRIMARY KEY(parent_tx_hash, trace_id)
72 );""") # noqa: E501
73 cursor.execute("""
74 CREATE TABLE IF NOT EXISTS ethtx_address_mappings (
75 address TEXT NOT NULL,
76 tx_hash BLOB NOT NULL,
77 blockchain TEXT NOT NULL,
78 FOREIGN KEY(blockchain, address) REFERENCES blockchain_accounts(blockchain, account) ON DELETE CASCADE,
79 FOREIGN KEY(tx_hash) references ethereum_transactions(tx_hash) ON UPDATE CASCADE ON DELETE CASCADE,
80 PRIMARY KEY (address, tx_hash, blockchain)
81 );""") # noqa: E501
82 cursor.execute("""
83 CREATE TABLE IF NOT EXISTS evm_tx_mappings (
84 tx_hash BLOB NOT NULL,
85 blockchain TEXT NOT NULL,
86 value TEXT NOT NULL,
87 FOREIGN KEY(tx_hash) references ethereum_transactions(tx_hash) ON UPDATE CASCADE ON DELETE CASCADE,
88 PRIMARY KEY (tx_hash, value)
89 );""") # noqa: E501
90 cursor.execute("""
91 CREATE TABLE IF NOT EXISTS history_events_mappings (
92 parent_identifier INTEGER NOT NULL,
93 value TEXT NOT NULL,
94 FOREIGN KEY(parent_identifier) references history_events(identifier) ON UPDATE CASCADE ON DELETE CASCADE,
95 PRIMARY KEY (parent_identifier, value)
96 );""") # noqa: E501
97 cursor.execute("""
98 CREATE TABLE IF NOT EXISTS ens_mappings (
99 address TEXT NOT NULL PRIMARY KEY,
100 ens_name TEXT UNIQUE,
101 last_update INTEGER NOT NULL
102 );
103 """)
104
105
106 def _refactor_manual_balance_id(cursor: 'Cursor') -> None:
107 cursor.execute("""
108 CREATE TABLE manually_tracked_balances_copy (
109 id INTEGER PRIMARY KEY,
110 asset TEXT NOT NULL,
111 label TEXT NOT NULL,
112 amount TEXT,
113 location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),
114 category CHAR(1) NOT NULL DEFAULT('A') REFERENCES balance_category(category),
115 FOREIGN KEY(asset) REFERENCES assets(identifier) ON UPDATE CASCADE
116 );""")
117 cursor.execute("""
118 INSERT INTO manually_tracked_balances_copy(asset, label, amount, location, category)
119 SELECT asset, label, amount, location, category
120 FROM manually_tracked_balances;
121 """)
122 cursor.execute('DROP TABLE manually_tracked_balances;')
123 cursor.execute(
124 'ALTER TABLE manually_tracked_balances_copy RENAME TO '
125 'manually_tracked_balances;',
126 )
127
128
129 def _update_fee_for_existing_trades(cursor: 'Cursor') -> None:
130 cursor.execute('UPDATE trades SET fee = NULL WHERE fee_currency IS NULL')
131 cursor.execute('UPDATE trades SET fee_currency = NULL WHERE fee IS NULL')
132
133
134 def _update_history_entries_from_kraken(cursor: 'Cursor') -> None:
135 """The logic for kraken was adding additional entries for trades when fee + kfee was
136 being used. This function makes the state of the database consistent with the upgraded
137 logic by:
138 - Removing extra row additions
139 - Make sure that no other event has duplicated sequence indexes
140 """
141 cursor.execute("""
142 DELETE FROM history_events where location="B" AND asset="KFEE" AND
143 type="trade" AND subtype=NULL;
144 """)
145 cursor.execute("""
146 SELECT e.event_identifier, e.sequence_index, e.identifier from history_events e JOIN (SELECT event_identifier,
147 sequence_index, COUNT(*) as cnt FROM history_events GROUP BY event_identifier, sequence_index)
148 other ON e.event_identifier = other.event_identifier and e.sequence_index=other.sequence_index
149 WHERE other.cnt > 1;
150 """) # noqa: E501
151
152 update_tuples = []
153 eventid_to_indices: Dict[str, Set[int]] = defaultdict(set)
154 for event_identifier, sequence_index, identifier in cursor:
155 last_indices = eventid_to_indices.get(event_identifier)
156 if last_indices is None:
157 # Let the first one be the same as it was in the database
158 eventid_to_indices[event_identifier].add(sequence_index)
159 continue
160
161 new_index = sequence_index + 1
162 while new_index in eventid_to_indices[event_identifier]:
163 new_index += 1
164 eventid_to_indices[event_identifier].add(new_index)
165 update_tuples.append((new_index, identifier))
166
167 if len(update_tuples) != 0:
168 cursor.executemany(
169 'UPDATE history_events SET sequence_index=? WHERE identifier=?',
170 update_tuples,
171 )
172
173
174 def _update_settings_name_for_selected_binance_markets(cursor: 'Cursor') -> None:
175 cursor.execute("""
176 UPDATE user_credentials_mappings SET setting_name = ? WHERE setting_name = "PAIRS"
177 """, (BINANCE_MARKETS_KEY,))
178
179
180 def _update_manual_balances_tags(cursor_fetch: 'Cursor', cursor_update: 'Cursor') -> None:
181 manual_balances = cursor_fetch.execute('SELECT id, label FROM manually_tracked_balances')
182 for balance_id, label in manual_balances:
183 cursor_update.execute('UPDATE tag_mappings SET object_reference=? WHERE object_reference=?', (balance_id, label)) # noqa: E501
184
185
186 def upgrade_v31_to_v32(db: 'DBHandler') -> None:
187 """Upgrades the DB from v31 to v32
188 - use new identifiers for the history_events table. The id will be generated by sqlite
189 and will be the column rowid
190
191 -Add the subtype REWARD to staking rewards (before they had type staking
192 and no subtype)
193
194 -Remove all gitcoin grant related data that was pulled from their API and saved in
195 specific tables along with the tables themselves
196
197 -Sets fee to null for existing trades if fee_currency is missing.
198 """
199 primary_cursor = db.conn.cursor()
200 secondary_cursor = db.conn.cursor()
201 _update_history_entries_from_kraken(primary_cursor)
202 _upgrade_history_events(primary_cursor)
203 _remove_gitcoin(primary_cursor)
204 _add_new_tables(primary_cursor)
205 _refactor_manual_balance_id(primary_cursor)
206 _update_fee_for_existing_trades(primary_cursor)
207 _update_settings_name_for_selected_binance_markets(primary_cursor)
208 _update_manual_balances_tags(cursor_fetch=primary_cursor, cursor_update=secondary_cursor)
209 db.conn.commit()
210
[end of rotkehlchen/db/upgrades/v31_v32.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rotkehlchen/db/upgrades/v31_v32.py b/rotkehlchen/db/upgrades/v31_v32.py
--- a/rotkehlchen/db/upgrades/v31_v32.py
+++ b/rotkehlchen/db/upgrades/v31_v32.py
@@ -142,22 +142,20 @@
DELETE FROM history_events where location="B" AND asset="KFEE" AND
type="trade" AND subtype=NULL;
""")
+
+ cursor.execute('SELECT event_identifier, sequence_index from history_events')
+ eventid_to_indices: Dict[str, Set[int]] = defaultdict(set)
+ for event_identifier, sequence_index in cursor:
+ eventid_to_indices[event_identifier].add(sequence_index)
+
cursor.execute("""
SELECT e.event_identifier, e.sequence_index, e.identifier from history_events e JOIN (SELECT event_identifier,
sequence_index, COUNT(*) as cnt FROM history_events GROUP BY event_identifier, sequence_index)
other ON e.event_identifier = other.event_identifier and e.sequence_index=other.sequence_index
WHERE other.cnt > 1;
""") # noqa: E501
-
update_tuples = []
- eventid_to_indices: Dict[str, Set[int]] = defaultdict(set)
for event_identifier, sequence_index, identifier in cursor:
- last_indices = eventid_to_indices.get(event_identifier)
- if last_indices is None:
- # Let the first one be the same as it was in the database
- eventid_to_indices[event_identifier].add(sequence_index)
- continue
-
new_index = sequence_index + 1
while new_index in eventid_to_indices[event_identifier]:
new_index += 1
|
{"golden_diff": "diff --git a/rotkehlchen/db/upgrades/v31_v32.py b/rotkehlchen/db/upgrades/v31_v32.py\n--- a/rotkehlchen/db/upgrades/v31_v32.py\n+++ b/rotkehlchen/db/upgrades/v31_v32.py\n@@ -142,22 +142,20 @@\n DELETE FROM history_events where location=\"B\" AND asset=\"KFEE\" AND\n type=\"trade\" AND subtype=NULL;\n \"\"\")\n+\n+ cursor.execute('SELECT event_identifier, sequence_index from history_events')\n+ eventid_to_indices: Dict[str, Set[int]] = defaultdict(set)\n+ for event_identifier, sequence_index in cursor:\n+ eventid_to_indices[event_identifier].add(sequence_index)\n+\n cursor.execute(\"\"\"\n SELECT e.event_identifier, e.sequence_index, e.identifier from history_events e JOIN (SELECT event_identifier,\n sequence_index, COUNT(*) as cnt FROM history_events GROUP BY event_identifier, sequence_index)\n other ON e.event_identifier = other.event_identifier and e.sequence_index=other.sequence_index\n WHERE other.cnt > 1;\n \"\"\") # noqa: E501\n-\n update_tuples = []\n- eventid_to_indices: Dict[str, Set[int]] = defaultdict(set)\n for event_identifier, sequence_index, identifier in cursor:\n- last_indices = eventid_to_indices.get(event_identifier)\n- if last_indices is None:\n- # Let the first one be the same as it was in the database\n- eventid_to_indices[event_identifier].add(sequence_index)\n- continue\n-\n new_index = sequence_index + 1\n while new_index in eventid_to_indices[event_identifier]:\n new_index += 1\n", "issue": "Failed at database upgrade from version 31 to 32\n## Problem Definition\r\n\r\nLogin fails with following message:\r\n`Failed at database upgrade from version 31 to 32: UNIQUE constraint failed: history_events_copy.event_identifier, history_events_copy.sequence_index`\r\n\r\n## Logs\r\n\r\nWhich logs do you need, if you need any?\r\n\r\n### System Description\r\n\r\nOperating system: Debian Buster (using the AppImage)\r\nRotki version: 1.24.0\r\n\n", "before_files": [{"content": "from collections import defaultdict\nfrom typing import TYPE_CHECKING, Dict, Set\n\nfrom rotkehlchen.db.constants import BINANCE_MARKETS_KEY\n\nif TYPE_CHECKING:\n from sqlite3 import Cursor\n\n from rotkehlchen.db.dbhandler import DBHandler\n\n\ndef _upgrade_history_events(cursor: 'Cursor') -> None:\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS history_events_copy (\n identifier INTEGER NOT NULL PRIMARY KEY,\n event_identifier TEXT NOT NULL,\n sequence_index INTEGER NOT NULL,\n timestamp INTEGER NOT NULL,\n location TEXT NOT NULL,\n location_label TEXT,\n asset TEXT NOT NULL,\n amount TEXT NOT NULL,\n usd_value TEXT NOT NULL,\n notes TEXT,\n type TEXT NOT NULL,\n subtype TEXT,\n counterparty TEXT,\n extra_data TEXT,\n UNIQUE(event_identifier, sequence_index)\n );\"\"\")\n cursor.execute('UPDATE history_events SET timestamp = timestamp / 10;')\n cursor.execute('UPDATE history_events SET subtype = \"deposit asset\" WHERE subtype = \"staking deposit asset\";') # noqa: E501\n cursor.execute('UPDATE history_events SET subtype = \"receive wrapped\" WHERE subtype = \"staking receive asset\";') # noqa: E501\n cursor.execute('UPDATE history_events SET subtype = \"remove asset\", type = \"staking\" WHERE subtype = \"staking remove asset\" AND type = \"unstaking\";') # noqa: E501\n cursor.execute('UPDATE history_events SET subtype = \"return wrapped\", type = \"staking\" WHERE subtype = \"staking receive asset\" AND type = \"unstaking\";') # noqa: E501\n cursor.execute('UPDATE history_events SET type = \"informational\" WHERE subtype = \"unknown\";')\n cursor.execute(\"\"\"\n INSERT INTO 
history_events_copy (event_identifier, sequence_index, timestamp, location,\n location_label, asset, amount, usd_value, notes, type, subtype)\n SELECT event_identifier, sequence_index, timestamp, location, location_label, asset,\n amount, usd_value, notes, type, subtype\n FROM history_events;\n \"\"\")\n cursor.execute('DROP TABLE history_events;')\n cursor.execute('ALTER TABLE history_events_copy RENAME TO history_events;')\n cursor.execute(\n 'UPDATE history_events SET subtype=\"reward\" WHERE type=\"staking\" AND subtype IS NULL;',\n )\n\n\ndef _remove_gitcoin(cursor: 'Cursor') -> None:\n cursor.execute('DELETE from ledger_actions WHERE identifier IN (SELECT parent_id FROM ledger_actions_gitcoin_data)') # noqa: E501\n cursor.execute('DELETE from used_query_ranges WHERE name LIKE \"gitcoingrants_%\"')\n cursor.execute('DROP TABLE IF exists gitcoin_grant_metadata')\n cursor.execute('DROP TABLE IF exists ledger_actions_gitcoin_data')\n cursor.execute('DROP TABLE IF exists gitcoin_tx_type')\n\n\ndef _add_new_tables(cursor: 'Cursor') -> None:\n cursor.execute('INSERT OR IGNORE INTO location(location, seq) VALUES (\"d\", 36)')\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS ethereum_internal_transactions (\n parent_tx_hash BLOB NOT NULL,\n trace_id INTEGER NOT NULL,\n timestamp INTEGER NOT NULL,\n block_number INTEGER NOT NULL,\n from_address TEXT NOT NULL,\n to_address TEXT,\n value TEXT NOT NULL,\n FOREIGN KEY(parent_tx_hash) REFERENCES ethereum_transactions(tx_hash) ON DELETE CASCADE ON UPDATE CASCADE,\n PRIMARY KEY(parent_tx_hash, trace_id)\n);\"\"\") # noqa: E501\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS ethtx_address_mappings (\n address TEXT NOT NULL,\n tx_hash BLOB NOT NULL,\n blockchain TEXT NOT NULL,\n FOREIGN KEY(blockchain, address) REFERENCES blockchain_accounts(blockchain, account) ON DELETE CASCADE,\n FOREIGN KEY(tx_hash) references ethereum_transactions(tx_hash) ON UPDATE CASCADE ON DELETE CASCADE,\n PRIMARY KEY (address, tx_hash, blockchain)\n);\"\"\") # noqa: E501\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS evm_tx_mappings (\n tx_hash BLOB NOT NULL,\n blockchain TEXT NOT NULL,\n value TEXT NOT NULL,\n FOREIGN KEY(tx_hash) references ethereum_transactions(tx_hash) ON UPDATE CASCADE ON DELETE CASCADE,\n PRIMARY KEY (tx_hash, value)\n);\"\"\") # noqa: E501\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS history_events_mappings (\n parent_identifier INTEGER NOT NULL,\n value TEXT NOT NULL,\n FOREIGN KEY(parent_identifier) references history_events(identifier) ON UPDATE CASCADE ON DELETE CASCADE,\n PRIMARY KEY (parent_identifier, value)\n);\"\"\") # noqa: E501\n cursor.execute(\"\"\"\n CREATE TABLE IF NOT EXISTS ens_mappings (\n address TEXT NOT NULL PRIMARY KEY,\n ens_name TEXT UNIQUE,\n last_update INTEGER NOT NULL\n);\n\"\"\")\n\n\ndef _refactor_manual_balance_id(cursor: 'Cursor') -> None:\n cursor.execute(\"\"\"\n CREATE TABLE manually_tracked_balances_copy (\n id INTEGER PRIMARY KEY,\n asset TEXT NOT NULL,\n label TEXT NOT NULL,\n amount TEXT,\n location CHAR(1) NOT NULL DEFAULT('A') REFERENCES location(location),\n category CHAR(1) NOT NULL DEFAULT('A') REFERENCES balance_category(category),\n FOREIGN KEY(asset) REFERENCES assets(identifier) ON UPDATE CASCADE\n );\"\"\")\n cursor.execute(\"\"\"\n INSERT INTO manually_tracked_balances_copy(asset, label, amount, location, category)\n SELECT asset, label, amount, location, category\n FROM manually_tracked_balances;\n \"\"\")\n cursor.execute('DROP TABLE manually_tracked_balances;')\n 
cursor.execute(\n 'ALTER TABLE manually_tracked_balances_copy RENAME TO '\n 'manually_tracked_balances;',\n )\n\n\ndef _update_fee_for_existing_trades(cursor: 'Cursor') -> None:\n cursor.execute('UPDATE trades SET fee = NULL WHERE fee_currency IS NULL')\n cursor.execute('UPDATE trades SET fee_currency = NULL WHERE fee IS NULL')\n\n\ndef _update_history_entries_from_kraken(cursor: 'Cursor') -> None:\n \"\"\"The logic for kraken was adding additional entries for trades when fee + kfee was\n being used. This function makes the state of the database consistent with the upgraded\n logic by:\n - Removing extra row additions\n - Make sure that no other event has duplicated sequence indexes\n \"\"\"\n cursor.execute(\"\"\"\n DELETE FROM history_events where location=\"B\" AND asset=\"KFEE\" AND\n type=\"trade\" AND subtype=NULL;\n \"\"\")\n cursor.execute(\"\"\"\n SELECT e.event_identifier, e.sequence_index, e.identifier from history_events e JOIN (SELECT event_identifier,\n sequence_index, COUNT(*) as cnt FROM history_events GROUP BY event_identifier, sequence_index)\n other ON e.event_identifier = other.event_identifier and e.sequence_index=other.sequence_index\n WHERE other.cnt > 1;\n \"\"\") # noqa: E501\n\n update_tuples = []\n eventid_to_indices: Dict[str, Set[int]] = defaultdict(set)\n for event_identifier, sequence_index, identifier in cursor:\n last_indices = eventid_to_indices.get(event_identifier)\n if last_indices is None:\n # Let the first one be the same as it was in the database\n eventid_to_indices[event_identifier].add(sequence_index)\n continue\n\n new_index = sequence_index + 1\n while new_index in eventid_to_indices[event_identifier]:\n new_index += 1\n eventid_to_indices[event_identifier].add(new_index)\n update_tuples.append((new_index, identifier))\n\n if len(update_tuples) != 0:\n cursor.executemany(\n 'UPDATE history_events SET sequence_index=? WHERE identifier=?',\n update_tuples,\n )\n\n\ndef _update_settings_name_for_selected_binance_markets(cursor: 'Cursor') -> None:\n cursor.execute(\"\"\"\n UPDATE user_credentials_mappings SET setting_name = ? WHERE setting_name = \"PAIRS\"\n \"\"\", (BINANCE_MARKETS_KEY,))\n\n\ndef _update_manual_balances_tags(cursor_fetch: 'Cursor', cursor_update: 'Cursor') -> None:\n manual_balances = cursor_fetch.execute('SELECT id, label FROM manually_tracked_balances')\n for balance_id, label in manual_balances:\n cursor_update.execute('UPDATE tag_mappings SET object_reference=? WHERE object_reference=?', (balance_id, label)) # noqa: E501\n\n\ndef upgrade_v31_to_v32(db: 'DBHandler') -> None:\n \"\"\"Upgrades the DB from v31 to v32\n - use new identifiers for the history_events table. 
The id will be generated by sqlite\n and will be the column rowid\n\n -Add the subtype REWARD to staking rewards (before they had type staking\n and no subtype)\n\n -Remove all gitcoin grant related data that was pulled from their API and saved in\n specific tables along with the tables themselves\n\n -Sets fee to null for existing trades if fee_currency is missing.\n \"\"\"\n primary_cursor = db.conn.cursor()\n secondary_cursor = db.conn.cursor()\n _update_history_entries_from_kraken(primary_cursor)\n _upgrade_history_events(primary_cursor)\n _remove_gitcoin(primary_cursor)\n _add_new_tables(primary_cursor)\n _refactor_manual_balance_id(primary_cursor)\n _update_fee_for_existing_trades(primary_cursor)\n _update_settings_name_for_selected_binance_markets(primary_cursor)\n _update_manual_balances_tags(cursor_fetch=primary_cursor, cursor_update=secondary_cursor)\n db.conn.commit()\n", "path": "rotkehlchen/db/upgrades/v31_v32.py"}]}
| 3,198 | 381 |
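The golden diff in this record resolves the `UNIQUE constraint failed: history_events_copy.event_identifier, history_events_copy.sequence_index` upgrade error by changing the deduplication loop in `_update_history_entries_from_kraken`: the per-event set of seen sequence indexes is now seeded from every existing row before any duplicate is reassigned, so a reassigned index is checked against all existing indexes rather than only against other duplicated rows. Below is a simplified standalone sketch of that strategy; the table and column names follow the record, while the function name and the plain `sqlite3` cursor are illustrative only.

```python
from collections import defaultdict
import sqlite3


def dedupe_sequence_indices(cursor: sqlite3.Cursor) -> None:
    # Seed the seen-index sets from *every* row so a reassigned index can
    # never collide with a row that was not itself duplicated.
    seen = defaultdict(set)
    cursor.execute('SELECT event_identifier, sequence_index FROM history_events')
    for event_identifier, sequence_index in cursor.fetchall():
        seen[event_identifier].add(sequence_index)

    # Rows whose (event_identifier, sequence_index) pair occurs more than once.
    cursor.execute("""
        SELECT e.event_identifier, e.sequence_index, e.identifier
        FROM history_events e JOIN (
            SELECT event_identifier, sequence_index, COUNT(*) AS cnt
            FROM history_events GROUP BY event_identifier, sequence_index
        ) other ON e.event_identifier = other.event_identifier
              AND e.sequence_index = other.sequence_index
        WHERE other.cnt > 1
    """)
    updates = []
    for event_identifier, sequence_index, identifier in cursor.fetchall():
        new_index = sequence_index + 1
        while new_index in seen[event_identifier]:
            new_index += 1
        seen[event_identifier].add(new_index)
        updates.append((new_index, identifier))

    if updates:
        cursor.executemany(
            'UPDATE history_events SET sequence_index=? WHERE identifier=?',
            updates,
        )
```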
gh_patches_debug_4968
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2649
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dbt clean regression
### Describe the bug
In dbt 0.16.1 `dbt clean` fails without a profile:
```bash
(dbt) dbt$ dbt --version
installed version: 0.16.1
latest version: 0.17.0
Your version of dbt is out of date! You can find instructions for upgrading here:
https://docs.getdbt.com/docs/installation
(dbt) dbt$ dbt clean
Running with dbt=0.16.1
Encountered an error while reading the project:
ERROR: Runtime Error
Could not find profile named 'profile'
Encountered an error:
Runtime Error
Could not run dbt
```
In dbt 0.15.1, `dbt clean` works.
```bash
(dbt) dbt$ dbt --version
installed version: 0.15.1
latest version: 0.17.0
Your version of dbt is out of date! You can find instructions for upgrading here:
https://docs.getdbt.com/docs/installation
(dbt) dbt$ dbt clean
Running with dbt=0.15.1
Checking target/*
Cleaned target/*
Finished cleaning all paths.
```
### Steps To Reproduce
Delete any profile found in `~/.dbt/profile.yml`.
Install 0.16.1:
```bash
pip install dbt==0.16.1
```
Navigate to dbt project:
```
dbt clean
```
Repeat for 0.15.1 to confirm regression.
### Expected behavior
I expected `dbt clean` to work without a profile. This broke some of our automated jobs when we tried to upgrade.
### System information
**Which database are you using dbt with?**
- [ ] postgres
- [X] redshift
- [ ] bigquery
- [ ] snowflake
- [ ] other (specify: ____________)
**The output of `dbt --version`:**
Multiple versions. See above.
**The operating system you're using:**
macOS 10.14.6
**The output of `python --version`:**
```
(dbt) dbt$ python --version
Python 3.7.3
```
### Additional context
Most people probably don't run `dbt clean` without a profile, but it was causing us confusion, so we wanted to document it as a breaking change at least.
I also tested this with 0.17.0: same error as 0.16.1.
```
(dbt) dbt$ dbt --version
installed version: 0.17.0
latest version: 0.17.0
Up to date!
Plugins:
- bigquery: 0.17.0
- snowflake: 0.17.0
- redshift: 0.17.0
- postgres: 0.17.0
(dbt) dbt$ dbt clean
Running with dbt=0.17.0
Encountered an error while reading the project:
ERROR: Runtime Error
Could not find profile named 'profile'
Encountered an error:
Runtime Error
Could not run dbt
```
</issue>
<code>
[start of core/dbt/task/clean.py]
1 import os.path
2 import os
3 import shutil
4
5 from dbt.task.base import ConfiguredTask
6 from dbt.logger import GLOBAL_LOGGER as logger
7
8
9 class CleanTask(ConfiguredTask):
10
11 def __is_project_path(self, path):
12 proj_path = os.path.abspath('.')
13 return not os.path.commonprefix(
14 [proj_path, os.path.abspath(path)]
15 ) == proj_path
16
17 def __is_protected_path(self, path):
18 """
19 This function identifies protected paths, so as not to clean them.
20 """
21 abs_path = os.path.abspath(path)
22 protected_paths = self.config.source_paths + \
23 self.config.test_paths + ['.']
24 protected_abs_paths = [os.path.abspath(p) for p in protected_paths]
25 return abs_path in set(protected_abs_paths) or \
26 self.__is_project_path(abs_path)
27
28 def run(self):
29 """
30 This function takes all the paths in the target file
31 and cleans the project paths that are not protected.
32 """
33 for path in self.config.clean_targets:
34 logger.info("Checking {}/*".format(path))
35 if not self.__is_protected_path(path):
36 shutil.rmtree(path, True)
37 logger.info(" Cleaned {}/*".format(path))
38 else:
39 logger.info("ERROR: not cleaning {}/* because it is "
40 "protected".format(path))
41 logger.info("Finished cleaning all paths.")
42
[end of core/dbt/task/clean.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/dbt/task/clean.py b/core/dbt/task/clean.py
--- a/core/dbt/task/clean.py
+++ b/core/dbt/task/clean.py
@@ -2,11 +2,13 @@
import os
import shutil
-from dbt.task.base import ConfiguredTask
+from dbt.task.base import BaseTask
from dbt.logger import GLOBAL_LOGGER as logger
+from dbt.config import UnsetProfileConfig
-class CleanTask(ConfiguredTask):
+class CleanTask(BaseTask):
+ ConfigType = UnsetProfileConfig
def __is_project_path(self, path):
proj_path = os.path.abspath('.')
|
{"golden_diff": "diff --git a/core/dbt/task/clean.py b/core/dbt/task/clean.py\n--- a/core/dbt/task/clean.py\n+++ b/core/dbt/task/clean.py\n@@ -2,11 +2,13 @@\n import os\n import shutil\n \n-from dbt.task.base import ConfiguredTask\n+from dbt.task.base import BaseTask\n from dbt.logger import GLOBAL_LOGGER as logger\n+from dbt.config import UnsetProfileConfig\n \n \n-class CleanTask(ConfiguredTask):\n+class CleanTask(BaseTask):\n+ ConfigType = UnsetProfileConfig\n \n def __is_project_path(self, path):\n proj_path = os.path.abspath('.')\n", "issue": "dbt clean regression\n### Describe the bug\r\nIn dbt 0.16.1 `dbt clean` fails without a profile: \r\n\r\n```bash\r\n(dbt) dbt$ dbt --version\r\ninstalled version: 0.16.1\r\n latest version: 0.17.0\r\n\r\nYour version of dbt is out of date! You can find instructions for upgrading here:\r\nhttps://docs.getdbt.com/docs/installation\r\n(dbt) dbt$ dbt clean\r\nRunning with dbt=0.16.1\r\nEncountered an error while reading the project:\r\n ERROR: Runtime Error\r\n Could not find profile named 'profile'\r\nEncountered an error:\r\nRuntime Error\r\n Could not run dbt\r\n```\r\n\r\nIn dbt 0.15.1, `dbt clean` works.\r\n\r\n```bash\r\n(dbt) dbt$ dbt --version\r\ninstalled version: 0.15.1\r\n latest version: 0.17.0\r\n\r\nYour version of dbt is out of date! You can find instructions for upgrading here:\r\nhttps://docs.getdbt.com/docs/installation\r\n(dbt) dbt$ dbt clean\r\nRunning with dbt=0.15.1\r\nChecking target/*\r\n Cleaned target/*\r\nFinished cleaning all paths.\r\n```\r\n\r\n### Steps To Reproduce\r\nDelete any profile found in `~/.dbt/profile.yml`. \r\n\r\nInstall 0.16.1:\r\n```bash\r\npip install dbt==0.16.1\r\n```\r\nNavigate to dbt project:\r\n```\r\ndbt clean\r\n```\r\n\r\nRepeat for 0.15.1 to confirm regression.\r\n\r\n### Expected behavior\r\nI expected `dbt clean` to work without a profile. This broke some of our automated jobs when we tried to upgrade.\r\n\r\n### System information\r\n**Which database are you using dbt with?**\r\n- [ ] postgres\r\n- [X] redshift\r\n- [ ] bigquery\r\n- [ ] snowflake\r\n- [ ] other (specify: ____________)\r\n\r\n\r\n**The output of `dbt --version`:**\r\nMultiple versions. 
See above.\r\n\r\n**The operating system you're using:**\r\nmacOS 10.14.6\r\n\r\n**The output of `python --version`:**\r\n```\r\n(dbt) dbt$ python --version\r\nPython 3.7.3\r\n```\r\n\r\n### Additional context\r\nMost people probably don't run `dbt clean` without a profile, but it was causing us confusion, so wanted to document it as a breaking change at least.\r\n\r\nI also tested this with 0.17.0: same error as 0.16.1.\r\n\r\n```\r\n(dbt) dbt$ dbt --version\r\ninstalled version: 0.17.0\r\n latest version: 0.17.0\r\n\r\nUp to date!\r\n\r\nPlugins:\r\n - bigquery: 0.17.0\r\n - snowflake: 0.17.0\r\n - redshift: 0.17.0\r\n - postgres: 0.17.0\r\n(dbt) dbt$ dbt clean\r\nRunning with dbt=0.17.0\r\nEncountered an error while reading the project:\r\n ERROR: Runtime Error\r\n Could not find profile named 'profile'\r\nEncountered an error:\r\nRuntime Error\r\n Could not run dbt\r\n```\r\n\n", "before_files": [{"content": "import os.path\nimport os\nimport shutil\n\nfrom dbt.task.base import ConfiguredTask\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\n\nclass CleanTask(ConfiguredTask):\n\n def __is_project_path(self, path):\n proj_path = os.path.abspath('.')\n return not os.path.commonprefix(\n [proj_path, os.path.abspath(path)]\n ) == proj_path\n\n def __is_protected_path(self, path):\n \"\"\"\n This function identifies protected paths, so as not to clean them.\n \"\"\"\n abs_path = os.path.abspath(path)\n protected_paths = self.config.source_paths + \\\n self.config.test_paths + ['.']\n protected_abs_paths = [os.path.abspath(p) for p in protected_paths]\n return abs_path in set(protected_abs_paths) or \\\n self.__is_project_path(abs_path)\n\n def run(self):\n \"\"\"\n This function takes all the paths in the target file\n and cleans the project paths that are not protected.\n \"\"\"\n for path in self.config.clean_targets:\n logger.info(\"Checking {}/*\".format(path))\n if not self.__is_protected_path(path):\n shutil.rmtree(path, True)\n logger.info(\" Cleaned {}/*\".format(path))\n else:\n logger.info(\"ERROR: not cleaning {}/* because it is \"\n \"protected\".format(path))\n logger.info(\"Finished cleaning all paths.\")\n", "path": "core/dbt/task/clean.py"}]}
| 1,636 | 142 |
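The regression in this record comes from `CleanTask` inheriting `ConfiguredTask`, which builds a full runtime config and therefore needs a resolvable profile even though cleaning only touches project paths. The golden diff swaps the base class for `BaseTask` and sets `ConfigType = UnsetProfileConfig` so the project can be read without a profile. Below is a sketch of the top of `core/dbt/task/clean.py` after the patch, assuming the 0.17-era module layout shown in the record; the three methods of the class are unchanged and omitted here.

```python
import os.path
import os
import shutil

from dbt.task.base import BaseTask
from dbt.logger import GLOBAL_LOGGER as logger
from dbt.config import UnsetProfileConfig


class CleanTask(BaseTask):
    # Loading an UnsetProfileConfig means `dbt clean` can resolve the
    # project's clean_targets without a usable profiles.yml entry.
    ConfigType = UnsetProfileConfig

    # __is_project_path, __is_protected_path and run() stay as in the
    # original file shown above.
```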
gh_patches_debug_39629
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-7411
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in `skimage.measure.find_contours`
### Description:
There is a typo in the `skimage.measure.find_contours` docstring:
Uses the “marching squares” method to compute **a the** iso-valued contours
### Way to reproduce:
_No response_
### Version information:
_No response_
</issue>
<code>
[start of skimage/measure/_find_contours.py]
1 import numpy as np
2
3 from ._find_contours_cy import _get_contour_segments
4
5 from collections import deque
6
7 _param_options = ('high', 'low')
8
9
10 def find_contours(
11 image, level=None, fully_connected='low', positive_orientation='low', *, mask=None
12 ):
13 """Find iso-valued contours in a 2D array for a given level value.
14
15 Uses the "marching squares" method to compute a the iso-valued contours of
16 the input 2D array for a particular level value. Array values are linearly
17 interpolated to provide better precision for the output contours.
18
19 Parameters
20 ----------
21 image : (M, N) ndarray of double
22 Input image in which to find contours.
23 level : float, optional
24 Value along which to find contours in the array. By default, the level
25 is set to (max(image) + min(image)) / 2
26
27 .. versionchanged:: 0.18
28 This parameter is now optional.
29 fully_connected : str, {'low', 'high'}
30 Indicates whether array elements below the given level value are to be
31 considered fully-connected (and hence elements above the value will
32 only be face connected), or vice-versa. (See notes below for details.)
33 positive_orientation : str, {'low', 'high'}
34 Indicates whether the output contours will produce positively-oriented
35 polygons around islands of low- or high-valued elements. If 'low' then
36 contours will wind counter- clockwise around elements below the
37 iso-value. Alternately, this means that low-valued elements are always
38 on the left of the contour. (See below for details.)
39 mask : (M, N) ndarray of bool or None
40 A boolean mask, True where we want to draw contours.
41 Note that NaN values are always excluded from the considered region
42 (``mask`` is set to ``False`` wherever ``array`` is ``NaN``).
43
44 Returns
45 -------
46 contours : list of (K, 2) ndarrays
47 Each contour is a ndarray of ``(row, column)`` coordinates along the contour.
48
49 See Also
50 --------
51 skimage.measure.marching_cubes
52
53 Notes
54 -----
55 The marching squares algorithm is a special case of the marching cubes
56 algorithm [1]_. A simple explanation is available here:
57
58 http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
59
60 There is a single ambiguous case in the marching squares algorithm: when
61 a given ``2 x 2``-element square has two high-valued and two low-valued
62 elements, each pair diagonally adjacent. (Where high- and low-valued is
63 with respect to the contour value sought.) In this case, either the
64 high-valued elements can be 'connected together' via a thin isthmus that
65 separates the low-valued elements, or vice-versa. When elements are
66 connected together across a diagonal, they are considered 'fully
67 connected' (also known as 'face+vertex-connected' or '8-connected'). Only
68 high-valued or low-valued elements can be fully-connected, the other set
69 will be considered as 'face-connected' or '4-connected'. By default,
70 low-valued elements are considered fully-connected; this can be altered
71 with the 'fully_connected' parameter.
72
73 Output contours are not guaranteed to be closed: contours which intersect
74 the array edge or a masked-off region (either where mask is False or where
75 array is NaN) will be left open. All other contours will be closed. (The
76 closed-ness of a contours can be tested by checking whether the beginning
77 point is the same as the end point.)
78
79 Contours are oriented. By default, array values lower than the contour
80 value are to the left of the contour and values greater than the contour
81 value are to the right. This means that contours will wind
82 counter-clockwise (i.e. in 'positive orientation') around islands of
83 low-valued pixels. This behavior can be altered with the
84 'positive_orientation' parameter.
85
86 The order of the contours in the output list is determined by the position
87 of the smallest ``x,y`` (in lexicographical order) coordinate in the
88 contour. This is a side-effect of how the input array is traversed, but
89 can be relied upon.
90
91 .. warning::
92
93 Array coordinates/values are assumed to refer to the *center* of the
94 array element. Take a simple example input: ``[0, 1]``. The interpolated
95 position of 0.5 in this array is midway between the 0-element (at
96 ``x=0``) and the 1-element (at ``x=1``), and thus would fall at
97 ``x=0.5``.
98
99 This means that to find reasonable contours, it is best to find contours
100 midway between the expected "light" and "dark" values. In particular,
101 given a binarized array, *do not* choose to find contours at the low or
102 high value of the array. This will often yield degenerate contours,
103 especially around structures that are a single array element wide. Instead
104 choose a middle value, as above.
105
106 References
107 ----------
108 .. [1] Lorensen, William and Harvey E. Cline. Marching Cubes: A High
109 Resolution 3D Surface Construction Algorithm. Computer Graphics
110 (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170).
111 :DOI:`10.1145/37401.37422`
112
113 Examples
114 --------
115 >>> a = np.zeros((3, 3))
116 >>> a[0, 0] = 1
117 >>> a
118 array([[1., 0., 0.],
119 [0., 0., 0.],
120 [0., 0., 0.]])
121 >>> find_contours(a, 0.5)
122 [array([[0. , 0.5],
123 [0.5, 0. ]])]
124 """
125 if fully_connected not in _param_options:
126 raise ValueError(
127 'Parameters "fully_connected" must be either ' '"high" or "low".'
128 )
129 if positive_orientation not in _param_options:
130 raise ValueError(
131 'Parameters "positive_orientation" must be either ' '"high" or "low".'
132 )
133 if image.shape[0] < 2 or image.shape[1] < 2:
134 raise ValueError("Input array must be at least 2x2.")
135 if image.ndim != 2:
136 raise ValueError('Only 2D arrays are supported.')
137 if mask is not None:
138 if mask.shape != image.shape:
139 raise ValueError('Parameters "array" and "mask"' ' must have same shape.')
140 if not np.can_cast(mask.dtype, bool, casting='safe'):
141 raise TypeError('Parameter "mask" must be a binary array.')
142 mask = mask.astype(np.uint8, copy=False)
143 if level is None:
144 level = (np.nanmin(image) + np.nanmax(image)) / 2.0
145
146 segments = _get_contour_segments(
147 image.astype(np.float64), float(level), fully_connected == 'high', mask=mask
148 )
149 contours = _assemble_contours(segments)
150 if positive_orientation == 'high':
151 contours = [c[::-1] for c in contours]
152 return contours
153
154
155 def _assemble_contours(segments):
156 current_index = 0
157 contours = {}
158 starts = {}
159 ends = {}
160 for from_point, to_point in segments:
161 # Ignore degenerate segments.
162 # This happens when (and only when) one vertex of the square is
163 # exactly the contour level, and the rest are above or below.
164 # This degenerate vertex will be picked up later by neighboring
165 # squares.
166 if from_point == to_point:
167 continue
168
169 tail, tail_num = starts.pop(to_point, (None, None))
170 head, head_num = ends.pop(from_point, (None, None))
171
172 if tail is not None and head is not None:
173 # We need to connect these two contours.
174 if tail is head:
175 # We need to closed a contour: add the end point
176 head.append(to_point)
177 else: # tail is not head
178 # We need to join two distinct contours.
179 # We want to keep the first contour segment created, so that
180 # the final contours are ordered left->right, top->bottom.
181 if tail_num > head_num:
182 # tail was created second. Append tail to head.
183 head.extend(tail)
184 # Remove tail from the detected contours
185 contours.pop(tail_num, None)
186 # Update starts and ends
187 starts[head[0]] = (head, head_num)
188 ends[head[-1]] = (head, head_num)
189 else: # tail_num <= head_num
190 # head was created second. Prepend head to tail.
191 tail.extendleft(reversed(head))
192 # Remove head from the detected contours
193 starts.pop(head[0], None) # head[0] can be == to_point!
194 contours.pop(head_num, None)
195 # Update starts and ends
196 starts[tail[0]] = (tail, tail_num)
197 ends[tail[-1]] = (tail, tail_num)
198 elif tail is None and head is None:
199 # We need to add a new contour
200 new_contour = deque((from_point, to_point))
201 contours[current_index] = new_contour
202 starts[from_point] = (new_contour, current_index)
203 ends[to_point] = (new_contour, current_index)
204 current_index += 1
205 elif head is None: # tail is not None
206 # tail first element is to_point: the new segment should be
207 # prepended.
208 tail.appendleft(from_point)
209 # Update starts
210 starts[from_point] = (tail, tail_num)
211 else: # tail is None and head is not None:
212 # head last element is from_point: the new segment should be
213 # appended
214 head.append(to_point)
215 # Update ends
216 ends[to_point] = (head, head_num)
217
218 return [np.array(contour) for _, contour in sorted(contours.items())]
219
[end of skimage/measure/_find_contours.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/measure/_find_contours.py b/skimage/measure/_find_contours.py
--- a/skimage/measure/_find_contours.py
+++ b/skimage/measure/_find_contours.py
@@ -12,7 +12,7 @@
):
"""Find iso-valued contours in a 2D array for a given level value.
- Uses the "marching squares" method to compute a the iso-valued contours of
+ Uses the "marching squares" method to compute the iso-valued contours of
the input 2D array for a particular level value. Array values are linearly
interpolated to provide better precision for the output contours.
@@ -33,7 +33,7 @@
positive_orientation : str, {'low', 'high'}
Indicates whether the output contours will produce positively-oriented
polygons around islands of low- or high-valued elements. If 'low' then
- contours will wind counter- clockwise around elements below the
+ contours will wind counter-clockwise around elements below the
iso-value. Alternately, this means that low-valued elements are always
on the left of the contour. (See below for details.)
mask : (M, N) ndarray of bool or None
@@ -55,7 +55,7 @@
The marching squares algorithm is a special case of the marching cubes
algorithm [1]_. A simple explanation is available here:
- http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
+ https://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html
There is a single ambiguous case in the marching squares algorithm: when
a given ``2 x 2``-element square has two high-valued and two low-valued
@@ -85,7 +85,7 @@
The order of the contours in the output list is determined by the position
of the smallest ``x,y`` (in lexicographical order) coordinate in the
- contour. This is a side-effect of how the input array is traversed, but
+ contour. This is a side effect of how the input array is traversed, but
can be relied upon.
.. warning::
@@ -100,7 +100,7 @@
midway between the expected "light" and "dark" values. In particular,
given a binarized array, *do not* choose to find contours at the low or
high value of the array. This will often yield degenerate contours,
- especially around structures that are a single array element wide. Instead
+ especially around structures that are a single array element wide. Instead,
choose a middle value, as above.
References
|
{"golden_diff": "diff --git a/skimage/measure/_find_contours.py b/skimage/measure/_find_contours.py\n--- a/skimage/measure/_find_contours.py\n+++ b/skimage/measure/_find_contours.py\n@@ -12,7 +12,7 @@\n ):\n \"\"\"Find iso-valued contours in a 2D array for a given level value.\n \n- Uses the \"marching squares\" method to compute a the iso-valued contours of\n+ Uses the \"marching squares\" method to compute the iso-valued contours of\n the input 2D array for a particular level value. Array values are linearly\n interpolated to provide better precision for the output contours.\n \n@@ -33,7 +33,7 @@\n positive_orientation : str, {'low', 'high'}\n Indicates whether the output contours will produce positively-oriented\n polygons around islands of low- or high-valued elements. If 'low' then\n- contours will wind counter- clockwise around elements below the\n+ contours will wind counter-clockwise around elements below the\n iso-value. Alternately, this means that low-valued elements are always\n on the left of the contour. (See below for details.)\n mask : (M, N) ndarray of bool or None\n@@ -55,7 +55,7 @@\n The marching squares algorithm is a special case of the marching cubes\n algorithm [1]_. A simple explanation is available here:\n \n- http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html\n+ https://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html\n \n There is a single ambiguous case in the marching squares algorithm: when\n a given ``2 x 2``-element square has two high-valued and two low-valued\n@@ -85,7 +85,7 @@\n \n The order of the contours in the output list is determined by the position\n of the smallest ``x,y`` (in lexicographical order) coordinate in the\n- contour. This is a side-effect of how the input array is traversed, but\n+ contour. This is a side effect of how the input array is traversed, but\n can be relied upon.\n \n .. warning::\n@@ -100,7 +100,7 @@\n midway between the expected \"light\" and \"dark\" values. In particular,\n given a binarized array, *do not* choose to find contours at the low or\n high value of the array. This will often yield degenerate contours,\n- especially around structures that are a single array element wide. Instead\n+ especially around structures that are a single array element wide. Instead,\n choose a middle value, as above.\n \n References\n", "issue": "Typo in `skimage.measure.find_contours`\n### Description:\n\nThere is a typo in the `skimage.measure.find_contours` docstring: \r\nUses the \u201cmarching squares\u201d method to compute **a the** iso-valued contours\n\n### Way to reproduce:\n\n_No response_\n\n### Version information:\n\n_No response_\n", "before_files": [{"content": "import numpy as np\n\nfrom ._find_contours_cy import _get_contour_segments\n\nfrom collections import deque\n\n_param_options = ('high', 'low')\n\n\ndef find_contours(\n image, level=None, fully_connected='low', positive_orientation='low', *, mask=None\n):\n \"\"\"Find iso-valued contours in a 2D array for a given level value.\n\n Uses the \"marching squares\" method to compute a the iso-valued contours of\n the input 2D array for a particular level value. Array values are linearly\n interpolated to provide better precision for the output contours.\n\n Parameters\n ----------\n image : (M, N) ndarray of double\n Input image in which to find contours.\n level : float, optional\n Value along which to find contours in the array. By default, the level\n is set to (max(image) + min(image)) / 2\n\n .. 
versionchanged:: 0.18\n This parameter is now optional.\n fully_connected : str, {'low', 'high'}\n Indicates whether array elements below the given level value are to be\n considered fully-connected (and hence elements above the value will\n only be face connected), or vice-versa. (See notes below for details.)\n positive_orientation : str, {'low', 'high'}\n Indicates whether the output contours will produce positively-oriented\n polygons around islands of low- or high-valued elements. If 'low' then\n contours will wind counter- clockwise around elements below the\n iso-value. Alternately, this means that low-valued elements are always\n on the left of the contour. (See below for details.)\n mask : (M, N) ndarray of bool or None\n A boolean mask, True where we want to draw contours.\n Note that NaN values are always excluded from the considered region\n (``mask`` is set to ``False`` wherever ``array`` is ``NaN``).\n\n Returns\n -------\n contours : list of (K, 2) ndarrays\n Each contour is a ndarray of ``(row, column)`` coordinates along the contour.\n\n See Also\n --------\n skimage.measure.marching_cubes\n\n Notes\n -----\n The marching squares algorithm is a special case of the marching cubes\n algorithm [1]_. A simple explanation is available here:\n\n http://users.polytech.unice.fr/~lingrand/MarchingCubes/algo.html\n\n There is a single ambiguous case in the marching squares algorithm: when\n a given ``2 x 2``-element square has two high-valued and two low-valued\n elements, each pair diagonally adjacent. (Where high- and low-valued is\n with respect to the contour value sought.) In this case, either the\n high-valued elements can be 'connected together' via a thin isthmus that\n separates the low-valued elements, or vice-versa. When elements are\n connected together across a diagonal, they are considered 'fully\n connected' (also known as 'face+vertex-connected' or '8-connected'). Only\n high-valued or low-valued elements can be fully-connected, the other set\n will be considered as 'face-connected' or '4-connected'. By default,\n low-valued elements are considered fully-connected; this can be altered\n with the 'fully_connected' parameter.\n\n Output contours are not guaranteed to be closed: contours which intersect\n the array edge or a masked-off region (either where mask is False or where\n array is NaN) will be left open. All other contours will be closed. (The\n closed-ness of a contours can be tested by checking whether the beginning\n point is the same as the end point.)\n\n Contours are oriented. By default, array values lower than the contour\n value are to the left of the contour and values greater than the contour\n value are to the right. This means that contours will wind\n counter-clockwise (i.e. in 'positive orientation') around islands of\n low-valued pixels. This behavior can be altered with the\n 'positive_orientation' parameter.\n\n The order of the contours in the output list is determined by the position\n of the smallest ``x,y`` (in lexicographical order) coordinate in the\n contour. This is a side-effect of how the input array is traversed, but\n can be relied upon.\n\n .. warning::\n\n Array coordinates/values are assumed to refer to the *center* of the\n array element. Take a simple example input: ``[0, 1]``. 
The interpolated\n position of 0.5 in this array is midway between the 0-element (at\n ``x=0``) and the 1-element (at ``x=1``), and thus would fall at\n ``x=0.5``.\n\n This means that to find reasonable contours, it is best to find contours\n midway between the expected \"light\" and \"dark\" values. In particular,\n given a binarized array, *do not* choose to find contours at the low or\n high value of the array. This will often yield degenerate contours,\n especially around structures that are a single array element wide. Instead\n choose a middle value, as above.\n\n References\n ----------\n .. [1] Lorensen, William and Harvey E. Cline. Marching Cubes: A High\n Resolution 3D Surface Construction Algorithm. Computer Graphics\n (SIGGRAPH 87 Proceedings) 21(4) July 1987, p. 163-170).\n :DOI:`10.1145/37401.37422`\n\n Examples\n --------\n >>> a = np.zeros((3, 3))\n >>> a[0, 0] = 1\n >>> a\n array([[1., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]])\n >>> find_contours(a, 0.5)\n [array([[0. , 0.5],\n [0.5, 0. ]])]\n \"\"\"\n if fully_connected not in _param_options:\n raise ValueError(\n 'Parameters \"fully_connected\" must be either ' '\"high\" or \"low\".'\n )\n if positive_orientation not in _param_options:\n raise ValueError(\n 'Parameters \"positive_orientation\" must be either ' '\"high\" or \"low\".'\n )\n if image.shape[0] < 2 or image.shape[1] < 2:\n raise ValueError(\"Input array must be at least 2x2.\")\n if image.ndim != 2:\n raise ValueError('Only 2D arrays are supported.')\n if mask is not None:\n if mask.shape != image.shape:\n raise ValueError('Parameters \"array\" and \"mask\"' ' must have same shape.')\n if not np.can_cast(mask.dtype, bool, casting='safe'):\n raise TypeError('Parameter \"mask\" must be a binary array.')\n mask = mask.astype(np.uint8, copy=False)\n if level is None:\n level = (np.nanmin(image) + np.nanmax(image)) / 2.0\n\n segments = _get_contour_segments(\n image.astype(np.float64), float(level), fully_connected == 'high', mask=mask\n )\n contours = _assemble_contours(segments)\n if positive_orientation == 'high':\n contours = [c[::-1] for c in contours]\n return contours\n\n\ndef _assemble_contours(segments):\n current_index = 0\n contours = {}\n starts = {}\n ends = {}\n for from_point, to_point in segments:\n # Ignore degenerate segments.\n # This happens when (and only when) one vertex of the square is\n # exactly the contour level, and the rest are above or below.\n # This degenerate vertex will be picked up later by neighboring\n # squares.\n if from_point == to_point:\n continue\n\n tail, tail_num = starts.pop(to_point, (None, None))\n head, head_num = ends.pop(from_point, (None, None))\n\n if tail is not None and head is not None:\n # We need to connect these two contours.\n if tail is head:\n # We need to closed a contour: add the end point\n head.append(to_point)\n else: # tail is not head\n # We need to join two distinct contours.\n # We want to keep the first contour segment created, so that\n # the final contours are ordered left->right, top->bottom.\n if tail_num > head_num:\n # tail was created second. Append tail to head.\n head.extend(tail)\n # Remove tail from the detected contours\n contours.pop(tail_num, None)\n # Update starts and ends\n starts[head[0]] = (head, head_num)\n ends[head[-1]] = (head, head_num)\n else: # tail_num <= head_num\n # head was created second. 
Prepend head to tail.\n tail.extendleft(reversed(head))\n # Remove head from the detected contours\n starts.pop(head[0], None) # head[0] can be == to_point!\n contours.pop(head_num, None)\n # Update starts and ends\n starts[tail[0]] = (tail, tail_num)\n ends[tail[-1]] = (tail, tail_num)\n elif tail is None and head is None:\n # We need to add a new contour\n new_contour = deque((from_point, to_point))\n contours[current_index] = new_contour\n starts[from_point] = (new_contour, current_index)\n ends[to_point] = (new_contour, current_index)\n current_index += 1\n elif head is None: # tail is not None\n # tail first element is to_point: the new segment should be\n # prepended.\n tail.appendleft(from_point)\n # Update starts\n starts[from_point] = (tail, tail_num)\n else: # tail is None and head is not None:\n # head last element is from_point: the new segment should be\n # appended\n head.append(to_point)\n # Update ends\n ends[to_point] = (head, head_num)\n\n return [np.array(contour) for _, contour in sorted(contours.items())]\n", "path": "skimage/measure/_find_contours.py"}]}
| 3,439 | 612 |
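Although this record only corrects docstring wording, the docstring itself describes usage details that are easy to get wrong: the contour level should sit midway between the low and high values, and each returned contour is a `(K, 2)` array of `(row, column)` coordinates. A short usage sketch based only on the documented API, with illustrative array values:

```python
import numpy as np
from skimage.measure import find_contours

# A binary blob; per the docstring, pick a level midway between the low (0)
# and high (1) values rather than at either extreme.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# level defaults to (min + max) / 2, i.e. 0.5 here; passed explicitly for clarity.
contours = find_contours(image, level=0.5, fully_connected='low',
                         positive_orientation='low')
for contour in contours:
    rows, cols = contour[:, 0], contour[:, 1]  # (row, column) coordinates
    print(contour.shape, rows.min(), cols.min())
```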
gh_patches_debug_19311
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-3617
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No registration message no longer accepts links
### Describe the bug
Since #2458 the event registration status (specifically no_registration_message) doesn't support URLs any more on the website (app is unaltered I think). I also think the formatting that happens now might be a bit risky.
### How to reproduce
Make an event with a URL in the no_registration_message and view it.
### Expected behaviour
A nice link.
### Screenshots
<img width="627" alt="afbeelding" src="https://github.com/svthalia/concrexit/assets/41264528/66d61a15-0ceb-43bf-aa5a-a6536a02739d">
</issue>
<code>
[start of website/events/admin/event.py]
1 """Registers admin interfaces for the event model."""
2
3 from django.contrib import admin, messages
4 from django.template.defaultfilters import date as _date
5 from django.urls import path, reverse
6 from django.utils import timezone
7 from django.utils.html import format_html
8 from django.utils.translation import gettext_lazy as _
9
10 from events import emails, models, services
11 from events.admin.filters import LectureYearFilter
12 from events.admin.forms import EventAdminForm, RegistrationInformationFieldForm
13 from events.admin.inlines import (
14 PizzaEventInline,
15 PromotionRequestInline,
16 RegistrationInformationFieldInline,
17 )
18 from events.admin.views import (
19 EventAdminDetails,
20 EventMarkPresentQR,
21 EventRegistrationsExport,
22 )
23 from utils.admin import DoNextModelAdmin
24
25
26 @admin.register(models.Event)
27 class EventAdmin(DoNextModelAdmin):
28 """Manage the events."""
29
30 form = EventAdminForm
31
32 inlines = (
33 RegistrationInformationFieldInline,
34 PizzaEventInline,
35 PromotionRequestInline,
36 )
37
38 list_display = (
39 "overview_link",
40 "event_date",
41 "registration_date",
42 "num_participants",
43 "get_organisers",
44 "category",
45 "published",
46 "edit_link",
47 )
48 list_display_links = ("edit_link",)
49 list_filter = (LectureYearFilter, "start", "published", "category")
50 actions = ("make_published", "make_unpublished")
51 date_hierarchy = "start"
52 search_fields = ("title", "description")
53 prepopulated_fields = {
54 "map_location": ("location",),
55 }
56
57 filter_horizontal = ("documents", "organisers")
58
59 fieldsets = (
60 (
61 _("General"),
62 {
63 "fields": (
64 "title",
65 "slug",
66 "published",
67 "organisers",
68 )
69 },
70 ),
71 (
72 _("Detail"),
73 {
74 "fields": (
75 "category",
76 "start",
77 "end",
78 "description",
79 "caption",
80 "location",
81 "map_location",
82 "show_map_location",
83 ),
84 "classes": ("collapse", "start-open"),
85 },
86 ),
87 (
88 _("Registrations"),
89 {
90 "fields": (
91 "price",
92 "fine",
93 "tpay_allowed",
94 "max_participants",
95 "registration_without_membership",
96 "registration_start",
97 "registration_end",
98 "cancel_deadline",
99 "send_cancel_email",
100 "optional_registrations",
101 "no_registration_message",
102 ),
103 "classes": ("collapse",),
104 },
105 ),
106 (
107 _("Extra"),
108 {"fields": ("documents", "shift"), "classes": ("collapse",)},
109 ),
110 )
111
112 def get_queryset(self, request):
113 queryset = (
114 super()
115 .get_queryset(request)
116 .select_properties("participant_count")
117 .prefetch_related("organisers")
118 )
119 if not (
120 request.user.has_perm("events.override_organiser")
121 or request.user.has_perm("events.view_unpublished")
122 ):
123 queryset_published = queryset.filter(published=True)
124 queryset_unpublished = queryset.filter(
125 published=False,
126 organisers__in=list(
127 request.member.get_member_groups().values_list("id", flat=True)
128 ),
129 )
130 queryset = queryset_published | queryset_unpublished
131 return queryset
132
133 def get_form(self, request, obj=None, change=False, **kwargs):
134 form = super().get_form(request, obj, change, **kwargs)
135 form.request = request
136 return form
137
138 def overview_link(self, obj):
139 return format_html(
140 '<a href="{link}">{title}</a>',
141 link=reverse("admin:events_event_details", kwargs={"pk": obj.pk}),
142 title=obj.title,
143 )
144
145 def has_delete_permission(self, request, obj=None):
146 """Only allow deleting an event if the user is an organiser."""
147 if obj is not None and not services.is_organiser(request.member, obj):
148 return False
149 return super().has_delete_permission(request, obj)
150
151 def has_change_permission(self, request, obj=None):
152 """Only allow access to the change form if the user is an organiser."""
153 if obj is not None and not services.is_organiser(request.member, obj):
154 return False
155 return super().has_change_permission(request, obj)
156
157 def event_date(self, obj):
158 event_date = timezone.make_naive(obj.start)
159 return _date(event_date, "l d b Y, G:i")
160
161 event_date.short_description = _("Event Date")
162 event_date.admin_order_field = "start"
163
164 def registration_date(self, obj):
165 if obj.registration_start is not None:
166 start_date = timezone.make_naive(obj.registration_start)
167 else:
168 start_date = obj.registration_start
169
170 return _date(start_date, "l d b Y, G:i")
171
172 registration_date.short_description = _("Registration Start")
173 registration_date.admin_order_field = "registration_start"
174
175 def edit_link(self, obj):
176 return _("Edit")
177
178 edit_link.short_description = ""
179
180 def num_participants(self, obj):
181 """Pretty-print the number of participants."""
182 num = obj.participant_count # prefetched aggregateproperty
183 if not obj.max_participants:
184 return f"{num}/∞"
185 return f"{num}/{obj.max_participants}"
186
187 num_participants.short_description = _("Number of participants")
188
189 def get_organisers(self, obj):
190 return ", ".join(str(o) for o in obj.organisers.all())
191
192 get_organisers.short_description = _("Organisers")
193
194 def make_published(self, request, queryset):
195 """Change the status of the event to published."""
196 self._change_published(request, queryset, True)
197
198 make_published.short_description = _("Publish selected events")
199
200 def make_unpublished(self, request, queryset):
201 """Change the status of the event to unpublished."""
202 self._change_published(request, queryset, False)
203
204 make_unpublished.short_description = _("Unpublish selected events")
205
206 @staticmethod
207 def _change_published(request, queryset, published):
208 if not request.user.is_superuser:
209 queryset = queryset.filter(
210 organisers__in=request.member.get_member_groups()
211 )
212 queryset.update(published=published)
213
214 def save_formset(self, request, form, formset, change):
215 """Save formsets with their order."""
216 formset.save()
217
218 informationfield_forms = (
219 x
220 for x in formset.forms
221 if isinstance(x, RegistrationInformationFieldForm)
222 and "DELETE" not in x.changed_data
223 )
224 form.instance.set_registrationinformationfield_order(
225 [
226 f.instance.pk
227 for f in sorted(
228 informationfield_forms,
229 key=lambda x: (x.cleaned_data["order"], x.instance.pk),
230 )
231 ]
232 )
233 form.instance.save()
234
235 def save_model(self, request, obj, form, change):
236 if change and "max_participants" in form.changed_data:
237 prev = self.model.objects.get(id=obj.id)
238 prev_limit = prev.max_participants
239 self_limit = obj.max_participants
240 if prev_limit is None:
241 prev_limit = prev.participant_count
242 if self_limit is None:
243 self_limit = obj.participant_count
244
245 if prev_limit < self_limit and prev_limit < obj.participant_count:
246 diff = self_limit - prev_limit
247 joiners = prev.queue[:diff]
248 for registration in joiners:
249 emails.notify_waiting(obj, registration)
250 messages.info(
251 request,
252 "The maximum number of participants was increased. Any members that moved from the waiting list to the participants list have been notified.",
253 )
254 elif self_limit < prev_limit and self_limit < obj.participant_count:
255 diff = self_limit - prev_limit
256 leavers = prev.registrations[self_limit:]
257 address = map(lambda r: r.email, leavers)
258 link = "mailto:?bcc=" + ",".join(address)
259 messages.warning(
260 request,
261 format_html(
262 "The maximum number of participants was decreased and some members moved to the waiting list. <a href='{}' style='text-decoration: underline;'>Use this link to send them an email.</a>",
263 link,
264 ),
265 )
266 super().save_model(request, obj, form, change)
267
268 def get_actions(self, request):
269 actions = super().get_actions(request)
270 if "delete_selected" in actions:
271 del actions["delete_selected"]
272 return actions
273
274 def get_formsets_with_inlines(self, request, obj=None):
275 for inline in self.get_inline_instances(request, obj):
276 if self.has_change_permission(request, obj) or obj is None:
277 yield inline.get_formset(request, obj), inline
278
279 def get_urls(self):
280 urls = super().get_urls()
281 custom_urls = [
282 path(
283 "<int:pk>/details/",
284 self.admin_site.admin_view(EventAdminDetails.as_view()),
285 name="events_event_details",
286 ),
287 path(
288 "<int:pk>/export/",
289 self.admin_site.admin_view(EventRegistrationsExport.as_view()),
290 name="events_event_export",
291 ),
292 path(
293 "<int:pk>/mark-present-qr/",
294 self.admin_site.admin_view(EventMarkPresentQR.as_view()),
295 name="events_event_mark_present_qr",
296 ),
297 ]
298 return custom_urls + urls
299
[end of website/events/admin/event.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/events/admin/event.py b/website/events/admin/event.py
--- a/website/events/admin/event.py
+++ b/website/events/admin/event.py
@@ -98,7 +98,26 @@
"cancel_deadline",
"send_cancel_email",
"optional_registrations",
+ ),
+ "classes": ("collapse",),
+ },
+ ),
+ (
+ _("Registration status messages"),
+ {
+ "fields": (
"no_registration_message",
+ "registration_msg_optional",
+ "registration_msg_optional_registered",
+ "registration_msg_registered",
+ "registration_msg_open",
+ "registration_msg_full",
+ "registration_msg_waitinglist",
+ "registration_msg_will_open",
+ "registration_msg_expired",
+ "registration_msg_cancelled",
+ "registration_msg_cancelled_late",
+ "registration_msg_cancelled_final",
),
"classes": ("collapse",),
},
|
{"golden_diff": "diff --git a/website/events/admin/event.py b/website/events/admin/event.py\n--- a/website/events/admin/event.py\n+++ b/website/events/admin/event.py\n@@ -98,7 +98,26 @@\n \"cancel_deadline\",\n \"send_cancel_email\",\n \"optional_registrations\",\n+ ),\n+ \"classes\": (\"collapse\",),\n+ },\n+ ),\n+ (\n+ _(\"Registration status messages\"),\n+ {\n+ \"fields\": (\n \"no_registration_message\",\n+ \"registration_msg_optional\",\n+ \"registration_msg_optional_registered\",\n+ \"registration_msg_registered\",\n+ \"registration_msg_open\",\n+ \"registration_msg_full\",\n+ \"registration_msg_waitinglist\",\n+ \"registration_msg_will_open\",\n+ \"registration_msg_expired\",\n+ \"registration_msg_cancelled\",\n+ \"registration_msg_cancelled_late\",\n+ \"registration_msg_cancelled_final\",\n ),\n \"classes\": (\"collapse\",),\n },\n", "issue": "No registration message no longer accepts links\n### Describe the bug\r\nSince #2458 the event registration status (specifically no_registration_message) doesn't support URLs any more on the website (app is unaltered I think). I aslo think the formatting that happens now might be a bit risky.\r\n\r\n### How to reproduce\r\nMake an event with URL in the no_registration message and view it.\r\n\r\n### Expected behaviour\r\nA nice link.\r\n\r\n### Screenshots\r\n<img width=\"627\" alt=\"afbeelding\" src=\"https://github.com/svthalia/concrexit/assets/41264528/66d61a15-0ceb-43bf-aa5a-a6536a02739d\">\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the event model.\"\"\"\n\nfrom django.contrib import admin, messages\nfrom django.template.defaultfilters import date as _date\nfrom django.urls import path, reverse\nfrom django.utils import timezone\nfrom django.utils.html import format_html\nfrom django.utils.translation import gettext_lazy as _\n\nfrom events import emails, models, services\nfrom events.admin.filters import LectureYearFilter\nfrom events.admin.forms import EventAdminForm, RegistrationInformationFieldForm\nfrom events.admin.inlines import (\n PizzaEventInline,\n PromotionRequestInline,\n RegistrationInformationFieldInline,\n)\nfrom events.admin.views import (\n EventAdminDetails,\n EventMarkPresentQR,\n EventRegistrationsExport,\n)\nfrom utils.admin import DoNextModelAdmin\n\n\[email protected](models.Event)\nclass EventAdmin(DoNextModelAdmin):\n \"\"\"Manage the events.\"\"\"\n\n form = EventAdminForm\n\n inlines = (\n RegistrationInformationFieldInline,\n PizzaEventInline,\n PromotionRequestInline,\n )\n\n list_display = (\n \"overview_link\",\n \"event_date\",\n \"registration_date\",\n \"num_participants\",\n \"get_organisers\",\n \"category\",\n \"published\",\n \"edit_link\",\n )\n list_display_links = (\"edit_link\",)\n list_filter = (LectureYearFilter, \"start\", \"published\", \"category\")\n actions = (\"make_published\", \"make_unpublished\")\n date_hierarchy = \"start\"\n search_fields = (\"title\", \"description\")\n prepopulated_fields = {\n \"map_location\": (\"location\",),\n }\n\n filter_horizontal = (\"documents\", \"organisers\")\n\n fieldsets = (\n (\n _(\"General\"),\n {\n \"fields\": (\n \"title\",\n \"slug\",\n \"published\",\n \"organisers\",\n )\n },\n ),\n (\n _(\"Detail\"),\n {\n \"fields\": (\n \"category\",\n \"start\",\n \"end\",\n \"description\",\n \"caption\",\n \"location\",\n \"map_location\",\n \"show_map_location\",\n ),\n \"classes\": (\"collapse\", \"start-open\"),\n },\n ),\n (\n _(\"Registrations\"),\n {\n \"fields\": (\n \"price\",\n \"fine\",\n 
\"tpay_allowed\",\n \"max_participants\",\n \"registration_without_membership\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"send_cancel_email\",\n \"optional_registrations\",\n \"no_registration_message\",\n ),\n \"classes\": (\"collapse\",),\n },\n ),\n (\n _(\"Extra\"),\n {\"fields\": (\"documents\", \"shift\"), \"classes\": (\"collapse\",)},\n ),\n )\n\n def get_queryset(self, request):\n queryset = (\n super()\n .get_queryset(request)\n .select_properties(\"participant_count\")\n .prefetch_related(\"organisers\")\n )\n if not (\n request.user.has_perm(\"events.override_organiser\")\n or request.user.has_perm(\"events.view_unpublished\")\n ):\n queryset_published = queryset.filter(published=True)\n queryset_unpublished = queryset.filter(\n published=False,\n organisers__in=list(\n request.member.get_member_groups().values_list(\"id\", flat=True)\n ),\n )\n queryset = queryset_published | queryset_unpublished\n return queryset\n\n def get_form(self, request, obj=None, change=False, **kwargs):\n form = super().get_form(request, obj, change, **kwargs)\n form.request = request\n return form\n\n def overview_link(self, obj):\n return format_html(\n '<a href=\"{link}\">{title}</a>',\n link=reverse(\"admin:events_event_details\", kwargs={\"pk\": obj.pk}),\n title=obj.title,\n )\n\n def has_delete_permission(self, request, obj=None):\n \"\"\"Only allow deleting an event if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj):\n return False\n return super().has_delete_permission(request, obj)\n\n def has_change_permission(self, request, obj=None):\n \"\"\"Only allow access to the change form if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj):\n return False\n return super().has_change_permission(request, obj)\n\n def event_date(self, obj):\n event_date = timezone.make_naive(obj.start)\n return _date(event_date, \"l d b Y, G:i\")\n\n event_date.short_description = _(\"Event Date\")\n event_date.admin_order_field = \"start\"\n\n def registration_date(self, obj):\n if obj.registration_start is not None:\n start_date = timezone.make_naive(obj.registration_start)\n else:\n start_date = obj.registration_start\n\n return _date(start_date, \"l d b Y, G:i\")\n\n registration_date.short_description = _(\"Registration Start\")\n registration_date.admin_order_field = \"registration_start\"\n\n def edit_link(self, obj):\n return _(\"Edit\")\n\n edit_link.short_description = \"\"\n\n def num_participants(self, obj):\n \"\"\"Pretty-print the number of participants.\"\"\"\n num = obj.participant_count # prefetched aggregateproperty\n if not obj.max_participants:\n return f\"{num}/\u221e\"\n return f\"{num}/{obj.max_participants}\"\n\n num_participants.short_description = _(\"Number of participants\")\n\n def get_organisers(self, obj):\n return \", \".join(str(o) for o in obj.organisers.all())\n\n get_organisers.short_description = _(\"Organisers\")\n\n def make_published(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(request, queryset, True)\n\n make_published.short_description = _(\"Publish selected events\")\n\n def make_unpublished(self, request, queryset):\n \"\"\"Change the status of the event to unpublished.\"\"\"\n self._change_published(request, queryset, False)\n\n make_unpublished.short_description = _(\"Unpublish selected events\")\n\n @staticmethod\n def _change_published(request, queryset, 
published):\n if not request.user.is_superuser:\n queryset = queryset.filter(\n organisers__in=request.member.get_member_groups()\n )\n queryset.update(published=published)\n\n def save_formset(self, request, form, formset, change):\n \"\"\"Save formsets with their order.\"\"\"\n formset.save()\n\n informationfield_forms = (\n x\n for x in formset.forms\n if isinstance(x, RegistrationInformationFieldForm)\n and \"DELETE\" not in x.changed_data\n )\n form.instance.set_registrationinformationfield_order(\n [\n f.instance.pk\n for f in sorted(\n informationfield_forms,\n key=lambda x: (x.cleaned_data[\"order\"], x.instance.pk),\n )\n ]\n )\n form.instance.save()\n\n def save_model(self, request, obj, form, change):\n if change and \"max_participants\" in form.changed_data:\n prev = self.model.objects.get(id=obj.id)\n prev_limit = prev.max_participants\n self_limit = obj.max_participants\n if prev_limit is None:\n prev_limit = prev.participant_count\n if self_limit is None:\n self_limit = obj.participant_count\n\n if prev_limit < self_limit and prev_limit < obj.participant_count:\n diff = self_limit - prev_limit\n joiners = prev.queue[:diff]\n for registration in joiners:\n emails.notify_waiting(obj, registration)\n messages.info(\n request,\n \"The maximum number of participants was increased. Any members that moved from the waiting list to the participants list have been notified.\",\n )\n elif self_limit < prev_limit and self_limit < obj.participant_count:\n diff = self_limit - prev_limit\n leavers = prev.registrations[self_limit:]\n address = map(lambda r: r.email, leavers)\n link = \"mailto:?bcc=\" + \",\".join(address)\n messages.warning(\n request,\n format_html(\n \"The maximum number of participants was decreased and some members moved to the waiting list. <a href='{}' style='text-decoration: underline;'>Use this link to send them an email.</a>\",\n link,\n ),\n )\n super().save_model(request, obj, form, change)\n\n def get_actions(self, request):\n actions = super().get_actions(request)\n if \"delete_selected\" in actions:\n del actions[\"delete_selected\"]\n return actions\n\n def get_formsets_with_inlines(self, request, obj=None):\n for inline in self.get_inline_instances(request, obj):\n if self.has_change_permission(request, obj) or obj is None:\n yield inline.get_formset(request, obj), inline\n\n def get_urls(self):\n urls = super().get_urls()\n custom_urls = [\n path(\n \"<int:pk>/details/\",\n self.admin_site.admin_view(EventAdminDetails.as_view()),\n name=\"events_event_details\",\n ),\n path(\n \"<int:pk>/export/\",\n self.admin_site.admin_view(EventRegistrationsExport.as_view()),\n name=\"events_event_export\",\n ),\n path(\n \"<int:pk>/mark-present-qr/\",\n self.admin_site.admin_view(EventMarkPresentQR.as_view()),\n name=\"events_event_mark_present_qr\",\n ),\n ]\n return custom_urls + urls\n", "path": "website/events/admin/event.py"}]}
| 3,480 | 210 |
gh_patches_debug_2478
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1767
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delete tpay payment if order is modified
### Summary
Right now it is possible to order a pizza, pay it with tpay, change the order to a pizza with a different price, and the payment will not match the order anymore.
### How to test
1. Order a pizza
2. Pay with tpay
3. Change the order
4. The payment should be deleted
5. If the event is over, or the payment is batched, then changing the order should crash
</issue>
<code>
[start of website/pizzas/views.py]
1 """Views provided by the pizzas package."""
2 from django.contrib import messages
3 from django.contrib.auth.decorators import login_required
4 from django.http import Http404
5 from django.shortcuts import get_object_or_404, render, redirect
6 from django.utils.translation import gettext_lazy as _
7 from django.views.decorators.http import require_http_methods
8
9 from payments.services import delete_payment
10 from .models import FoodOrder, FoodEvent, Product
11
12
13 @login_required
14 def index(request):
15 """Overview of user order for a pizza event."""
16 products = Product.available_products.order_by("name")
17 if not request.user.has_perm("pizzas.order_restricted_products"):
18 products = products.exclude(restricted=True)
19 event = FoodEvent.current()
20 try:
21 obj = FoodOrder.objects.get(food_event=event, member=request.member)
22 except FoodOrder.DoesNotExist:
23 obj = None
24 context = {"event": event, "products": products, "order": obj}
25 return render(request, "pizzas/index.html", context)
26
27
28 @require_http_methods(["POST"])
29 def cancel_order(request):
30 """View that cancels a user's order."""
31 if "order" in request.POST:
32 try:
33 order = get_object_or_404(FoodOrder, pk=int(request.POST["order"]))
34 if not order.can_be_changed:
35 messages.error(request, _("You can no longer cancel."))
36 elif order.member == request.member:
37 order.delete()
38 messages.success(request, _("Your order has been cancelled."))
39 except Http404:
40 messages.error(request, _("Your order could not be found."))
41 return redirect("pizzas:index")
42
43
44 @login_required
45 def place_order(request):
46 """View that shows the detail of the current order."""
47 event = FoodEvent.current()
48 if not event:
49 return redirect("pizzas:index")
50
51 try:
52 obj = FoodOrder.objects.get(food_event=event, member=request.member)
53 current_order_locked = not obj.can_be_changed
54 except FoodOrder.DoesNotExist:
55 obj = None
56 current_order_locked = False
57
58 if "product" in request.POST and not current_order_locked:
59 productset = Product.available_products.all()
60 if not request.user.has_perm("pizzas.order_restricted_products"):
61 productset = productset.exclude(restricted=True)
62 try:
63 product = productset.get(pk=int(request.POST["product"]))
64 except Product.DoesNotExist as e:
65 raise Http404("Pizza does not exist") from e
66 if not obj:
67 obj = FoodOrder(food_event=event, member=request.member)
68 obj.product = product
69 if obj.payment:
70 delete_payment(obj.payment)
71 obj.save()
72 return redirect("pizzas:index")
73
[end of website/pizzas/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/pizzas/views.py b/website/pizzas/views.py
--- a/website/pizzas/views.py
+++ b/website/pizzas/views.py
@@ -67,6 +67,6 @@
obj = FoodOrder(food_event=event, member=request.member)
obj.product = product
if obj.payment:
- delete_payment(obj.payment)
+ delete_payment(obj)
obj.save()
return redirect("pizzas:index")
|
{"golden_diff": "diff --git a/website/pizzas/views.py b/website/pizzas/views.py\n--- a/website/pizzas/views.py\n+++ b/website/pizzas/views.py\n@@ -67,6 +67,6 @@\n obj = FoodOrder(food_event=event, member=request.member)\n obj.product = product\n if obj.payment:\n- delete_payment(obj.payment)\n+ delete_payment(obj)\n obj.save()\n return redirect(\"pizzas:index\")\n", "issue": "Delete tpay payment if order is modified\n### Summary\r\nRight now it is possible to order a pizza, pay it with tpay, change the order to a pizza with a different price, and the payment will not match the order anymore.\r\n\r\n### How to test\r\n1. Order a pizza\r\n2. Pay with tpay\r\n3. Change the order\r\n4. The payment should be deleted\r\n5. If the event is over, or the payment is batched, then changing the order should crash\n", "before_files": [{"content": "\"\"\"Views provided by the pizzas package.\"\"\"\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, render, redirect\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.http import require_http_methods\n\nfrom payments.services import delete_payment\nfrom .models import FoodOrder, FoodEvent, Product\n\n\n@login_required\ndef index(request):\n \"\"\"Overview of user order for a pizza event.\"\"\"\n products = Product.available_products.order_by(\"name\")\n if not request.user.has_perm(\"pizzas.order_restricted_products\"):\n products = products.exclude(restricted=True)\n event = FoodEvent.current()\n try:\n obj = FoodOrder.objects.get(food_event=event, member=request.member)\n except FoodOrder.DoesNotExist:\n obj = None\n context = {\"event\": event, \"products\": products, \"order\": obj}\n return render(request, \"pizzas/index.html\", context)\n\n\n@require_http_methods([\"POST\"])\ndef cancel_order(request):\n \"\"\"View that cancels a user's order.\"\"\"\n if \"order\" in request.POST:\n try:\n order = get_object_or_404(FoodOrder, pk=int(request.POST[\"order\"]))\n if not order.can_be_changed:\n messages.error(request, _(\"You can no longer cancel.\"))\n elif order.member == request.member:\n order.delete()\n messages.success(request, _(\"Your order has been cancelled.\"))\n except Http404:\n messages.error(request, _(\"Your order could not be found.\"))\n return redirect(\"pizzas:index\")\n\n\n@login_required\ndef place_order(request):\n \"\"\"View that shows the detail of the current order.\"\"\"\n event = FoodEvent.current()\n if not event:\n return redirect(\"pizzas:index\")\n\n try:\n obj = FoodOrder.objects.get(food_event=event, member=request.member)\n current_order_locked = not obj.can_be_changed\n except FoodOrder.DoesNotExist:\n obj = None\n current_order_locked = False\n\n if \"product\" in request.POST and not current_order_locked:\n productset = Product.available_products.all()\n if not request.user.has_perm(\"pizzas.order_restricted_products\"):\n productset = productset.exclude(restricted=True)\n try:\n product = productset.get(pk=int(request.POST[\"product\"]))\n except Product.DoesNotExist as e:\n raise Http404(\"Pizza does not exist\") from e\n if not obj:\n obj = FoodOrder(food_event=event, member=request.member)\n obj.product = product\n if obj.payment:\n delete_payment(obj.payment)\n obj.save()\n return redirect(\"pizzas:index\")\n", "path": "website/pizzas/views.py"}]}
| 1,344 | 97 |
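The record above fixes the stale-payment bug by passing the order itself to `payments.services.delete_payment`. The following is a minimal, self-contained sketch of the behaviour the issue asks for — dropping the payment when a paid order is changed and refusing the change once the payment is locked. The class and attribute names here are illustrative assumptions, not concrexit's actual models.

```python
# Illustrative sketch only; PaymentLocked, Payment and Order are made-up stand-ins
# for the real concrexit models, which route this logic through delete_payment(obj).
class PaymentLocked(Exception):
    """Raised when a paid order can no longer be changed."""


class Payment:
    def __init__(self, amount, batched=False):
        self.amount = amount
        self.batched = batched      # batched payments must never be deleted


class Order:
    def __init__(self, price, payment=None, event_over=False):
        self.price = price
        self.payment = payment
        self.event_over = event_over

    def change_product(self, new_price):
        if self.payment is not None:
            if self.payment.batched or self.event_over:
                raise PaymentLocked("cannot change a batched or closed order")
            self.payment = None     # the old payment no longer matches the order
        self.price = new_price


order = Order(price=12.50, payment=Payment(12.50))
order.change_product(14.00)
assert order.payment is None        # the stale payment was dropped
```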
gh_patches_debug_9752
|
rasdani/github-patches
|
git_diff
|
Textualize__textual-4452
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ListView` initial index is off by one
Setting the `initial_index` of a `ListView` seems to be off by one, introduced somewhere in v0.48.2:
```python
from textual.app import App, ComposeResult
from textual.widgets import Label, ListItem, ListView
class ExampleApp(App):
def compose(self) -> ComposeResult:
list_items = [ListItem(Label(str(i))) for i in range(10)]
yield ListView(
*list_items,
initial_index=9,
)
if __name__ == "__main__":
app = ExampleApp()
app.run()
```

</issue>
<code>
[start of src/textual/widgets/_list_view.py]
1 from __future__ import annotations
2
3 from typing import ClassVar, Iterable, Optional
4
5 from typing_extensions import TypeGuard
6
7 from .. import _widget_navigation
8 from ..await_remove import AwaitRemove
9 from ..binding import Binding, BindingType
10 from ..containers import VerticalScroll
11 from ..events import Mount
12 from ..message import Message
13 from ..reactive import reactive
14 from ..widget import AwaitMount
15 from ..widgets._list_item import ListItem
16
17
18 class ListView(VerticalScroll, can_focus=True, can_focus_children=False):
19 """A vertical list view widget.
20
21 Displays a vertical list of `ListItem`s which can be highlighted and
22 selected using the mouse or keyboard.
23
24 Attributes:
25 index: The index in the list that's currently highlighted.
26 """
27
28 BINDINGS: ClassVar[list[BindingType]] = [
29 Binding("enter", "select_cursor", "Select", show=False),
30 Binding("up", "cursor_up", "Cursor Up", show=False),
31 Binding("down", "cursor_down", "Cursor Down", show=False),
32 ]
33 """
34 | Key(s) | Description |
35 | :- | :- |
36 | enter | Select the current item. |
37 | up | Move the cursor up. |
38 | down | Move the cursor down. |
39 """
40
41 index = reactive[Optional[int]](0, always_update=True, init=False)
42 """The index of the currently highlighted item."""
43
44 class Highlighted(Message):
45 """Posted when the highlighted item changes.
46
47 Highlighted item is controlled using up/down keys.
48 Can be handled using `on_list_view_highlighted` in a subclass of `ListView`
49 or in a parent widget in the DOM.
50 """
51
52 ALLOW_SELECTOR_MATCH = {"item"}
53 """Additional message attributes that can be used with the [`on` decorator][textual.on]."""
54
55 def __init__(self, list_view: ListView, item: ListItem | None) -> None:
56 super().__init__()
57 self.list_view: ListView = list_view
58 """The view that contains the item highlighted."""
59 self.item: ListItem | None = item
60 """The highlighted item, if there is one highlighted."""
61
62 @property
63 def control(self) -> ListView:
64 """The view that contains the item highlighted.
65
66 This is an alias for [`Highlighted.list_view`][textual.widgets.ListView.Highlighted.list_view]
67 and is used by the [`on`][textual.on] decorator.
68 """
69 return self.list_view
70
71 class Selected(Message):
72 """Posted when a list item is selected, e.g. when you press the enter key on it.
73
74 Can be handled using `on_list_view_selected` in a subclass of `ListView` or in
75 a parent widget in the DOM.
76 """
77
78 ALLOW_SELECTOR_MATCH = {"item"}
79 """Additional message attributes that can be used with the [`on` decorator][textual.on]."""
80
81 def __init__(self, list_view: ListView, item: ListItem) -> None:
82 super().__init__()
83 self.list_view: ListView = list_view
84 """The view that contains the item selected."""
85 self.item: ListItem = item
86 """The selected item."""
87
88 @property
89 def control(self) -> ListView:
90 """The view that contains the item selected.
91
92 This is an alias for [`Selected.list_view`][textual.widgets.ListView.Selected.list_view]
93 and is used by the [`on`][textual.on] decorator.
94 """
95 return self.list_view
96
97 def __init__(
98 self,
99 *children: ListItem,
100 initial_index: int | None = 0,
101 name: str | None = None,
102 id: str | None = None,
103 classes: str | None = None,
104 disabled: bool = False,
105 ) -> None:
106 """
107 Initialize a ListView.
108
109 Args:
110 *children: The ListItems to display in the list.
111 initial_index: The index that should be highlighted when the list is first mounted.
112 name: The name of the widget.
113 id: The unique ID of the widget used in CSS/query selection.
114 classes: The CSS classes of the widget.
115 disabled: Whether the ListView is disabled or not.
116 """
117 super().__init__(
118 *children, name=name, id=id, classes=classes, disabled=disabled
119 )
120 # Set the index to the given initial index, or the first available index after.
121 self._index = _widget_navigation.find_next_enabled(
122 self._nodes,
123 anchor=initial_index - 1 if initial_index is not None else None,
124 direction=1,
125 )
126
127 def _on_mount(self, _: Mount) -> None:
128 """Ensure the ListView is fully-settled after mounting."""
129 self.index = self._index
130
131 @property
132 def highlighted_child(self) -> ListItem | None:
133 """The currently highlighted ListItem, or None if nothing is highlighted."""
134 if self.index is not None and 0 <= self.index < len(self._nodes):
135 list_item = self._nodes[self.index]
136 assert isinstance(list_item, ListItem)
137 return list_item
138 else:
139 return None
140
141 def validate_index(self, index: int | None) -> int | None:
142 """Clamp the index to the valid range, or set to None if there's nothing to highlight.
143
144 Args:
145 index: The index to clamp.
146
147 Returns:
148 The clamped index.
149 """
150 if index is None or not self._nodes:
151 return None
152 elif index < 0:
153 return 0
154 elif index >= len(self._nodes):
155 return len(self._nodes) - 1
156
157 return index
158
159 def _is_valid_index(self, index: int | None) -> TypeGuard[int]:
160 """Determine whether the current index is valid into the list of children."""
161 if index is None:
162 return False
163 return 0 <= index < len(self._nodes)
164
165 def watch_index(self, old_index: int | None, new_index: int | None) -> None:
166 """Updates the highlighting when the index changes."""
167 if self._is_valid_index(old_index):
168 old_child = self._nodes[old_index]
169 assert isinstance(old_child, ListItem)
170 old_child.highlighted = False
171
172 if self._is_valid_index(new_index) and not self._nodes[new_index].disabled:
173 new_child = self._nodes[new_index]
174 assert isinstance(new_child, ListItem)
175 new_child.highlighted = True
176 self._scroll_highlighted_region()
177 self.post_message(self.Highlighted(self, new_child))
178 else:
179 self.post_message(self.Highlighted(self, None))
180
181 def extend(self, items: Iterable[ListItem]) -> AwaitMount:
182 """Append multiple new ListItems to the end of the ListView.
183
184 Args:
185 items: The ListItems to append.
186
187 Returns:
188 An awaitable that yields control to the event loop
189 until the DOM has been updated with the new child items.
190 """
191 await_mount = self.mount(*items)
192 if len(self) == 1:
193 self.index = 0
194 return await_mount
195
196 def append(self, item: ListItem) -> AwaitMount:
197 """Append a new ListItem to the end of the ListView.
198
199 Args:
200 item: The ListItem to append.
201
202 Returns:
203 An awaitable that yields control to the event loop
204 until the DOM has been updated with the new child item.
205 """
206 return self.extend([item])
207
208 def clear(self) -> AwaitRemove:
209 """Clear all items from the ListView.
210
211 Returns:
212 An awaitable that yields control to the event loop until
213 the DOM has been updated to reflect all children being removed.
214 """
215 await_remove = self.query("ListView > ListItem").remove()
216 self.index = None
217 return await_remove
218
219 def insert(self, index: int, items: Iterable[ListItem]) -> AwaitMount:
220 """Insert new ListItem(s) to specified index.
221
222 Args:
223 index: index to insert new ListItem.
224 items: The ListItems to insert.
225
226 Returns:
227 An awaitable that yields control to the event loop
228 until the DOM has been updated with the new child item.
229 """
230 await_mount = self.mount(*items, before=index)
231 return await_mount
232
233 def pop(self, index: Optional[int] = None) -> AwaitRemove:
234 """Remove last ListItem from ListView or
235 Remove ListItem from ListView by index
236
237 Args:
238 index: index of ListItem to remove from ListView
239
240 Returns:
241 An awaitable that yields control to the event loop until
242 the DOM has been updated to reflect item being removed.
243 """
244 if index is None:
245 await_remove = self.query("ListItem").last().remove()
246 else:
247 await_remove = self.query("ListItem")[index].remove()
248 return await_remove
249
250 def remove_items(self, indices: Iterable[int]) -> AwaitRemove:
251 """Remove ListItems from ListView by indices
252
253 Args:
254 indices: index(s) of ListItems to remove from ListView
255
256 Returns:
257 An awaitable object that waits for the direct children to be removed.
258 """
259 items = self.query("ListItem")
260 items_to_remove = []
261 for index in indices:
262 items_to_remove.append(items[index])
263
264 await_remove = self.app._remove_nodes(items_to_remove, self)
265 return await_remove
266
267 def action_select_cursor(self) -> None:
268 """Select the current item in the list."""
269 selected_child = self.highlighted_child
270 if selected_child is None:
271 return
272 self.post_message(self.Selected(self, selected_child))
273
274 def action_cursor_down(self) -> None:
275 """Highlight the next item in the list."""
276 candidate = _widget_navigation.find_next_enabled(
277 self._nodes,
278 anchor=self.index,
279 direction=1,
280 )
281 if self.index is not None and candidate is not None and candidate < self.index:
282 return # Avoid wrapping around.
283
284 self.index = candidate
285
286 def action_cursor_up(self) -> None:
287 """Highlight the previous item in the list."""
288 candidate = _widget_navigation.find_next_enabled(
289 self._nodes,
290 anchor=self.index,
291 direction=-1,
292 )
293 if self.index is not None and candidate is not None and candidate > self.index:
294 return # Avoid wrapping around.
295
296 self.index = candidate
297
298 def _on_list_item__child_clicked(self, event: ListItem._ChildClicked) -> None:
299 event.stop()
300 self.focus()
301 self.index = self._nodes.index(event.item)
302 self.post_message(self.Selected(self, event.item))
303
304 def _scroll_highlighted_region(self) -> None:
305 """Used to keep the highlighted index within vision"""
306 if self.highlighted_child is not None:
307 self.call_after_refresh(
308 self.scroll_to_widget, self.highlighted_child, animate=False
309 )
310
311 def __len__(self) -> int:
312 """Compute the length (in number of items) of the list view."""
313 return len(self._nodes)
314
[end of src/textual/widgets/_list_view.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/textual/widgets/_list_view.py b/src/textual/widgets/_list_view.py
--- a/src/textual/widgets/_list_view.py
+++ b/src/textual/widgets/_list_view.py
@@ -119,9 +119,10 @@
)
# Set the index to the given initial index, or the first available index after.
self._index = _widget_navigation.find_next_enabled(
- self._nodes,
- anchor=initial_index - 1 if initial_index is not None else None,
+ children,
+ anchor=initial_index if initial_index is not None else None,
direction=1,
+ with_anchor=True,
)
def _on_mount(self, _: Mount) -> None:
|
{"golden_diff": "diff --git a/src/textual/widgets/_list_view.py b/src/textual/widgets/_list_view.py\n--- a/src/textual/widgets/_list_view.py\n+++ b/src/textual/widgets/_list_view.py\n@@ -119,9 +119,10 @@\n )\n # Set the index to the given initial index, or the first available index after.\n self._index = _widget_navigation.find_next_enabled(\n- self._nodes,\n- anchor=initial_index - 1 if initial_index is not None else None,\n+ children,\n+ anchor=initial_index if initial_index is not None else None,\n direction=1,\n+ with_anchor=True,\n )\n \n def _on_mount(self, _: Mount) -> None:\n", "issue": "`ListView` initial index is off by one\nSetting the `initial_index` of a `ListView` seems to be off by one, introduced somewhere in v0.48.2:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.widgets import Label, ListItem, ListView\r\n\r\n\r\nclass ExampleApp(App):\r\n def compose(self) -> ComposeResult:\r\n list_items = [ListItem(Label(str(i))) for i in range(10)]\r\n yield ListView(\r\n *list_items,\r\n initial_index=9,\r\n )\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app = ExampleApp()\r\n app.run()\r\n```\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import ClassVar, Iterable, Optional\n\nfrom typing_extensions import TypeGuard\n\nfrom .. import _widget_navigation\nfrom ..await_remove import AwaitRemove\nfrom ..binding import Binding, BindingType\nfrom ..containers import VerticalScroll\nfrom ..events import Mount\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import AwaitMount\nfrom ..widgets._list_item import ListItem\n\n\nclass ListView(VerticalScroll, can_focus=True, can_focus_children=False):\n \"\"\"A vertical list view widget.\n\n Displays a vertical list of `ListItem`s which can be highlighted and\n selected using the mouse or keyboard.\n\n Attributes:\n index: The index in the list that's currently highlighted.\n \"\"\"\n\n BINDINGS: ClassVar[list[BindingType]] = [\n Binding(\"enter\", \"select_cursor\", \"Select\", show=False),\n Binding(\"up\", \"cursor_up\", \"Cursor Up\", show=False),\n Binding(\"down\", \"cursor_down\", \"Cursor Down\", show=False),\n ]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter | Select the current item. |\n | up | Move the cursor up. |\n | down | Move the cursor down. 
|\n \"\"\"\n\n index = reactive[Optional[int]](0, always_update=True, init=False)\n \"\"\"The index of the currently highlighted item.\"\"\"\n\n class Highlighted(Message):\n \"\"\"Posted when the highlighted item changes.\n\n Highlighted item is controlled using up/down keys.\n Can be handled using `on_list_view_highlighted` in a subclass of `ListView`\n or in a parent widget in the DOM.\n \"\"\"\n\n ALLOW_SELECTOR_MATCH = {\"item\"}\n \"\"\"Additional message attributes that can be used with the [`on` decorator][textual.on].\"\"\"\n\n def __init__(self, list_view: ListView, item: ListItem | None) -> None:\n super().__init__()\n self.list_view: ListView = list_view\n \"\"\"The view that contains the item highlighted.\"\"\"\n self.item: ListItem | None = item\n \"\"\"The highlighted item, if there is one highlighted.\"\"\"\n\n @property\n def control(self) -> ListView:\n \"\"\"The view that contains the item highlighted.\n\n This is an alias for [`Highlighted.list_view`][textual.widgets.ListView.Highlighted.list_view]\n and is used by the [`on`][textual.on] decorator.\n \"\"\"\n return self.list_view\n\n class Selected(Message):\n \"\"\"Posted when a list item is selected, e.g. when you press the enter key on it.\n\n Can be handled using `on_list_view_selected` in a subclass of `ListView` or in\n a parent widget in the DOM.\n \"\"\"\n\n ALLOW_SELECTOR_MATCH = {\"item\"}\n \"\"\"Additional message attributes that can be used with the [`on` decorator][textual.on].\"\"\"\n\n def __init__(self, list_view: ListView, item: ListItem) -> None:\n super().__init__()\n self.list_view: ListView = list_view\n \"\"\"The view that contains the item selected.\"\"\"\n self.item: ListItem = item\n \"\"\"The selected item.\"\"\"\n\n @property\n def control(self) -> ListView:\n \"\"\"The view that contains the item selected.\n\n This is an alias for [`Selected.list_view`][textual.widgets.ListView.Selected.list_view]\n and is used by the [`on`][textual.on] decorator.\n \"\"\"\n return self.list_view\n\n def __init__(\n self,\n *children: ListItem,\n initial_index: int | None = 0,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ) -> None:\n \"\"\"\n Initialize a ListView.\n\n Args:\n *children: The ListItems to display in the list.\n initial_index: The index that should be highlighted when the list is first mounted.\n name: The name of the widget.\n id: The unique ID of the widget used in CSS/query selection.\n classes: The CSS classes of the widget.\n disabled: Whether the ListView is disabled or not.\n \"\"\"\n super().__init__(\n *children, name=name, id=id, classes=classes, disabled=disabled\n )\n # Set the index to the given initial index, or the first available index after.\n self._index = _widget_navigation.find_next_enabled(\n self._nodes,\n anchor=initial_index - 1 if initial_index is not None else None,\n direction=1,\n )\n\n def _on_mount(self, _: Mount) -> None:\n \"\"\"Ensure the ListView is fully-settled after mounting.\"\"\"\n self.index = self._index\n\n @property\n def highlighted_child(self) -> ListItem | None:\n \"\"\"The currently highlighted ListItem, or None if nothing is highlighted.\"\"\"\n if self.index is not None and 0 <= self.index < len(self._nodes):\n list_item = self._nodes[self.index]\n assert isinstance(list_item, ListItem)\n return list_item\n else:\n return None\n\n def validate_index(self, index: int | None) -> int | None:\n \"\"\"Clamp the index to the valid range, or set to None if there's nothing to 
highlight.\n\n Args:\n index: The index to clamp.\n\n Returns:\n The clamped index.\n \"\"\"\n if index is None or not self._nodes:\n return None\n elif index < 0:\n return 0\n elif index >= len(self._nodes):\n return len(self._nodes) - 1\n\n return index\n\n def _is_valid_index(self, index: int | None) -> TypeGuard[int]:\n \"\"\"Determine whether the current index is valid into the list of children.\"\"\"\n if index is None:\n return False\n return 0 <= index < len(self._nodes)\n\n def watch_index(self, old_index: int | None, new_index: int | None) -> None:\n \"\"\"Updates the highlighting when the index changes.\"\"\"\n if self._is_valid_index(old_index):\n old_child = self._nodes[old_index]\n assert isinstance(old_child, ListItem)\n old_child.highlighted = False\n\n if self._is_valid_index(new_index) and not self._nodes[new_index].disabled:\n new_child = self._nodes[new_index]\n assert isinstance(new_child, ListItem)\n new_child.highlighted = True\n self._scroll_highlighted_region()\n self.post_message(self.Highlighted(self, new_child))\n else:\n self.post_message(self.Highlighted(self, None))\n\n def extend(self, items: Iterable[ListItem]) -> AwaitMount:\n \"\"\"Append multiple new ListItems to the end of the ListView.\n\n Args:\n items: The ListItems to append.\n\n Returns:\n An awaitable that yields control to the event loop\n until the DOM has been updated with the new child items.\n \"\"\"\n await_mount = self.mount(*items)\n if len(self) == 1:\n self.index = 0\n return await_mount\n\n def append(self, item: ListItem) -> AwaitMount:\n \"\"\"Append a new ListItem to the end of the ListView.\n\n Args:\n item: The ListItem to append.\n\n Returns:\n An awaitable that yields control to the event loop\n until the DOM has been updated with the new child item.\n \"\"\"\n return self.extend([item])\n\n def clear(self) -> AwaitRemove:\n \"\"\"Clear all items from the ListView.\n\n Returns:\n An awaitable that yields control to the event loop until\n the DOM has been updated to reflect all children being removed.\n \"\"\"\n await_remove = self.query(\"ListView > ListItem\").remove()\n self.index = None\n return await_remove\n\n def insert(self, index: int, items: Iterable[ListItem]) -> AwaitMount:\n \"\"\"Insert new ListItem(s) to specified index.\n\n Args:\n index: index to insert new ListItem.\n items: The ListItems to insert.\n\n Returns:\n An awaitable that yields control to the event loop\n until the DOM has been updated with the new child item.\n \"\"\"\n await_mount = self.mount(*items, before=index)\n return await_mount\n\n def pop(self, index: Optional[int] = None) -> AwaitRemove:\n \"\"\"Remove last ListItem from ListView or\n Remove ListItem from ListView by index\n\n Args:\n index: index of ListItem to remove from ListView\n\n Returns:\n An awaitable that yields control to the event loop until\n the DOM has been updated to reflect item being removed.\n \"\"\"\n if index is None:\n await_remove = self.query(\"ListItem\").last().remove()\n else:\n await_remove = self.query(\"ListItem\")[index].remove()\n return await_remove\n\n def remove_items(self, indices: Iterable[int]) -> AwaitRemove:\n \"\"\"Remove ListItems from ListView by indices\n\n Args:\n indices: index(s) of ListItems to remove from ListView\n\n Returns:\n An awaitable object that waits for the direct children to be removed.\n \"\"\"\n items = self.query(\"ListItem\")\n items_to_remove = []\n for index in indices:\n items_to_remove.append(items[index])\n\n await_remove = self.app._remove_nodes(items_to_remove, self)\n 
return await_remove\n\n def action_select_cursor(self) -> None:\n \"\"\"Select the current item in the list.\"\"\"\n selected_child = self.highlighted_child\n if selected_child is None:\n return\n self.post_message(self.Selected(self, selected_child))\n\n def action_cursor_down(self) -> None:\n \"\"\"Highlight the next item in the list.\"\"\"\n candidate = _widget_navigation.find_next_enabled(\n self._nodes,\n anchor=self.index,\n direction=1,\n )\n if self.index is not None and candidate is not None and candidate < self.index:\n return # Avoid wrapping around.\n\n self.index = candidate\n\n def action_cursor_up(self) -> None:\n \"\"\"Highlight the previous item in the list.\"\"\"\n candidate = _widget_navigation.find_next_enabled(\n self._nodes,\n anchor=self.index,\n direction=-1,\n )\n if self.index is not None and candidate is not None and candidate > self.index:\n return # Avoid wrapping around.\n\n self.index = candidate\n\n def _on_list_item__child_clicked(self, event: ListItem._ChildClicked) -> None:\n event.stop()\n self.focus()\n self.index = self._nodes.index(event.item)\n self.post_message(self.Selected(self, event.item))\n\n def _scroll_highlighted_region(self) -> None:\n \"\"\"Used to keep the highlighted index within vision\"\"\"\n if self.highlighted_child is not None:\n self.call_after_refresh(\n self.scroll_to_widget, self.highlighted_child, animate=False\n )\n\n def __len__(self) -> int:\n \"\"\"Compute the length (in number of items) of the list view.\"\"\"\n return len(self._nodes)\n", "path": "src/textual/widgets/_list_view.py"}]}
| 3,934 | 161 |
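The golden diff above passes the constructor's `children` and `with_anchor=True` so the anchor index itself is considered first, instead of compensating with `initial_index - 1`. As a rough illustration of that semantic difference, here is a stand-in helper (not Textual's actual `_widget_navigation.find_next_enabled`) exercised with a toy node list:

```python
# Stand-in for the navigation helper, for illustration only: with_anchor=True means
# the anchor index itself is the first candidate, so initial_index no longer needs
# the error-prone "- 1" adjustment shown in the original constructor.
def find_next_enabled(nodes, anchor, direction, with_anchor=False):
    if not nodes:
        return None
    if anchor is None:
        start = 0 if direction == 1 else len(nodes) - 1
    elif with_anchor:
        start = anchor
    else:
        start = anchor + direction
    index = start % len(nodes)
    for _ in range(len(nodes)):
        if not nodes[index]["disabled"]:
            return index
        index = (index + direction) % len(nodes)
    return None


items = [{"disabled": False} for _ in range(10)]
assert find_next_enabled(items, anchor=9, direction=1, with_anchor=True) == 9
assert find_next_enabled(items, anchor=9, direction=1) == 0  # this stand-in wraps past the anchor
```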
gh_patches_debug_12655
|
rasdani/github-patches
|
git_diff
|
deis__deis-3535
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with `deis certs:remove`
Getting the following error when trying to remove a cert.
```
$ deis certs:remove '*.brandfolder.com'
Removing *.brandfolder.com... 405 METHOD NOT ALLOWED
Detail:
Method 'DELETE' not allowed.
```
</issue>
<code>
[start of controller/api/urls.py]
1 from __future__ import unicode_literals
2
3 from django.conf import settings
4 from django.conf.urls import include, patterns, url
5
6 from api import routers, views
7
8
9 router = routers.ApiRouter()
10
11 # Add the generated REST URLs and login/logout endpoint
12 urlpatterns = patterns(
13 '',
14 url(r'^', include(router.urls)),
15 # application release components
16 url(r'^apps/(?P<id>{})/config/?'.format(settings.APP_URL_REGEX),
17 views.ConfigViewSet.as_view({'get': 'retrieve', 'post': 'create'})),
18 url(r'^apps/(?P<id>{})/builds/(?P<uuid>[-_\w]+)/?'.format(settings.APP_URL_REGEX),
19 views.BuildViewSet.as_view({'get': 'retrieve'})),
20 url(r'^apps/(?P<id>{})/builds/?'.format(settings.APP_URL_REGEX),
21 views.BuildViewSet.as_view({'get': 'list', 'post': 'create'})),
22 url(r'^apps/(?P<id>{})/releases/v(?P<version>[0-9]+)/?'.format(settings.APP_URL_REGEX),
23 views.ReleaseViewSet.as_view({'get': 'retrieve'})),
24 url(r'^apps/(?P<id>{})/releases/rollback/?'.format(settings.APP_URL_REGEX),
25 views.ReleaseViewSet.as_view({'post': 'rollback'})),
26 url(r'^apps/(?P<id>{})/releases/?'.format(settings.APP_URL_REGEX),
27 views.ReleaseViewSet.as_view({'get': 'list'})),
28 # application infrastructure
29 url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\w]+)/(?P<num>[-_\w]+)/?'.format(
30 settings.APP_URL_REGEX),
31 views.ContainerViewSet.as_view({'get': 'retrieve'})),
32 url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\w.]+)/?'.format(settings.APP_URL_REGEX),
33 views.ContainerViewSet.as_view({'get': 'list'})),
34 url(r'^apps/(?P<id>{})/containers/?'.format(settings.APP_URL_REGEX),
35 views.ContainerViewSet.as_view({'get': 'list'})),
36 # application domains
37 url(r'^apps/(?P<id>{})/domains/(?P<domain>[-\._\w]+)/?'.format(settings.APP_URL_REGEX),
38 views.DomainViewSet.as_view({'delete': 'destroy'})),
39 url(r'^apps/(?P<id>{})/domains/?'.format(settings.APP_URL_REGEX),
40 views.DomainViewSet.as_view({'post': 'create', 'get': 'list'})),
41 # application actions
42 url(r'^apps/(?P<id>{})/scale/?'.format(settings.APP_URL_REGEX),
43 views.AppViewSet.as_view({'post': 'scale'})),
44 url(r'^apps/(?P<id>{})/logs/?'.format(settings.APP_URL_REGEX),
45 views.AppViewSet.as_view({'get': 'logs'})),
46 url(r'^apps/(?P<id>{})/run/?'.format(settings.APP_URL_REGEX),
47 views.AppViewSet.as_view({'post': 'run'})),
48 # apps sharing
49 url(r'^apps/(?P<id>{})/perms/(?P<username>[-_\w]+)/?'.format(settings.APP_URL_REGEX),
50 views.AppPermsViewSet.as_view({'delete': 'destroy'})),
51 url(r'^apps/(?P<id>{})/perms/?'.format(settings.APP_URL_REGEX),
52 views.AppPermsViewSet.as_view({'get': 'list', 'post': 'create'})),
53 # apps base endpoint
54 url(r'^apps/(?P<id>{})/?'.format(settings.APP_URL_REGEX),
55 views.AppViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),
56 url(r'^apps/?',
57 views.AppViewSet.as_view({'get': 'list', 'post': 'create'})),
58 # key
59 url(r'^keys/(?P<id>.+)/?',
60 views.KeyViewSet.as_view({
61 'get': 'retrieve', 'delete': 'destroy'})),
62 url(r'^keys/?',
63 views.KeyViewSet.as_view({'get': 'list', 'post': 'create'})),
64 # hooks
65 url(r'^hooks/push/?',
66 views.PushHookViewSet.as_view({'post': 'create'})),
67 url(r'^hooks/build/?',
68 views.BuildHookViewSet.as_view({'post': 'create'})),
69 url(r'^hooks/config/?',
70 views.ConfigHookViewSet.as_view({'post': 'create'})),
71 # authn / authz
72 url(r'^auth/register/?',
73 views.UserRegistrationViewSet.as_view({'post': 'create'})),
74 url(r'^auth/cancel/?',
75 views.UserManagementViewSet.as_view({'delete': 'destroy'})),
76 url(r'^auth/passwd/?',
77 views.UserManagementViewSet.as_view({'post': 'passwd'})),
78 url(r'^auth/login/',
79 'rest_framework.authtoken.views.obtain_auth_token'),
80 # admin sharing
81 url(r'^admin/perms/(?P<username>[-_\w]+)/?',
82 views.AdminPermsViewSet.as_view({'delete': 'destroy'})),
83 url(r'^admin/perms/?',
84 views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),
85 url(r'^certs/(?P<common_name>[-_.\w]+)/?'.format(settings.APP_URL_REGEX),
86 views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),
87 url(r'^certs/?',
88 views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),
89 # list users
90 url(r'^users/', views.UserView.as_view({'get': 'list'})),
91 )
92
[end of controller/api/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/controller/api/urls.py b/controller/api/urls.py
--- a/controller/api/urls.py
+++ b/controller/api/urls.py
@@ -82,7 +82,7 @@
views.AdminPermsViewSet.as_view({'delete': 'destroy'})),
url(r'^admin/perms/?',
views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),
- url(r'^certs/(?P<common_name>[-_.\w]+)/?'.format(settings.APP_URL_REGEX),
+ url(r'^certs/(?P<common_name>[-_*.\w]+)/?'.format(settings.APP_URL_REGEX),
views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),
url(r'^certs/?',
views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),
|
{"golden_diff": "diff --git a/controller/api/urls.py b/controller/api/urls.py\n--- a/controller/api/urls.py\n+++ b/controller/api/urls.py\n@@ -82,7 +82,7 @@\n views.AdminPermsViewSet.as_view({'delete': 'destroy'})),\n url(r'^admin/perms/?',\n views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),\n- url(r'^certs/(?P<common_name>[-_.\\w]+)/?'.format(settings.APP_URL_REGEX),\n+ url(r'^certs/(?P<common_name>[-_*.\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^certs/?',\n views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),\n", "issue": "Error with `deis certs:remove`\nGetting the following error when trying to remove a cert.\n\n```\n$ deis certs:remove '*.brandfolder.com'\nRemoving *.brandfolder.com... 405 METHOD NOT ALLOWED\nDetail:\nMethod 'DELETE' not allowed.\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.conf import settings\nfrom django.conf.urls import include, patterns, url\n\nfrom api import routers, views\n\n\nrouter = routers.ApiRouter()\n\n# Add the generated REST URLs and login/logout endpoint\nurlpatterns = patterns(\n '',\n url(r'^', include(router.urls)),\n # application release components\n url(r'^apps/(?P<id>{})/config/?'.format(settings.APP_URL_REGEX),\n views.ConfigViewSet.as_view({'get': 'retrieve', 'post': 'create'})),\n url(r'^apps/(?P<id>{})/builds/(?P<uuid>[-_\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.BuildViewSet.as_view({'get': 'retrieve'})),\n url(r'^apps/(?P<id>{})/builds/?'.format(settings.APP_URL_REGEX),\n views.BuildViewSet.as_view({'get': 'list', 'post': 'create'})),\n url(r'^apps/(?P<id>{})/releases/v(?P<version>[0-9]+)/?'.format(settings.APP_URL_REGEX),\n views.ReleaseViewSet.as_view({'get': 'retrieve'})),\n url(r'^apps/(?P<id>{})/releases/rollback/?'.format(settings.APP_URL_REGEX),\n views.ReleaseViewSet.as_view({'post': 'rollback'})),\n url(r'^apps/(?P<id>{})/releases/?'.format(settings.APP_URL_REGEX),\n views.ReleaseViewSet.as_view({'get': 'list'})),\n # application infrastructure\n url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\\w]+)/(?P<num>[-_\\w]+)/?'.format(\n settings.APP_URL_REGEX),\n views.ContainerViewSet.as_view({'get': 'retrieve'})),\n url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\\w.]+)/?'.format(settings.APP_URL_REGEX),\n views.ContainerViewSet.as_view({'get': 'list'})),\n url(r'^apps/(?P<id>{})/containers/?'.format(settings.APP_URL_REGEX),\n views.ContainerViewSet.as_view({'get': 'list'})),\n # application domains\n url(r'^apps/(?P<id>{})/domains/(?P<domain>[-\\._\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.DomainViewSet.as_view({'delete': 'destroy'})),\n url(r'^apps/(?P<id>{})/domains/?'.format(settings.APP_URL_REGEX),\n views.DomainViewSet.as_view({'post': 'create', 'get': 'list'})),\n # application actions\n url(r'^apps/(?P<id>{})/scale/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'post': 'scale'})),\n url(r'^apps/(?P<id>{})/logs/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'get': 'logs'})),\n url(r'^apps/(?P<id>{})/run/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'post': 'run'})),\n # apps sharing\n url(r'^apps/(?P<id>{})/perms/(?P<username>[-_\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.AppPermsViewSet.as_view({'delete': 'destroy'})),\n url(r'^apps/(?P<id>{})/perms/?'.format(settings.APP_URL_REGEX),\n views.AppPermsViewSet.as_view({'get': 'list', 'post': 'create'})),\n # apps base endpoint\n 
url(r'^apps/(?P<id>{})/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^apps/?',\n views.AppViewSet.as_view({'get': 'list', 'post': 'create'})),\n # key\n url(r'^keys/(?P<id>.+)/?',\n views.KeyViewSet.as_view({\n 'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^keys/?',\n views.KeyViewSet.as_view({'get': 'list', 'post': 'create'})),\n # hooks\n url(r'^hooks/push/?',\n views.PushHookViewSet.as_view({'post': 'create'})),\n url(r'^hooks/build/?',\n views.BuildHookViewSet.as_view({'post': 'create'})),\n url(r'^hooks/config/?',\n views.ConfigHookViewSet.as_view({'post': 'create'})),\n # authn / authz\n url(r'^auth/register/?',\n views.UserRegistrationViewSet.as_view({'post': 'create'})),\n url(r'^auth/cancel/?',\n views.UserManagementViewSet.as_view({'delete': 'destroy'})),\n url(r'^auth/passwd/?',\n views.UserManagementViewSet.as_view({'post': 'passwd'})),\n url(r'^auth/login/',\n 'rest_framework.authtoken.views.obtain_auth_token'),\n # admin sharing\n url(r'^admin/perms/(?P<username>[-_\\w]+)/?',\n views.AdminPermsViewSet.as_view({'delete': 'destroy'})),\n url(r'^admin/perms/?',\n views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),\n url(r'^certs/(?P<common_name>[-_.\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^certs/?',\n views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),\n # list users\n url(r'^users/', views.UserView.as_view({'get': 'list'})),\n)\n", "path": "controller/api/urls.py"}]}
| 1,975 | 189 |
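The one-character fix in the record above adds `*` to the `common_name` character class so wildcard certificate names route to the retrieve/destroy view at all. A quick way to see the difference, using the two patterns from the diff with the surrounding `.format(...)` call dropped (the pattern contains no placeholder anyway):

```python
import re

# Without '*' in the character class the wildcard name never matches this route,
# so the request falls through to the list/create route and DELETE returns 405.
old = re.compile(r'^certs/(?P<common_name>[-_.\w]+)/?')
new = re.compile(r'^certs/(?P<common_name>[-_*.\w]+)/?')

assert old.match('certs/*.brandfolder.com/') is None
assert new.match('certs/*.brandfolder.com/').group('common_name') == '*.brandfolder.com'
```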
gh_patches_debug_11545
|
rasdani/github-patches
|
git_diff
|
lisa-lab__pylearn2-1300
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test_video.test_spatiotemporal_cubes is randomly failing
The test `utils.tests.test_video.test_spatiotemporal_cubes` seems to be randomly failing on some Travis builds. Both @jych and @daemonmaker have experienced this. From [comments in the code](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/utils/tests/test_video.py#L27-L28) it seems like this is not the first time. Was a reason ever discovered?
test_video.test_spatiotemporal_cubes is randomly failing
The test `utils.tests.test_video.test_spatiotemporal_cubes` seems to be randomly failing on some Travis builds. Both @jych and @daemonmaker have experienced this. From [comments in the code](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/utils/tests/test_video.py#L27-L28) it seems like this is not the first time. Was a reason ever discovered?
</issue>
<code>
[start of pylearn2/utils/video.py]
1 """
2 Utilities for working with videos, pulling out patches, etc.
3 """
4 import numpy
5
6 from pylearn2.compat import OrderedDict
7 from pylearn2.utils.rng import make_np_rng
8
9 __author__ = "David Warde-Farley"
10 __copyright__ = "Copyright 2011, David Warde-Farley / Universite de Montreal"
11 __license__ = "BSD"
12 __maintainer__ = "David Warde-Farley"
13 __email__ = "wardefar@iro"
14 __all__ = ["get_video_dims", "spatiotemporal_cubes"]
15
16
17 def get_video_dims(fname):
18 """
19 Pull out the frame length, spatial height and spatial width of
20 a video file using ffmpeg.
21
22 Parameters
23 ----------
24 fname : str
25 Path to video file to be inspected.
26
27 Returns
28 -------
29 shape : tuple
30 The spatiotemporal dimensions of the video
31 (length, height, width).
32 """
33 try:
34 import pyffmpeg
35 except ImportError:
36 raise ImportError("This function requires pyffmpeg "
37 "<http://code.google.com/p/pyffmpeg/>")
38 mp = pyffmpeg.FFMpegReader()
39 try:
40 mp.open(fname)
41 tracks = mp.get_tracks()
42 for track in tracks:
43 if isinstance(track, pyffmpeg.VideoTrack):
44 break
45 else:
46 raise ValueError('no video track found')
47 return (track.duration(),) + track.get_orig_size()
48 finally:
49 mp.close()
50
51
52 class FrameLookup(object):
53 """
54 Class encapsulating the logic of turning a frame index into a
55 collection of files into the frame index of a specific video file.
56
57 Item-indexing on this object will yield a (filename, nframes, frame_no)
58 tuple, where nframes is the number of frames in the given file
59 (mainly for checking that we're far enough from the end so that we
60 can sample a big enough chunk).
61
62 Parameters
63 ----------
64 names_ang_lengths : WRITEME
65 """
66 def __init__(self, names_and_lengths):
67 self.files, self.lengths = zip(*names_and_lengths)
68 self.terminals = numpy.cumsum([s[1] for s in names_and_lengths])
69
70 def __getitem__(self, i):
71 idx = (i < self.terminals).nonzero()[0][0]
72 frame_no = i
73 if idx > 0:
74 frame_no -= self.terminals[idx - 1]
75 return self.files[idx], self.lengths[idx], frame_no
76
77 def __len__(self):
78 return self.terminals[-1]
79
80 def __iter__(self):
81 raise TypeError('iteration not supported')
82
83
84 def spatiotemporal_cubes(file_tuples, shape, n_patches=numpy.inf, rng=None):
85 """
86 Generator function that yields a stream of (filename, slicetuple)
87 representing a spatiotemporal patch of that file.
88
89 Parameters
90 ----------
91 file_tuples : list of tuples
92 Each element should be a 2-tuple consisting of a filename
93 (or arbitrary identifier) and a (length, height, width)
94 shape tuple of the dimensions (number of frames in the video,
95 height and width of each frame).
96
97 shape : tuple
98 A shape tuple consisting of the desired (length, height, width)
99 of each spatiotemporal patch.
100
101 n_patches : int, optional
102 The number of patches to generate. By default, generates patches
103 infinitely.
104
105 rng : RandomState object or seed, optional
106 The random number generator (or seed) to use. Defaults to None,
107 meaning it will be seeded from /dev/urandom or the clock.
108
109 Returns
110 -------
111 generator : generator object
112 A generator that yields a stream of (filename, slicetuple) tuples.
113 The slice tuple is such that it indexes into a 3D array containing
114 the entire clip with frames indexed along the first axis, rows
115 along the second and columns along the third.
116 """
117 frame_lookup = FrameLookup([(a, b[0]) for a, b in file_tuples])
118 file_lookup = OrderedDict(file_tuples)
119 patch_length, patch_height, patch_width = shape
120 done = 0
121 rng = make_np_rng(rng, which_method="random_integers")
122 while done < n_patches:
123 frame = rng.random_integers(0, len(frame_lookup) - 1)
124 filename, file_length, frame_no = frame_lookup[frame]
125 # Check that there is a contiguous block of frames starting at
126 # frame_no that is at least as long as our desired cube length.
127 if file_length - frame_no < patch_length:
128 continue
129 _, video_height, video_width = file_lookup[filename][:3]
130 # The last row and column in which a patch could "start" to still
131 # fall within frame.
132 last_row = video_height - patch_height
133 last_col = video_width - patch_width
134 row = numpy.random.random_integers(0, last_row)
135 col = numpy.random.random_integers(0, last_col)
136 patch_slice = (slice(frame_no, frame_no + patch_length),
137 slice(row, row + patch_height),
138 slice(col, col + patch_width))
139 done += 1
140 yield filename, patch_slice
141
[end of pylearn2/utils/video.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pylearn2/utils/video.py b/pylearn2/utils/video.py
--- a/pylearn2/utils/video.py
+++ b/pylearn2/utils/video.py
@@ -131,8 +131,8 @@
# fall within frame.
last_row = video_height - patch_height
last_col = video_width - patch_width
- row = numpy.random.random_integers(0, last_row)
- col = numpy.random.random_integers(0, last_col)
+ row = rng.random_integers(0, last_row)
+ col = rng.random_integers(0, last_col)
patch_slice = (slice(frame_no, frame_no + patch_length),
slice(row, row + patch_height),
slice(col, col + patch_width))
|
{"golden_diff": "diff --git a/pylearn2/utils/video.py b/pylearn2/utils/video.py\n--- a/pylearn2/utils/video.py\n+++ b/pylearn2/utils/video.py\n@@ -131,8 +131,8 @@\n # fall within frame.\n last_row = video_height - patch_height\n last_col = video_width - patch_width\n- row = numpy.random.random_integers(0, last_row)\n- col = numpy.random.random_integers(0, last_col)\n+ row = rng.random_integers(0, last_row)\n+ col = rng.random_integers(0, last_col)\n patch_slice = (slice(frame_no, frame_no + patch_length),\n slice(row, row + patch_height),\n slice(col, col + patch_width))\n", "issue": "test_video.test_spatiotemporal_cubes is randomly failing\nThe test `utils.tests.test_video.test_spatiotemporal_cubes` seems to be randomly failing on some Travis builds. Both @jych and @daemonmaker have experienced this. From [comments in the code](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/utils/tests/test_video.py#L27-L28) it seems like this is not the first time. Was a reason ever discovered?\n\ntest_video.test_spatiotemporal_cubes is randomly failing\nThe test `utils.tests.test_video.test_spatiotemporal_cubes` seems to be randomly failing on some Travis builds. Both @jych and @daemonmaker have experienced this. From [comments in the code](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/utils/tests/test_video.py#L27-L28) it seems like this is not the first time. Was a reason ever discovered?\n\n", "before_files": [{"content": "\"\"\"\nUtilities for working with videos, pulling out patches, etc.\n\"\"\"\nimport numpy\n\nfrom pylearn2.compat import OrderedDict\nfrom pylearn2.utils.rng import make_np_rng\n\n__author__ = \"David Warde-Farley\"\n__copyright__ = \"Copyright 2011, David Warde-Farley / Universite de Montreal\"\n__license__ = \"BSD\"\n__maintainer__ = \"David Warde-Farley\"\n__email__ = \"wardefar@iro\"\n__all__ = [\"get_video_dims\", \"spatiotemporal_cubes\"]\n\n\ndef get_video_dims(fname):\n \"\"\"\n Pull out the frame length, spatial height and spatial width of\n a video file using ffmpeg.\n\n Parameters\n ----------\n fname : str\n Path to video file to be inspected.\n\n Returns\n -------\n shape : tuple\n The spatiotemporal dimensions of the video\n (length, height, width).\n \"\"\"\n try:\n import pyffmpeg\n except ImportError:\n raise ImportError(\"This function requires pyffmpeg \"\n \"<http://code.google.com/p/pyffmpeg/>\")\n mp = pyffmpeg.FFMpegReader()\n try:\n mp.open(fname)\n tracks = mp.get_tracks()\n for track in tracks:\n if isinstance(track, pyffmpeg.VideoTrack):\n break\n else:\n raise ValueError('no video track found')\n return (track.duration(),) + track.get_orig_size()\n finally:\n mp.close()\n\n\nclass FrameLookup(object):\n \"\"\"\n Class encapsulating the logic of turning a frame index into a\n collection of files into the frame index of a specific video file.\n\n Item-indexing on this object will yield a (filename, nframes, frame_no)\n tuple, where nframes is the number of frames in the given file\n (mainly for checking that we're far enough from the end so that we\n can sample a big enough chunk).\n\n Parameters\n ----------\n names_ang_lengths : WRITEME\n \"\"\"\n def __init__(self, names_and_lengths):\n self.files, self.lengths = zip(*names_and_lengths)\n self.terminals = numpy.cumsum([s[1] for s in names_and_lengths])\n\n def __getitem__(self, i):\n idx = (i < self.terminals).nonzero()[0][0]\n frame_no = i\n if idx > 0:\n frame_no -= self.terminals[idx - 1]\n return self.files[idx], self.lengths[idx], frame_no\n\n def __len__(self):\n return 
self.terminals[-1]\n\n def __iter__(self):\n raise TypeError('iteration not supported')\n\n\ndef spatiotemporal_cubes(file_tuples, shape, n_patches=numpy.inf, rng=None):\n \"\"\"\n Generator function that yields a stream of (filename, slicetuple)\n representing a spatiotemporal patch of that file.\n\n Parameters\n ----------\n file_tuples : list of tuples\n Each element should be a 2-tuple consisting of a filename\n (or arbitrary identifier) and a (length, height, width)\n shape tuple of the dimensions (number of frames in the video,\n height and width of each frame).\n\n shape : tuple\n A shape tuple consisting of the desired (length, height, width)\n of each spatiotemporal patch.\n\n n_patches : int, optional\n The number of patches to generate. By default, generates patches\n infinitely.\n\n rng : RandomState object or seed, optional\n The random number generator (or seed) to use. Defaults to None,\n meaning it will be seeded from /dev/urandom or the clock.\n\n Returns\n -------\n generator : generator object\n A generator that yields a stream of (filename, slicetuple) tuples.\n The slice tuple is such that it indexes into a 3D array containing\n the entire clip with frames indexed along the first axis, rows\n along the second and columns along the third.\n \"\"\"\n frame_lookup = FrameLookup([(a, b[0]) for a, b in file_tuples])\n file_lookup = OrderedDict(file_tuples)\n patch_length, patch_height, patch_width = shape\n done = 0\n rng = make_np_rng(rng, which_method=\"random_integers\")\n while done < n_patches:\n frame = rng.random_integers(0, len(frame_lookup) - 1)\n filename, file_length, frame_no = frame_lookup[frame]\n # Check that there is a contiguous block of frames starting at\n # frame_no that is at least as long as our desired cube length.\n if file_length - frame_no < patch_length:\n continue\n _, video_height, video_width = file_lookup[filename][:3]\n # The last row and column in which a patch could \"start\" to still\n # fall within frame.\n last_row = video_height - patch_height\n last_col = video_width - patch_width\n row = numpy.random.random_integers(0, last_row)\n col = numpy.random.random_integers(0, last_col)\n patch_slice = (slice(frame_no, frame_no + patch_length),\n slice(row, row + patch_height),\n slice(col, col + patch_width))\n done += 1\n yield filename, patch_slice\n", "path": "pylearn2/utils/video.py"}]}
| 2,215 | 171 |
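The diff above makes the row/column draws go through the seeded `rng` instead of the module-level `numpy.random`, which is the likely source of the intermittent failures: a test that seeds `rng` still gets nondeterministic patches when two of the draws ignore that seed. A minimal reproducible-versus-global illustration (using `randint` here, since `random_integers` is deprecated in modern NumPy):

```python
import numpy

# A dedicated, seeded RandomState is reproducible run after run; module-level
# numpy.random depends on hidden global state, so mixing the two breaks tests
# that only seed the RandomState they pass in.
def draws(seed):
    rng = numpy.random.RandomState(seed)
    return rng.randint(0, 11, size=3).tolist()

assert draws(0) == draws(0)                      # same seed, same patch coordinates
print(numpy.random.randint(0, 11, size=3))       # global state: varies across runs
```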
gh_patches_debug_32647
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1257
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add statistics information for individual checks (especially time of execution)
I would love to have an option to add some statistics to be displayed for each check. This could be very useful for running pre-commit in CI. I recently had a case of a check that accidentally ran a bit longer than expected (it was building an unnecessary docker image) and I had not realised it was taking longer - because we had other checks after it and I could only see the total execution time.
Also, some other useful information we might see is how many parallel processes were run and how many files were passed as parameters. This could be really useful, especially for people who do not understand that pre-commit runs in parallel by default - this can have some undesirable side effects if you forget to set "require_serial" to True when you need it. And seeing that in the CI output might immediately show that something is wrong. For now the output is a bit of a "black box".
An option to see some additional information (`--add-statistics`) might be super-useful.
Example output I imagine (maybe a bit better aligned):
```
Check if image build is needed...........................................Passed [23.5s, 1 process, no-files]
Add licence for all JINJA template files.................................Passed [2.25s, 8 processes, 48 files]
Check Shell scripts syntax correctness...................................Passed [6.00s, 8 processes, 10 files]
Lint dockerfile..........................................................Passed [2.20s, 3 processes, 3 files]
Run mypy.................................................................Passed [12.00s, 8 processes, 1456 files]
Run pylint for main sources.............................................Skipped
Run pylint for tests....................................................Skipped
Run flake8...............................................................Passed [24.05s, 8 processes, 1456 files]
```
</issue>
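As a rough illustration of the measurement this request asks for (a hypothetical sketch, not pre-commit's actual API; `run_hook` below is a stand-in for whatever callable executes a single hook), per-hook wall-clock time can be captured by timing that call:

```python
import time

def run_with_duration(run_hook, *args, **kwargs):
    """Call run_hook and return its result plus the elapsed seconds."""
    start = time.time()
    result = run_hook(*args, **kwargs)
    return result, round(time.time() - start, 2)

# Placeholder hook standing in for a real check:
result, seconds = run_with_duration(lambda: sum(range(1_000_000)))
print(result, seconds)
```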
<code>
[start of pre_commit/commands/run.py]
1 from __future__ import unicode_literals
2
3 import logging
4 import os
5 import re
6 import subprocess
7
8 from identify.identify import tags_from_path
9
10 from pre_commit import color
11 from pre_commit import git
12 from pre_commit import output
13 from pre_commit.clientlib import load_config
14 from pre_commit.output import get_hook_message
15 from pre_commit.repository import all_hooks
16 from pre_commit.repository import install_hook_envs
17 from pre_commit.staged_files_only import staged_files_only
18 from pre_commit.util import cmd_output_b
19 from pre_commit.util import noop_context
20
21
22 logger = logging.getLogger('pre_commit')
23
24
25 def filter_by_include_exclude(names, include, exclude):
26 include_re, exclude_re = re.compile(include), re.compile(exclude)
27 return [
28 filename for filename in names
29 if include_re.search(filename)
30 if not exclude_re.search(filename)
31 ]
32
33
34 class Classifier(object):
35 def __init__(self, filenames):
36 # on windows we normalize all filenames to use forward slashes
37 # this makes it easier to filter using the `files:` regex
38 # this also makes improperly quoted shell-based hooks work better
39 # see #1173
40 if os.altsep == '/' and os.sep == '\\':
41 filenames = (f.replace(os.sep, os.altsep) for f in filenames)
42 self.filenames = [f for f in filenames if os.path.lexists(f)]
43 self._types_cache = {}
44
45 def _types_for_file(self, filename):
46 try:
47 return self._types_cache[filename]
48 except KeyError:
49 ret = self._types_cache[filename] = tags_from_path(filename)
50 return ret
51
52 def by_types(self, names, types, exclude_types):
53 types, exclude_types = frozenset(types), frozenset(exclude_types)
54 ret = []
55 for filename in names:
56 tags = self._types_for_file(filename)
57 if tags >= types and not tags & exclude_types:
58 ret.append(filename)
59 return ret
60
61 def filenames_for_hook(self, hook):
62 names = self.filenames
63 names = filter_by_include_exclude(names, hook.files, hook.exclude)
64 names = self.by_types(names, hook.types, hook.exclude_types)
65 return names
66
67
68 def _get_skips(environ):
69 skips = environ.get('SKIP', '')
70 return {skip.strip() for skip in skips.split(',') if skip.strip()}
71
72
73 SKIPPED = 'Skipped'
74 NO_FILES = '(no files to check)'
75
76
77 def _subtle_line(s, use_color):
78 output.write_line(color.format_color(s, color.SUBTLE, use_color))
79
80
81 def _run_single_hook(classifier, hook, skips, cols, verbose, use_color):
82 filenames = classifier.filenames_for_hook(hook)
83
84 if hook.language == 'pcre':
85 logger.warning(
86 '`{}` (from {}) uses the deprecated pcre language.\n'
87 'The pcre language is scheduled for removal in pre-commit 2.x.\n'
88 'The pygrep language is a more portable (and usually drop-in) '
89 'replacement.'.format(hook.id, hook.src),
90 )
91
92 if hook.id in skips or hook.alias in skips:
93 output.write(
94 get_hook_message(
95 hook.name,
96 end_msg=SKIPPED,
97 end_color=color.YELLOW,
98 use_color=use_color,
99 cols=cols,
100 ),
101 )
102 retcode = 0
103 files_modified = False
104 out = b''
105 elif not filenames and not hook.always_run:
106 output.write(
107 get_hook_message(
108 hook.name,
109 postfix=NO_FILES,
110 end_msg=SKIPPED,
111 end_color=color.TURQUOISE,
112 use_color=use_color,
113 cols=cols,
114 ),
115 )
116 retcode = 0
117 files_modified = False
118 out = b''
119 else:
120 # print hook and dots first in case the hook takes a while to run
121 output.write(get_hook_message(hook.name, end_len=6, cols=cols))
122
123 diff_cmd = ('git', 'diff', '--no-ext-diff')
124 diff_before = cmd_output_b(*diff_cmd, retcode=None)
125 filenames = tuple(filenames) if hook.pass_filenames else ()
126 retcode, out = hook.run(filenames, use_color)
127 diff_after = cmd_output_b(*diff_cmd, retcode=None)
128
129 # if the hook makes changes, fail the commit
130 files_modified = diff_before != diff_after
131
132 if retcode or files_modified:
133 print_color = color.RED
134 status = 'Failed'
135 else:
136 print_color = color.GREEN
137 status = 'Passed'
138
139 output.write_line(color.format_color(status, print_color, use_color))
140
141 if verbose or hook.verbose or retcode or files_modified:
142 _subtle_line('- hook id: {}'.format(hook.id), use_color)
143
144 if retcode:
145 _subtle_line('- exit code: {}'.format(retcode), use_color)
146
147 # Print a message if failing due to file modifications
148 if files_modified:
149 _subtle_line('- files were modified by this hook', use_color)
150
151 if out.strip():
152 output.write_line()
153 output.write_line(out.strip(), logfile_name=hook.log_file)
154 output.write_line()
155
156 return files_modified or bool(retcode)
157
158
159 def _compute_cols(hooks):
160 """Compute the number of columns to display hook messages. The widest
161 that will be displayed is in the no files skipped case:
162
163 Hook name...(no files to check) Skipped
164 """
165 if hooks:
166 name_len = max(len(hook.name) for hook in hooks)
167 else:
168 name_len = 0
169
170 cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)
171 return max(cols, 80)
172
173
174 def _all_filenames(args):
175 if args.origin and args.source:
176 return git.get_changed_files(args.origin, args.source)
177 elif args.hook_stage in {'prepare-commit-msg', 'commit-msg'}:
178 return (args.commit_msg_filename,)
179 elif args.files:
180 return args.files
181 elif args.all_files:
182 return git.get_all_files()
183 elif git.is_in_merge_conflict():
184 return git.get_conflicted_files()
185 else:
186 return git.get_staged_files()
187
188
189 def _run_hooks(config, hooks, args, environ):
190 """Actually run the hooks."""
191 skips = _get_skips(environ)
192 cols = _compute_cols(hooks)
193 filenames = _all_filenames(args)
194 filenames = filter_by_include_exclude(
195 filenames, config['files'], config['exclude'],
196 )
197 classifier = Classifier(filenames)
198 retval = 0
199 for hook in hooks:
200 retval |= _run_single_hook(
201 classifier, hook, skips, cols,
202 verbose=args.verbose, use_color=args.color,
203 )
204 if retval and config['fail_fast']:
205 break
206 if retval and args.show_diff_on_failure and git.has_diff():
207 if args.all_files:
208 output.write_line(
209 'pre-commit hook(s) made changes.\n'
210 'If you are seeing this message in CI, '
211 'reproduce locally with: `pre-commit run --all-files`.\n'
212 'To run `pre-commit` as part of git workflow, use '
213 '`pre-commit install`.',
214 )
215 output.write_line('All changes made by hooks:')
216 # args.color is a boolean.
217 # See user_color function in color.py
218 subprocess.call((
219 'git', '--no-pager', 'diff', '--no-ext-diff',
220 '--color={}'.format({True: 'always', False: 'never'}[args.color]),
221 ))
222
223 return retval
224
225
226 def _has_unmerged_paths():
227 _, stdout, _ = cmd_output_b('git', 'ls-files', '--unmerged')
228 return bool(stdout.strip())
229
230
231 def _has_unstaged_config(config_file):
232 retcode, _, _ = cmd_output_b(
233 'git', 'diff', '--no-ext-diff', '--exit-code', config_file,
234 retcode=None,
235 )
236 # be explicit, other git errors don't mean it has an unstaged config.
237 return retcode == 1
238
239
240 def run(config_file, store, args, environ=os.environ):
241 no_stash = args.all_files or bool(args.files)
242
243 # Check if we have unresolved merge conflict files and fail fast.
244 if _has_unmerged_paths():
245 logger.error('Unmerged files. Resolve before committing.')
246 return 1
247 if bool(args.source) != bool(args.origin):
248 logger.error('Specify both --origin and --source.')
249 return 1
250 if _has_unstaged_config(config_file) and not no_stash:
251 logger.error(
252 'Your pre-commit configuration is unstaged.\n'
253 '`git add {}` to fix this.'.format(config_file),
254 )
255 return 1
256
257 # Expose origin / source as environment variables for hooks to consume
258 if args.origin and args.source:
259 environ['PRE_COMMIT_ORIGIN'] = args.origin
260 environ['PRE_COMMIT_SOURCE'] = args.source
261
262 if no_stash:
263 ctx = noop_context()
264 else:
265 ctx = staged_files_only(store.directory)
266
267 with ctx:
268 config = load_config(config_file)
269 hooks = [
270 hook
271 for hook in all_hooks(config, store)
272 if not args.hook or hook.id == args.hook or hook.alias == args.hook
273 if args.hook_stage in hook.stages
274 ]
275
276 if args.hook and not hooks:
277 output.write_line(
278 'No hook with id `{}` in stage `{}`'.format(
279 args.hook, args.hook_stage,
280 ),
281 )
282 return 1
283
284 install_hook_envs(hooks, store)
285
286 return _run_hooks(config, hooks, args, environ)
287
[end of pre_commit/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -4,6 +4,7 @@
import os
import re
import subprocess
+import time
from identify.identify import tags_from_path
@@ -99,6 +100,7 @@
cols=cols,
),
)
+ duration = None
retcode = 0
files_modified = False
out = b''
@@ -113,6 +115,7 @@
cols=cols,
),
)
+ duration = None
retcode = 0
files_modified = False
out = b''
@@ -123,7 +126,9 @@
diff_cmd = ('git', 'diff', '--no-ext-diff')
diff_before = cmd_output_b(*diff_cmd, retcode=None)
filenames = tuple(filenames) if hook.pass_filenames else ()
+ time_before = time.time()
retcode, out = hook.run(filenames, use_color)
+ duration = round(time.time() - time_before, 2) or 0
diff_after = cmd_output_b(*diff_cmd, retcode=None)
# if the hook makes changes, fail the commit
@@ -141,6 +146,9 @@
if verbose or hook.verbose or retcode or files_modified:
_subtle_line('- hook id: {}'.format(hook.id), use_color)
+ if (verbose or hook.verbose) and duration is not None:
+ _subtle_line('- duration: {}s'.format(duration), use_color)
+
if retcode:
_subtle_line('- exit code: {}'.format(retcode), use_color)
|
{"golden_diff": "diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py\n--- a/pre_commit/commands/run.py\n+++ b/pre_commit/commands/run.py\n@@ -4,6 +4,7 @@\n import os\n import re\n import subprocess\n+import time\n \n from identify.identify import tags_from_path\n \n@@ -99,6 +100,7 @@\n cols=cols,\n ),\n )\n+ duration = None\n retcode = 0\n files_modified = False\n out = b''\n@@ -113,6 +115,7 @@\n cols=cols,\n ),\n )\n+ duration = None\n retcode = 0\n files_modified = False\n out = b''\n@@ -123,7 +126,9 @@\n diff_cmd = ('git', 'diff', '--no-ext-diff')\n diff_before = cmd_output_b(*diff_cmd, retcode=None)\n filenames = tuple(filenames) if hook.pass_filenames else ()\n+ time_before = time.time()\n retcode, out = hook.run(filenames, use_color)\n+ duration = round(time.time() - time_before, 2) or 0\n diff_after = cmd_output_b(*diff_cmd, retcode=None)\n \n # if the hook makes changes, fail the commit\n@@ -141,6 +146,9 @@\n if verbose or hook.verbose or retcode or files_modified:\n _subtle_line('- hook id: {}'.format(hook.id), use_color)\n \n+ if (verbose or hook.verbose) and duration is not None:\n+ _subtle_line('- duration: {}s'.format(duration), use_color)\n+\n if retcode:\n _subtle_line('- exit code: {}'.format(retcode), use_color)\n", "issue": "Add statistics information for individual checks (especially time of execution)\nI would love to have an option to add some statistics to be displayed for each check. This could be very useful for running pre-commit in CI. I had a case recently of a check that run accidentally a bit longer than expected (it was building unnecessary docker image) and I have not realised that it is taking longer - because we had other checks after and I could only see the total execution time.\r\n\r\nAlso some other useful information that we might see is how many parallel processes were run and how many files were passed as parameters. This might be really useful especially for people who do not understand that pre-commit runs in parallel by default - this can have some undesireable side effects if you forget to set \"require_serial\" to True when you need. And seeing that in the output of CI might immediately show that something is wrong. 
For now the output is a bit of \"black-box\".\r\n\r\nAn option to see some additional information (`--add-statistics`) might be super-useful.\r\n\r\nExample output I imagine (maybe a bit better aligned):\r\n\r\n```\r\nCheck if image build is needed...........................................Passed [23.5s, 1 process, no-files] \r\nAdd licence for all JINJA template files.................................Passed [2.25s, 8 processes, 48 files]\r\nCheck Shell scripts syntax correctness...................................Passed [6.00s, 8 processes, 10 files]\r\nLint dockerfile..........................................................Passed [2.20s, 3 processes, 3 files]\r\nRun mypy.................................................................Passed [12.00s, 8 processes, 1456 files]\r\nRun pylint for main sources.............................................Skipped \r\nRun pylint for tests....................................................Skipped\r\nRun flake8...............................................................Passed [24.05s, 8 processes, 1456 files]\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nimport subprocess\n\nfrom identify.identify import tags_from_path\n\nfrom pre_commit import color\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef filter_by_include_exclude(names, include, exclude):\n include_re, exclude_re = re.compile(include), re.compile(exclude)\n return [\n filename for filename in names\n if include_re.search(filename)\n if not exclude_re.search(filename)\n ]\n\n\nclass Classifier(object):\n def __init__(self, filenames):\n # on windows we normalize all filenames to use forward slashes\n # this makes it easier to filter using the `files:` regex\n # this also makes improperly quoted shell-based hooks work better\n # see #1173\n if os.altsep == '/' and os.sep == '\\\\':\n filenames = (f.replace(os.sep, os.altsep) for f in filenames)\n self.filenames = [f for f in filenames if os.path.lexists(f)]\n self._types_cache = {}\n\n def _types_for_file(self, filename):\n try:\n return self._types_cache[filename]\n except KeyError:\n ret = self._types_cache[filename] = tags_from_path(filename)\n return ret\n\n def by_types(self, names, types, exclude_types):\n types, exclude_types = frozenset(types), frozenset(exclude_types)\n ret = []\n for filename in names:\n tags = self._types_for_file(filename)\n if tags >= types and not tags & exclude_types:\n ret.append(filename)\n return ret\n\n def filenames_for_hook(self, hook):\n names = self.filenames\n names = filter_by_include_exclude(names, hook.files, hook.exclude)\n names = self.by_types(names, hook.types, hook.exclude_types)\n return names\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return {skip.strip() for skip in skips.split(',') if skip.strip()}\n\n\nSKIPPED = 'Skipped'\nNO_FILES = '(no files to check)'\n\n\ndef _subtle_line(s, use_color):\n output.write_line(color.format_color(s, color.SUBTLE, use_color))\n\n\ndef _run_single_hook(classifier, hook, skips, cols, verbose, use_color):\n filenames = classifier.filenames_for_hook(hook)\n\n if 
hook.language == 'pcre':\n logger.warning(\n '`{}` (from {}) uses the deprecated pcre language.\\n'\n 'The pcre language is scheduled for removal in pre-commit 2.x.\\n'\n 'The pygrep language is a more portable (and usually drop-in) '\n 'replacement.'.format(hook.id, hook.src),\n )\n\n if hook.id in skips or hook.alias in skips:\n output.write(\n get_hook_message(\n hook.name,\n end_msg=SKIPPED,\n end_color=color.YELLOW,\n use_color=use_color,\n cols=cols,\n ),\n )\n retcode = 0\n files_modified = False\n out = b''\n elif not filenames and not hook.always_run:\n output.write(\n get_hook_message(\n hook.name,\n postfix=NO_FILES,\n end_msg=SKIPPED,\n end_color=color.TURQUOISE,\n use_color=use_color,\n cols=cols,\n ),\n )\n retcode = 0\n files_modified = False\n out = b''\n else:\n # print hook and dots first in case the hook takes a while to run\n output.write(get_hook_message(hook.name, end_len=6, cols=cols))\n\n diff_cmd = ('git', 'diff', '--no-ext-diff')\n diff_before = cmd_output_b(*diff_cmd, retcode=None)\n filenames = tuple(filenames) if hook.pass_filenames else ()\n retcode, out = hook.run(filenames, use_color)\n diff_after = cmd_output_b(*diff_cmd, retcode=None)\n\n # if the hook makes changes, fail the commit\n files_modified = diff_before != diff_after\n\n if retcode or files_modified:\n print_color = color.RED\n status = 'Failed'\n else:\n print_color = color.GREEN\n status = 'Passed'\n\n output.write_line(color.format_color(status, print_color, use_color))\n\n if verbose or hook.verbose or retcode or files_modified:\n _subtle_line('- hook id: {}'.format(hook.id), use_color)\n\n if retcode:\n _subtle_line('- exit code: {}'.format(retcode), use_color)\n\n # Print a message if failing due to file modifications\n if files_modified:\n _subtle_line('- files were modified by this hook', use_color)\n\n if out.strip():\n output.write_line()\n output.write_line(out.strip(), logfile_name=hook.log_file)\n output.write_line()\n\n return files_modified or bool(retcode)\n\n\ndef _compute_cols(hooks):\n \"\"\"Compute the number of columns to display hook messages. 
The widest\n that will be displayed is in the no files skipped case:\n\n Hook name...(no files to check) Skipped\n \"\"\"\n if hooks:\n name_len = max(len(hook.name) for hook in hooks)\n else:\n name_len = 0\n\n cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)\n return max(cols, 80)\n\n\ndef _all_filenames(args):\n if args.origin and args.source:\n return git.get_changed_files(args.origin, args.source)\n elif args.hook_stage in {'prepare-commit-msg', 'commit-msg'}:\n return (args.commit_msg_filename,)\n elif args.files:\n return args.files\n elif args.all_files:\n return git.get_all_files()\n elif git.is_in_merge_conflict():\n return git.get_conflicted_files()\n else:\n return git.get_staged_files()\n\n\ndef _run_hooks(config, hooks, args, environ):\n \"\"\"Actually run the hooks.\"\"\"\n skips = _get_skips(environ)\n cols = _compute_cols(hooks)\n filenames = _all_filenames(args)\n filenames = filter_by_include_exclude(\n filenames, config['files'], config['exclude'],\n )\n classifier = Classifier(filenames)\n retval = 0\n for hook in hooks:\n retval |= _run_single_hook(\n classifier, hook, skips, cols,\n verbose=args.verbose, use_color=args.color,\n )\n if retval and config['fail_fast']:\n break\n if retval and args.show_diff_on_failure and git.has_diff():\n if args.all_files:\n output.write_line(\n 'pre-commit hook(s) made changes.\\n'\n 'If you are seeing this message in CI, '\n 'reproduce locally with: `pre-commit run --all-files`.\\n'\n 'To run `pre-commit` as part of git workflow, use '\n '`pre-commit install`.',\n )\n output.write_line('All changes made by hooks:')\n # args.color is a boolean.\n # See user_color function in color.py\n subprocess.call((\n 'git', '--no-pager', 'diff', '--no-ext-diff',\n '--color={}'.format({True: 'always', False: 'never'}[args.color]),\n ))\n\n return retval\n\n\ndef _has_unmerged_paths():\n _, stdout, _ = cmd_output_b('git', 'ls-files', '--unmerged')\n return bool(stdout.strip())\n\n\ndef _has_unstaged_config(config_file):\n retcode, _, _ = cmd_output_b(\n 'git', 'diff', '--no-ext-diff', '--exit-code', config_file,\n retcode=None,\n )\n # be explicit, other git errors don't mean it has an unstaged config.\n return retcode == 1\n\n\ndef run(config_file, store, args, environ=os.environ):\n no_stash = args.all_files or bool(args.files)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths():\n logger.error('Unmerged files. 
Resolve before committing.')\n return 1\n if bool(args.source) != bool(args.origin):\n logger.error('Specify both --origin and --source.')\n return 1\n if _has_unstaged_config(config_file) and not no_stash:\n logger.error(\n 'Your pre-commit configuration is unstaged.\\n'\n '`git add {}` to fix this.'.format(config_file),\n )\n return 1\n\n # Expose origin / source as environment variables for hooks to consume\n if args.origin and args.source:\n environ['PRE_COMMIT_ORIGIN'] = args.origin\n environ['PRE_COMMIT_SOURCE'] = args.source\n\n if no_stash:\n ctx = noop_context()\n else:\n ctx = staged_files_only(store.directory)\n\n with ctx:\n config = load_config(config_file)\n hooks = [\n hook\n for hook in all_hooks(config, store)\n if not args.hook or hook.id == args.hook or hook.alias == args.hook\n if args.hook_stage in hook.stages\n ]\n\n if args.hook and not hooks:\n output.write_line(\n 'No hook with id `{}` in stage `{}`'.format(\n args.hook, args.hook_stage,\n ),\n )\n return 1\n\n install_hook_envs(hooks, store)\n\n return _run_hooks(config, hooks, args, environ)\n", "path": "pre_commit/commands/run.py"}]}
| 3,822 | 390 |
gh_patches_debug_10483 | rasdani/github-patches | git_diff | piskvorky__gensim-2689 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LevenshteinSimilarityIndex fails when called from SparseTermSimilarityMatrix
<!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
When trying to build a SparseTermSimilarityMatrix using Levenshtein it fails.
#### Steps/code/corpus to reproduce
```python
index = LevenshteinSimilarityIndex(dictionary)
SparseTermSimilarityMatrix(index, dictionary) # <- fails here
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-59-c16b89564835> in <module>
----> 1 similarity_matrix1 = SparseTermSimilarityMatrix(similarity_index1, dictionary)
~/.local/share/virtualenvs/pdftagger-LHy_2RHk/lib/python3.6/site-packages/gensim/similarities/termsim.py in __init__(self, source, dictionary, tfidf, symmetric, positive_definite, nonzero_limit, dtype)
234 for term, similarity in index.most_similar(t1, topn=num_rows)
235 if term in dictionary.token2id
--> 236 ] if num_rows > 0 else []
237
238 if tfidf is None:
~/.local/share/virtualenvs/pdftagger-LHy_2RHk/lib/python3.6/site-packages/gensim/similarities/levenshtein.py in most_similar(self, t1, topn)
151 if similarity > 0
152 )
--> 153 return islice(most_similar, topn)
ValueError: Stop argument for islice() must be None or an integer: 0 <= x <= sys.maxsize.
```
#### Versions
Linux-4.18.0-25-generic-x86_64-with-debian-buster-sid
Python 3.6.9 (default, Nov 18 2019, 15:20:23)
[GCC 8.3.0]
NumPy 1.17.4
SciPy 1.3.3
gensim 3.8.1
FAST_VERSION 1
</issue>
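For context on the `ValueError` above: `itertools.islice` accepts only `None` or a non-negative integer as its stop argument, so a `topn` that arrives as a float (for example after NumPy arithmetic) has to be cast first, which is what the fix below does. A minimal illustration:

```python
from itertools import islice

topn = 10.0  # a float stop, as can happen when topn is computed rather than hard-coded

try:
    list(islice(range(100), topn))
except ValueError as err:
    print(err)  # Stop argument for islice() must be None or an integer ...

print(list(islice(range(100), int(topn))))  # casting to int fixes it
```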
<code>
[start of gensim/similarities/levenshtein.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) 2018 Vit Novotny <[email protected]>
5 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
6
7 """
8 This module provides a namespace for functions that use the Levenshtein distance.
9 """
10
11 from itertools import islice
12 import logging
13 from math import floor
14
15 from gensim.similarities.termsim import TermSimilarityIndex
16
17 logger = logging.getLogger(__name__)
18
19
20 def levdist(t1, t2, max_distance=float("inf")):
21 """Get the Levenshtein distance between two terms.
22
23 Return the Levenshtein distance between two terms. The distance is a
24 number between <1.0, inf>, higher is less similar.
25
26 Parameters
27 ----------
28 t1 : {bytes, str, unicode}
29 The first compared term.
30 t2 : {bytes, str, unicode}
31 The second compared term.
32 max_distance : {int, float}, optional
33 If you don't care about distances larger than a known threshold, a more
34 efficient code path can be taken. For terms that are clearly "too far
35 apart", we will not compute the distance exactly, but we will return
36 `max(len(t1), len(t2))` more quickly, meaning "more than
37 `max_distance`".
38 Default: always compute distance exactly, no threshold clipping.
39
40 Returns
41 -------
42 int
43 The Levenshtein distance between `t1` and `t2`.
44
45 """
46 import Levenshtein
47
48 distance = Levenshtein.distance(t1, t2)
49 if distance > max_distance:
50 return max(len(t1), len(t2))
51 return distance
52
53
54 def levsim(t1, t2, alpha=1.8, beta=5.0, min_similarity=0.0):
55 """Get the Levenshtein similarity between two terms.
56
57 Return the Levenshtein similarity between two terms. The similarity is a
58 number between <0.0, 1.0>, higher is more similar.
59
60 Parameters
61 ----------
62 t1 : {bytes, str, unicode}
63 The first compared term.
64 t2 : {bytes, str, unicode}
65 The second compared term.
66 alpha : float, optional
67 The multiplicative factor alpha defined by Charlet and Damnati (2017).
68 beta : float, optional
69 The exponential factor beta defined by Charlet and Damnati (2017).
70 min_similarity : {int, float}, optional
71 If you don't care about similarities smaller than a known threshold, a
72 more efficient code path can be taken. For terms that are clearly "too
73 far apart", we will not compute the distance exactly, but we will
74 return zero more quickly, meaning "less than `min_similarity`".
75 Default: always compute similarity exactly, no threshold clipping.
76
77 Returns
78 -------
79 float
80 The Levenshtein similarity between `t1` and `t2`.
81
82 Notes
83 -----
84 This notion of Levenshtein similarity was first defined in section 2.2 of
85 `Delphine Charlet and Geraldine Damnati, "SimBow at SemEval-2017 Task 3:
86 Soft-Cosine Semantic Similarity between Questions for Community Question
87 Answering", 2017 <http://www.aclweb.org/anthology/S/S17/S17-2051.pdf>`_.
88
89 """
90 assert alpha >= 0
91 assert beta >= 0
92
93 max_lengths = max(len(t1), len(t2))
94 if max_lengths == 0:
95 return 1.0
96
97 min_similarity = float(max(min(min_similarity, 1.0), 0.0))
98 max_distance = int(floor(max_lengths * (1 - (min_similarity / alpha) ** (1 / beta))))
99 distance = levdist(t1, t2, max_distance)
100 similarity = alpha * (1 - distance * 1.0 / max_lengths)**beta
101 return similarity
102
103
104 class LevenshteinSimilarityIndex(TermSimilarityIndex):
105 """
106 Computes Levenshtein similarities between terms and retrieves most similar
107 terms for a given term.
108
109 Notes
110 -----
111 This is a naive implementation that iteratively computes pointwise Levenshtein similarities
112 between individual terms. Using this implementation to compute the similarity of all terms in
113 real-world dictionaries such as the English Wikipedia will take years.
114
115 Parameters
116 ----------
117 dictionary : :class:`~gensim.corpora.dictionary.Dictionary`
118 A dictionary that specifies the considered terms.
119 alpha : float, optional
120 The multiplicative factor alpha defined by Charlet and Damnati (2017).
121 beta : float, optional
122 The exponential factor beta defined by Charlet and Damnati (2017).
123 threshold : float, optional
124 Only terms more similar than `threshold` are considered when retrieving
125 the most similar terms for a given term.
126
127 See Also
128 --------
129 :func:`gensim.similarities.levenshtein.levsim`
130 The Levenshtein similarity.
131 :class:`~gensim.similarities.termsim.SparseTermSimilarityMatrix`
132 Build a term similarity matrix and compute the Soft Cosine Measure.
133
134 """
135 def __init__(self, dictionary, alpha=1.8, beta=5.0, threshold=0.0):
136 self.dictionary = dictionary
137 self.alpha = alpha
138 self.beta = beta
139 self.threshold = threshold
140 super(LevenshteinSimilarityIndex, self).__init__()
141
142 def most_similar(self, t1, topn=10):
143 similarities = (
144 (levsim(t1, t2, self.alpha, self.beta, self.threshold), t2)
145 for t2 in self.dictionary.values()
146 if t1 != t2
147 )
148 most_similar = (
149 (t2, similarity)
150 for (similarity, t2) in sorted(similarities, reverse=True)
151 if similarity > 0
152 )
153 return islice(most_similar, topn)
154
[end of gensim/similarities/levenshtein.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gensim/similarities/levenshtein.py b/gensim/similarities/levenshtein.py
--- a/gensim/similarities/levenshtein.py
+++ b/gensim/similarities/levenshtein.py
@@ -8,7 +8,7 @@
This module provides a namespace for functions that use the Levenshtein distance.
"""
-from itertools import islice
+import itertools
import logging
from math import floor
@@ -150,4 +150,4 @@
for (similarity, t2) in sorted(similarities, reverse=True)
if similarity > 0
)
- return islice(most_similar, topn)
+ return itertools.islice(most_similar, int(topn))
|
{"golden_diff": "diff --git a/gensim/similarities/levenshtein.py b/gensim/similarities/levenshtein.py\n--- a/gensim/similarities/levenshtein.py\n+++ b/gensim/similarities/levenshtein.py\n@@ -8,7 +8,7 @@\n This module provides a namespace for functions that use the Levenshtein distance.\n \"\"\"\n \n-from itertools import islice\n+import itertools\n import logging\n from math import floor\n \n@@ -150,4 +150,4 @@\n for (similarity, t2) in sorted(similarities, reverse=True)\n if similarity > 0\n )\n- return islice(most_similar, topn)\n+ return itertools.islice(most_similar, int(topn))\n", "issue": "LeveshteinSimilarityIndex fails when called from SparseTermSimilarityMatrix\n<!--\r\n**IMPORTANT**:\r\n\r\n- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.\r\n- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.\r\n\r\nGithub bug reports that do not include relevant information and context will be closed without an answer. Thanks!\r\n-->\r\n\r\n#### Problem description\r\n\r\nWhen trying to build a SparseTermSimilarityMatrix using Levenshtein it fails.\r\n\r\n#### Steps/code/corpus to reproduce\r\n\r\n```python\r\nindex = LevenshteinSimilarityIndex(dictionary)\r\nSparseTermSimilarityMatrix(index, dictionary) # <- fails here\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-59-c16b89564835> in <module>\r\n----> 1 similarity_matrix1 = SparseTermSimilarityMatrix(similarity_index1, dictionary)\r\n\r\n~/.local/share/virtualenvs/pdftagger-LHy_2RHk/lib/python3.6/site-packages/gensim/similarities/termsim.py in __init__(self, source, dictionary, tfidf, symmetric, positive_definite, nonzero_limit, dtype)\r\n 234 for term, similarity in index.most_similar(t1, topn=num_rows)\r\n 235 if term in dictionary.token2id\r\n--> 236 ] if num_rows > 0 else []\r\n 237 \r\n 238 if tfidf is None:\r\n\r\n~/.local/share/virtualenvs/pdftagger-LHy_2RHk/lib/python3.6/site-packages/gensim/similarities/levenshtein.py in most_similar(self, t1, topn)\r\n 151 if similarity > 0\r\n 152 )\r\n--> 153 return islice(most_similar, topn)\r\n\r\nValueError: Stop argument for islice() must be None or an integer: 0 <= x <= sys.maxsize.\r\n```\r\n\r\n#### Versions\r\n\r\nLinux-4.18.0-25-generic-x86_64-with-debian-buster-sid\r\nPython 3.6.9 (default, Nov 18 2019, 15:20:23) \r\n[GCC 8.3.0]\r\nNumPy 1.17.4\r\nSciPy 1.3.3\r\ngensim 3.8.1\r\nFAST_VERSION 1\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2018 Vit Novotny <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nThis module provides a namespace for functions that use the Levenshtein distance.\n\"\"\"\n\nfrom itertools import islice\nimport logging\nfrom math import floor\n\nfrom gensim.similarities.termsim import TermSimilarityIndex\n\nlogger = logging.getLogger(__name__)\n\n\ndef levdist(t1, t2, max_distance=float(\"inf\")):\n \"\"\"Get the Levenshtein distance between two terms.\n\n Return the Levenshtein distance between two terms. 
The distance is a\n number between <1.0, inf>, higher is less similar.\n\n Parameters\n ----------\n t1 : {bytes, str, unicode}\n The first compared term.\n t2 : {bytes, str, unicode}\n The second compared term.\n max_distance : {int, float}, optional\n If you don't care about distances larger than a known threshold, a more\n efficient code path can be taken. For terms that are clearly \"too far\n apart\", we will not compute the distance exactly, but we will return\n `max(len(t1), len(t2))` more quickly, meaning \"more than\n `max_distance`\".\n Default: always compute distance exactly, no threshold clipping.\n\n Returns\n -------\n int\n The Levenshtein distance between `t1` and `t2`.\n\n \"\"\"\n import Levenshtein\n\n distance = Levenshtein.distance(t1, t2)\n if distance > max_distance:\n return max(len(t1), len(t2))\n return distance\n\n\ndef levsim(t1, t2, alpha=1.8, beta=5.0, min_similarity=0.0):\n \"\"\"Get the Levenshtein similarity between two terms.\n\n Return the Levenshtein similarity between two terms. The similarity is a\n number between <0.0, 1.0>, higher is more similar.\n\n Parameters\n ----------\n t1 : {bytes, str, unicode}\n The first compared term.\n t2 : {bytes, str, unicode}\n The second compared term.\n alpha : float, optional\n The multiplicative factor alpha defined by Charlet and Damnati (2017).\n beta : float, optional\n The exponential factor beta defined by Charlet and Damnati (2017).\n min_similarity : {int, float}, optional\n If you don't care about similarities smaller than a known threshold, a\n more efficient code path can be taken. For terms that are clearly \"too\n far apart\", we will not compute the distance exactly, but we will\n return zero more quickly, meaning \"less than `min_similarity`\".\n Default: always compute similarity exactly, no threshold clipping.\n\n Returns\n -------\n float\n The Levenshtein similarity between `t1` and `t2`.\n\n Notes\n -----\n This notion of Levenshtein similarity was first defined in section 2.2 of\n `Delphine Charlet and Geraldine Damnati, \"SimBow at SemEval-2017 Task 3:\n Soft-Cosine Semantic Similarity between Questions for Community Question\n Answering\", 2017 <http://www.aclweb.org/anthology/S/S17/S17-2051.pdf>`_.\n\n \"\"\"\n assert alpha >= 0\n assert beta >= 0\n\n max_lengths = max(len(t1), len(t2))\n if max_lengths == 0:\n return 1.0\n\n min_similarity = float(max(min(min_similarity, 1.0), 0.0))\n max_distance = int(floor(max_lengths * (1 - (min_similarity / alpha) ** (1 / beta))))\n distance = levdist(t1, t2, max_distance)\n similarity = alpha * (1 - distance * 1.0 / max_lengths)**beta\n return similarity\n\n\nclass LevenshteinSimilarityIndex(TermSimilarityIndex):\n \"\"\"\n Computes Levenshtein similarities between terms and retrieves most similar\n terms for a given term.\n\n Notes\n -----\n This is a naive implementation that iteratively computes pointwise Levenshtein similarities\n between individual terms. 
Using this implementation to compute the similarity of all terms in\n real-world dictionaries such as the English Wikipedia will take years.\n\n Parameters\n ----------\n dictionary : :class:`~gensim.corpora.dictionary.Dictionary`\n A dictionary that specifies the considered terms.\n alpha : float, optional\n The multiplicative factor alpha defined by Charlet and Damnati (2017).\n beta : float, optional\n The exponential factor beta defined by Charlet and Damnati (2017).\n threshold : float, optional\n Only terms more similar than `threshold` are considered when retrieving\n the most similar terms for a given term.\n\n See Also\n --------\n :func:`gensim.similarities.levenshtein.levsim`\n The Levenshtein similarity.\n :class:`~gensim.similarities.termsim.SparseTermSimilarityMatrix`\n Build a term similarity matrix and compute the Soft Cosine Measure.\n\n \"\"\"\n def __init__(self, dictionary, alpha=1.8, beta=5.0, threshold=0.0):\n self.dictionary = dictionary\n self.alpha = alpha\n self.beta = beta\n self.threshold = threshold\n super(LevenshteinSimilarityIndex, self).__init__()\n\n def most_similar(self, t1, topn=10):\n similarities = (\n (levsim(t1, t2, self.alpha, self.beta, self.threshold), t2)\n for t2 in self.dictionary.values()\n if t1 != t2\n )\n most_similar = (\n (t2, similarity)\n for (similarity, t2) in sorted(similarities, reverse=True)\n if similarity > 0\n )\n return islice(most_similar, topn)\n", "path": "gensim/similarities/levenshtein.py"}]}
| 2,842 | 175 |
gh_patches_debug_31805 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-4356 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NHL.COM URL Failure...
URL:
http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802
youtube-dl.py -v http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802
[debug] System config: []
[debug] User config: []
[debug] Command-line args: ['-v', 'http://video.mapleleafs.nhl.com/videocenter/console?id=58665']
[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252
[debug] youtube-dl version 2014.12.01
[debug] Python version 2.7.5 - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg N-40824-, rtmpdump 2.4
[debug] Proxy map: {}
[nhl.com] 58665: Downloading JSON metadata
[nhl.com] 58665: Extracting information
[nhl.com] 58665: Downloading final video url
[debug] Invoking downloader on 'http://mapleleafs.cdnllnwnl.neulion.net/s/mapleleafs/vod/flv/LWEB-100128-CLASSICGI6.flv?eid=58653&pid=58665&gid=3028?eid=58653&pid=58665&gid=3028&pt=1&ip=199.85.73.12&e
=1417456874&h=181627b3cec444da4267f94da6b83915'
ERROR: unable to download video data: HTTP Error 404: Not Found
Traceback (most recent call last):
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\YoutubeDL.py", line 1087, in process_info
success = dl(filename, info_dict)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\YoutubeDL.py", line 1063, in dl
return fd.download(name, info)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\downloader\common.py", line 294, in download
return self.real_download(filename, info_dict)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\downloader\http.py", line 66, in real_download
data = self.ydl.urlopen(request)
File "C:\Transmogrifier\youtube-dl.py\youtube_dl\YoutubeDL.py", line 1321, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "C:\Python27\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 448, in error
return self._call_chain(_args)
File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
result = func(_args)
File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: Not Found
'catid' is not recognized as an internal or external command,
operable program or batch file.
Thanks in advance..
Ringo
</issue>
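As a side note, the trailing `'catid' is not recognized as an internal or external command` line in the report is a Windows shell artifact rather than a youtube-dl error: the unquoted `&` splits the command in `cmd.exe`, so the URL needs quotes, e.g. `youtube-dl "http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802"`.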
<code>
[start of youtube_dl/extractor/nhl.py]
1 from __future__ import unicode_literals
2
3 import re
4 import json
5
6 from .common import InfoExtractor
7 from ..compat import (
8 compat_urlparse,
9 compat_urllib_parse,
10 )
11 from ..utils import (
12 unified_strdate,
13 )
14
15
16 class NHLBaseInfoExtractor(InfoExtractor):
17 @staticmethod
18 def _fix_json(json_string):
19 return json_string.replace('\\\'', '\'')
20
21 def _extract_video(self, info):
22 video_id = info['id']
23 self.report_extraction(video_id)
24
25 initial_video_url = info['publishPoint']
26 if info['formats'] == '1':
27 data = compat_urllib_parse.urlencode({
28 'type': 'fvod',
29 'path': initial_video_url.replace('.mp4', '_sd.mp4'),
30 })
31 path_url = 'http://video.nhl.com/videocenter/servlets/encryptvideopath?' + data
32 path_doc = self._download_xml(
33 path_url, video_id, 'Downloading final video url')
34 video_url = path_doc.find('path').text
35 else:
36 video_url = initial_video_url
37
38 join = compat_urlparse.urljoin
39 return {
40 'id': video_id,
41 'title': info['name'],
42 'url': video_url,
43 'description': info['description'],
44 'duration': int(info['duration']),
45 'thumbnail': join(join(video_url, '/u/'), info['bigImage']),
46 'upload_date': unified_strdate(info['releaseDate'].split('.')[0]),
47 }
48
49
50 class NHLIE(NHLBaseInfoExtractor):
51 IE_NAME = 'nhl.com'
52 _VALID_URL = r'https?://video(?P<team>\.[^.]*)?\.nhl\.com/videocenter/console(?:\?(?:.*?[?&])?)id=(?P<id>[0-9a-z-]+)'
53
54 _TESTS = [{
55 'url': 'http://video.canucks.nhl.com/videocenter/console?catid=6?id=453614',
56 'md5': 'db704a4ea09e8d3988c85e36cc892d09',
57 'info_dict': {
58 'id': '453614',
59 'ext': 'mp4',
60 'title': 'Quick clip: Weise 4-3 goal vs Flames',
61 'description': 'Dale Weise scores his first of the season to put the Canucks up 4-3.',
62 'duration': 18,
63 'upload_date': '20131006',
64 },
65 }, {
66 'url': 'http://video.nhl.com/videocenter/console?id=2014020024-628-h',
67 'md5': 'd22e82bc592f52d37d24b03531ee9696',
68 'info_dict': {
69 'id': '2014020024-628-h',
70 'ext': 'mp4',
71 'title': 'Alex Galchenyuk Goal on Ray Emery (14:40/3rd)',
72 'description': 'Home broadcast - Montreal Canadiens at Philadelphia Flyers - October 11, 2014',
73 'duration': 0,
74 'upload_date': '20141011',
75 },
76 }, {
77 'url': 'http://video.flames.nhl.com/videocenter/console?id=630616',
78 'only_matching': True,
79 }]
80
81 def _real_extract(self, url):
82 mobj = re.match(self._VALID_URL, url)
83 video_id = mobj.group('id')
84 json_url = 'http://video.nhl.com/videocenter/servlets/playlist?ids=%s&format=json' % video_id
85 data = self._download_json(
86 json_url, video_id, transform_source=self._fix_json)
87 return self._extract_video(data[0])
88
89
90 class NHLVideocenterIE(NHLBaseInfoExtractor):
91 IE_NAME = 'nhl.com:videocenter'
92 IE_DESC = 'NHL videocenter category'
93 _VALID_URL = r'https?://video\.(?P<team>[^.]*)\.nhl\.com/videocenter/(console\?.*?catid=(?P<catid>[0-9]+)(?![&?]id=).*?)?$'
94 _TEST = {
95 'url': 'http://video.canucks.nhl.com/videocenter/console?catid=999',
96 'info_dict': {
97 'id': '999',
98 'title': 'Highlights',
99 },
100 'playlist_count': 12,
101 }
102
103 def _real_extract(self, url):
104 mobj = re.match(self._VALID_URL, url)
105 team = mobj.group('team')
106 webpage = self._download_webpage(url, team)
107 cat_id = self._search_regex(
108 [r'var defaultCatId = "(.+?)";',
109 r'{statusIndex:0,index:0,.*?id:(.*?),'],
110 webpage, 'category id')
111 playlist_title = self._html_search_regex(
112 r'tab0"[^>]*?>(.*?)</td>',
113 webpage, 'playlist title', flags=re.DOTALL).lower().capitalize()
114
115 data = compat_urllib_parse.urlencode({
116 'cid': cat_id,
117 # This is the default value
118 'count': 12,
119 'ptrs': 3,
120 'format': 'json',
121 })
122 path = '/videocenter/servlets/browse?' + data
123 request_url = compat_urlparse.urljoin(url, path)
124 response = self._download_webpage(request_url, playlist_title)
125 response = self._fix_json(response)
126 if not response.strip():
127 self._downloader.report_warning('Got an empty reponse, trying '
128 'adding the "newvideos" parameter')
129 response = self._download_webpage(request_url + '&newvideos=true',
130 playlist_title)
131 response = self._fix_json(response)
132 videos = json.loads(response)
133
134 return {
135 '_type': 'playlist',
136 'title': playlist_title,
137 'id': cat_id,
138 'entries': [self._extract_video(v) for v in videos],
139 }
140
[end of youtube_dl/extractor/nhl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/youtube_dl/extractor/nhl.py b/youtube_dl/extractor/nhl.py
--- a/youtube_dl/extractor/nhl.py
+++ b/youtube_dl/extractor/nhl.py
@@ -7,6 +7,7 @@
from ..compat import (
compat_urlparse,
compat_urllib_parse,
+ compat_urllib_parse_urlparse
)
from ..utils import (
unified_strdate,
@@ -24,9 +25,13 @@
initial_video_url = info['publishPoint']
if info['formats'] == '1':
+ parsed_url = compat_urllib_parse_urlparse(initial_video_url)
+ path = parsed_url.path
+ extension_index = path.rfind('.')
+ path = path[:extension_index] + '_sd' + path[extension_index:]
data = compat_urllib_parse.urlencode({
'type': 'fvod',
- 'path': initial_video_url.replace('.mp4', '_sd.mp4'),
+ 'path': compat_urlparse.urlunparse(parsed_url[:2] + (path,) + parsed_url[3:])
})
path_url = 'http://video.nhl.com/videocenter/servlets/encryptvideopath?' + data
path_doc = self._download_xml(
@@ -73,6 +78,17 @@
'duration': 0,
'upload_date': '20141011',
},
+ }, {
+ 'url': 'http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802',
+ 'md5': 'c78fc64ea01777e426cfc202b746c825',
+ 'info_dict': {
+ 'id': '58665',
+ 'ext': 'flv',
+ 'title': 'Classic Game In Six - April 22, 1979',
+ 'description': 'It was the last playoff game for the Leafs in the decade, and the last time the Leafs and Habs played in the playoffs. Great game, not a great ending.',
+ 'duration': 400,
+ 'upload_date': '20100129'
+ },
}, {
'url': 'http://video.flames.nhl.com/videocenter/console?id=630616',
'only_matching': True,
|
{"golden_diff": "diff --git a/youtube_dl/extractor/nhl.py b/youtube_dl/extractor/nhl.py\n--- a/youtube_dl/extractor/nhl.py\n+++ b/youtube_dl/extractor/nhl.py\n@@ -7,6 +7,7 @@\n from ..compat import (\n compat_urlparse,\n compat_urllib_parse,\n+ compat_urllib_parse_urlparse\n )\n from ..utils import (\n unified_strdate,\n@@ -24,9 +25,13 @@\n \n initial_video_url = info['publishPoint']\n if info['formats'] == '1':\n+ parsed_url = compat_urllib_parse_urlparse(initial_video_url)\n+ path = parsed_url.path\n+ extension_index = path.rfind('.')\n+ path = path[:extension_index] + '_sd' + path[extension_index:]\n data = compat_urllib_parse.urlencode({\n 'type': 'fvod',\n- 'path': initial_video_url.replace('.mp4', '_sd.mp4'),\n+ 'path': compat_urlparse.urlunparse(parsed_url[:2] + (path,) + parsed_url[3:])\n })\n path_url = 'http://video.nhl.com/videocenter/servlets/encryptvideopath?' + data\n path_doc = self._download_xml(\n@@ -73,6 +78,17 @@\n 'duration': 0,\n 'upload_date': '20141011',\n },\n+ }, {\n+ 'url': 'http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802',\n+ 'md5': 'c78fc64ea01777e426cfc202b746c825',\n+ 'info_dict': {\n+ 'id': '58665',\n+ 'ext': 'flv',\n+ 'title': 'Classic Game In Six - April 22, 1979',\n+ 'description': 'It was the last playoff game for the Leafs in the decade, and the last time the Leafs and Habs played in the playoffs. Great game, not a great ending.',\n+ 'duration': 400,\n+ 'upload_date': '20100129'\n+ },\n }, {\n 'url': 'http://video.flames.nhl.com/videocenter/console?id=630616',\n 'only_matching': True,\n", "issue": "NHL.COM URL Failure...\nURL:\nhttp://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802 \n\nyoutube-dl.py -v http://video.mapleleafs.nhl.com/videocenter/console?id=58665&catid=802\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: ['-v', 'http://video.mapleleafs.nhl.com/videocenter/console?id=58665']\n[debug] Encodings: locale cp1252, fs mbcs, out cp850, pref cp1252\n[debug] youtube-dl version 2014.12.01\n[debug] Python version 2.7.5 - Windows-7-6.1.7601-SP1\n[debug] exe versions: ffmpeg N-40824-, rtmpdump 2.4\n[debug] Proxy map: {}\n[nhl.com] 58665: Downloading JSON metadata\n[nhl.com] 58665: Extracting information\n[nhl.com] 58665: Downloading final video url\n[debug] Invoking downloader on 'http://mapleleafs.cdnllnwnl.neulion.net/s/mapleleafs/vod/flv/LWEB-100128-CLASSICGI6.flv?eid=58653&pid=58665&gid=3028?eid=58653&pid=58665&gid=3028&pt=1&ip=199.85.73.12&e\n=1417456874&h=181627b3cec444da4267f94da6b83915'\nERROR: unable to download video data: HTTP Error 404: Not Found\nTraceback (most recent call last):\n File \"C:\\Transmogrifier\\youtube-dl.py\\youtube_dl\\YoutubeDL.py\", line 1087, in process_info\n success = dl(filename, info_dict)\n File \"C:\\Transmogrifier\\youtube-dl.py\\youtube_dl\\YoutubeDL.py\", line 1063, in dl\n return fd.download(name, info)\n File \"C:\\Transmogrifier\\youtube-dl.py\\youtube_dl\\downloader\\common.py\", line 294, in download\n return self.real_download(filename, info_dict)\n File \"C:\\Transmogrifier\\youtube-dl.py\\youtube_dl\\downloader\\http.py\", line 66, in real_download\n data = self.ydl.urlopen(request)\n File \"C:\\Transmogrifier\\youtube-dl.py\\youtube_dl\\YoutubeDL.py\", line 1321, in urlopen\n return self._opener.open(req, timeout=self._socket_timeout)\n File \"C:\\Python27\\lib\\urllib2.py\", line 410, in open\n response = meth(req, response)\n File \"C:\\Python27\\lib\\urllib2.py\", line 523, in http_response\n 'http', request, response, code, msg, 
hdrs)\n File \"C:\\Python27\\lib\\urllib2.py\", line 448, in error\n return self._call_chain(_args)\n File \"C:\\Python27\\lib\\urllib2.py\", line 382, in _call_chain\n result = func(_args)\n File \"C:\\Python27\\lib\\urllib2.py\", line 531, in http_error_default\n raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)\nHTTPError: HTTP Error 404: Not Found\n\n'catid' is not recognized as an internal or external command,\noperable program or batch file.\n\nThanks in advance..\nRingo\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport re\nimport json\n\nfrom .common import InfoExtractor\nfrom ..compat import (\n compat_urlparse,\n compat_urllib_parse,\n)\nfrom ..utils import (\n unified_strdate,\n)\n\n\nclass NHLBaseInfoExtractor(InfoExtractor):\n @staticmethod\n def _fix_json(json_string):\n return json_string.replace('\\\\\\'', '\\'')\n\n def _extract_video(self, info):\n video_id = info['id']\n self.report_extraction(video_id)\n\n initial_video_url = info['publishPoint']\n if info['formats'] == '1':\n data = compat_urllib_parse.urlencode({\n 'type': 'fvod',\n 'path': initial_video_url.replace('.mp4', '_sd.mp4'),\n })\n path_url = 'http://video.nhl.com/videocenter/servlets/encryptvideopath?' + data\n path_doc = self._download_xml(\n path_url, video_id, 'Downloading final video url')\n video_url = path_doc.find('path').text\n else:\n video_url = initial_video_url\n\n join = compat_urlparse.urljoin\n return {\n 'id': video_id,\n 'title': info['name'],\n 'url': video_url,\n 'description': info['description'],\n 'duration': int(info['duration']),\n 'thumbnail': join(join(video_url, '/u/'), info['bigImage']),\n 'upload_date': unified_strdate(info['releaseDate'].split('.')[0]),\n }\n\n\nclass NHLIE(NHLBaseInfoExtractor):\n IE_NAME = 'nhl.com'\n _VALID_URL = r'https?://video(?P<team>\\.[^.]*)?\\.nhl\\.com/videocenter/console(?:\\?(?:.*?[?&])?)id=(?P<id>[0-9a-z-]+)'\n\n _TESTS = [{\n 'url': 'http://video.canucks.nhl.com/videocenter/console?catid=6?id=453614',\n 'md5': 'db704a4ea09e8d3988c85e36cc892d09',\n 'info_dict': {\n 'id': '453614',\n 'ext': 'mp4',\n 'title': 'Quick clip: Weise 4-3 goal vs Flames',\n 'description': 'Dale Weise scores his first of the season to put the Canucks up 4-3.',\n 'duration': 18,\n 'upload_date': '20131006',\n },\n }, {\n 'url': 'http://video.nhl.com/videocenter/console?id=2014020024-628-h',\n 'md5': 'd22e82bc592f52d37d24b03531ee9696',\n 'info_dict': {\n 'id': '2014020024-628-h',\n 'ext': 'mp4',\n 'title': 'Alex Galchenyuk Goal on Ray Emery (14:40/3rd)',\n 'description': 'Home broadcast - Montreal Canadiens at Philadelphia Flyers - October 11, 2014',\n 'duration': 0,\n 'upload_date': '20141011',\n },\n }, {\n 'url': 'http://video.flames.nhl.com/videocenter/console?id=630616',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n json_url = 'http://video.nhl.com/videocenter/servlets/playlist?ids=%s&format=json' % video_id\n data = self._download_json(\n json_url, video_id, transform_source=self._fix_json)\n return self._extract_video(data[0])\n\n\nclass NHLVideocenterIE(NHLBaseInfoExtractor):\n IE_NAME = 'nhl.com:videocenter'\n IE_DESC = 'NHL videocenter category'\n _VALID_URL = r'https?://video\\.(?P<team>[^.]*)\\.nhl\\.com/videocenter/(console\\?.*?catid=(?P<catid>[0-9]+)(?![&?]id=).*?)?$'\n _TEST = {\n 'url': 'http://video.canucks.nhl.com/videocenter/console?catid=999',\n 'info_dict': {\n 'id': '999',\n 'title': 'Highlights',\n },\n 'playlist_count': 
12,\n }\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n team = mobj.group('team')\n webpage = self._download_webpage(url, team)\n cat_id = self._search_regex(\n [r'var defaultCatId = \"(.+?)\";',\n r'{statusIndex:0,index:0,.*?id:(.*?),'],\n webpage, 'category id')\n playlist_title = self._html_search_regex(\n r'tab0\"[^>]*?>(.*?)</td>',\n webpage, 'playlist title', flags=re.DOTALL).lower().capitalize()\n\n data = compat_urllib_parse.urlencode({\n 'cid': cat_id,\n # This is the default value\n 'count': 12,\n 'ptrs': 3,\n 'format': 'json',\n })\n path = '/videocenter/servlets/browse?' + data\n request_url = compat_urlparse.urljoin(url, path)\n response = self._download_webpage(request_url, playlist_title)\n response = self._fix_json(response)\n if not response.strip():\n self._downloader.report_warning('Got an empty reponse, trying '\n 'adding the \"newvideos\" parameter')\n response = self._download_webpage(request_url + '&newvideos=true',\n playlist_title)\n response = self._fix_json(response)\n videos = json.loads(response)\n\n return {\n '_type': 'playlist',\n 'title': playlist_title,\n 'id': cat_id,\n 'entries': [self._extract_video(v) for v in videos],\n }\n", "path": "youtube_dl/extractor/nhl.py"}]}
| 3,152 | 548 |
gh_patches_debug_30818
|
rasdani/github-patches
|
git_diff
|
PennyLaneAI__pennylane-4663
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `qml.data.load()` fails when using 'full' parameter value
### Expected behavior
`qml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='2x2')`
Should return the Heisenberg dataset corresponding to the `open` periodicity, `rectangular` lattice, and `2x2` layout.
### Actual behavior
`qml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='2x2')`
Raises `ValueError: layout value of '2x2' is not available. Available values are ['1x16', '1x4', '1x8']`
### Additional information
_No response_
### Source code
```shell
qml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='1x4') # works
qml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='2x2') # fails
```
### Tracebacks
_No response_
### System information
```shell
Name: PennyLane
Version: 0.31.0.dev0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: .venv/lib/python3.8/site-packages
Editable project location:
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml
Required-by: PennyLane-Lightning
Platform info: Linux-5.15.0-73-generic-x86_64-with-glibc2.29
Python version: 3.8.10
Numpy version: 1.23.5
Scipy version: 1.10.1
Installed devices:
- default.gaussian (PennyLane-0.31.0.dev0)
- default.mixed (PennyLane-0.31.0.dev0)
- default.qubit (PennyLane-0.31.0.dev0)
- default.qubit.autograd (PennyLane-0.31.0.dev0)
- default.qubit.jax (PennyLane-0.31.0.dev0)
- default.qubit.tf (PennyLane-0.31.0.dev0)
- default.qubit.torch (PennyLane-0.31.0.dev0)
- default.qutrit (PennyLane-0.31.0.dev0)
- null.qubit (PennyLane-0.31.0.dev0)
- lightning.qubit (PennyLane-Lightning-0.30.0)
```
### Existing GitHub issues
- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
</issue>
<code>
[start of pennylane/data/data_manager/foldermap.py]
1 # Copyright 2018-2023 Xanadu Quantum Technologies Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Contains ``FolderMapView`` for reading the ``foldermap.json`` file in the
16 datasets bucket.
17 """
18
19
20 import typing
21 from collections.abc import Mapping
22 from pathlib import PurePosixPath
23 from typing import Any, List, Literal, Optional, Tuple, Union
24
25 from .params import Description, ParamArg, ParamVal
26
27
28 # Type for a dataset path, relative to the foldermap.json file
29 class DataPath(PurePosixPath):
30 """Type for Dataset Path, relative to the foldermap.json file."""
31
32 def __repr__(self) -> str:
33 return repr(str(self))
34
35
36 class FolderMapView(typing.Mapping[str, Union["FolderMapView", DataPath]]):
37 """Provides a read-only view of the ``foldermap.json`` file in
38 the datasets bucket. The folder map is a nested mapping of
39 dataset parameters to their path, relative to the ``foldermap.json``
40 file.
41
42 A dictionary in the folder map can optionally specify a default
43 paramater using the '__default' key. This view hides that
44 key, and allows the default parameter to be accessed.
45
46 For example, the underlying foldermap data will look like
47 this:
48
49 {
50 "__params": {
51 "qchem": ["molname", "basis", "bondlength"]
52 },
53 "qchem": {
54 "O2": {
55 "__default": "STO-3G",
56 "STO-3G": {
57 "__default": "0.5",
58 "0.5": "qchem/O2/STO-3G/0.5.h5",
59 "0.6": "qchem/O2/STO-3G/0.6.h5"
60 }
61 },
62 "H2": {
63 "__default": "STO-3G",
64 "STO-3G": {
65 "__default": "0.7",
66 "0.7": "qchem/H2/STO-3G/0.7.h5"
67 }
68 }
69 },
70 }
71
72 When accessed through ``FolderMapView``, the '__default' and '__params'
73 keys will be hidden.
74 """
75
76 __PRIVATE_KEYS = {"__default", "__params"}
77
78 def __init__(self, __curr_level: typing.Mapping[str, Any]) -> None:
79 """Initialize the mapping.
80
81 Args:
82 __data_struct: The top level foldermap
83 """
84 self.__curr_level = __curr_level
85
86 def get_default_key(self) -> Optional[str]:
87 """Get the default key for this level of the foldermap.
88 Raises a ValueError if it does not have a default.
89 """
90 return self.__curr_level.get("__default")
91
92 def find(
93 self,
94 data_name: str,
95 missing_default: Optional[ParamArg] = ParamArg.DEFAULT,
96 **params: Union[typing.Iterable[ParamVal], ParamArg],
97 ) -> List[Tuple[Description, DataPath]]:
98 """Returns a 2-tuple of dataset description and paths, for each dataset that
99 matches ``params``."""
100
101 try:
102 data_names_to_params = self.__curr_level["__params"]
103 except KeyError as exc:
104 raise RuntimeError("Can only call find() from top level of foldermap") from exc
105
106 try:
107 param_names: List[str] = data_names_to_params[data_name]
108 except KeyError as exc:
109 raise ValueError(f"No datasets with data name: '{data_name}'") from exc
110
111 curr: List[Tuple[Description, Union[FolderMapView, DataPath]]] = [
112 (Description(()), self[data_name])
113 ]
114 todo: List[Tuple[Description, Union[FolderMapView, DataPath]]] = []
115 done: List[Tuple[Description, DataPath]] = []
116
117 for param_name in param_names:
118 param_arg = params.get(param_name, missing_default)
119
120 while curr:
121 curr_description, curr_level = curr.pop()
122 if param_arg == ParamArg.FULL:
123 next_params = curr_level
124 elif param_arg == ParamArg.DEFAULT:
125 default = curr_level.get_default_key()
126 if default is None:
127 raise ValueError(f"No default available for parameter '{param_name}'")
128 next_params = (default,)
129 elif isinstance(param_arg, str):
130 next_params = (param_arg,)
131 else:
132 next_params = param_arg
133
134 try:
135 todo.extend(
136 (
137 Description((*curr_description.items(), (param_name, next_param))),
138 curr_level[next_param],
139 )
140 for next_param in next_params
141 )
142 except KeyError as exc:
143 raise ValueError(
144 f"{param_name} '{exc.args[0]}' is not available. Available values are: {list(curr_level)}"
145 ) from exc
146
147 curr, todo = todo, curr
148
149 done.extend(curr)
150
151 return done
152
153 def __getitem__(
154 self, __key: Union[str, Literal[ParamArg.DEFAULT]]
155 ) -> Union["FolderMapView", DataPath]:
156 """Gets the item with key. If key is ``ParamArg.DEFAULT``, return the
157 item under the default parameter, or raise a ``ValueError`` if no
158 default exists."""
159 if __key in self.__PRIVATE_KEYS:
160 raise KeyError(__key)
161
162 if __key == ParamArg.DEFAULT:
163 default = self.get_default_key()
164 if default is None:
165 raise ValueError("No default available")
166 return self[default]
167
168 elem = self.__curr_level[__key]
169 if isinstance(elem, Mapping):
170 return FolderMapView(elem)
171
172 return DataPath(elem)
173
174 def __iter__(self) -> typing.Iterator[str]:
175 return (key for key in self.__curr_level.keys() if key not in self.__PRIVATE_KEYS)
176
177 def keys(self) -> typing.FrozenSet[str]:
178 return frozenset(iter(self))
179
180 def __len__(self) -> int:
181 return sum(1 for _ in self.__iter__())
182
183 def __repr__(self) -> str:
184 return repr(dict(self))
185
186 def __str__(self) -> str:
187 return str(dict(self))
188
[end of pennylane/data/data_manager/foldermap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pennylane/data/data_manager/foldermap.py b/pennylane/data/data_manager/foldermap.py
--- a/pennylane/data/data_manager/foldermap.py
+++ b/pennylane/data/data_manager/foldermap.py
@@ -119,6 +119,7 @@
while curr:
curr_description, curr_level = curr.pop()
+
if param_arg == ParamArg.FULL:
next_params = curr_level
elif param_arg == ParamArg.DEFAULT:
@@ -131,18 +132,29 @@
else:
next_params = param_arg
- try:
- todo.extend(
+ for next_param in next_params:
+ try:
+ fmap_next = curr_level[next_param]
+ except KeyError:
+ continue
+
+ todo.append(
(
Description((*curr_description.items(), (param_name, next_param))),
- curr_level[next_param],
+ fmap_next,
)
- for next_param in next_params
)
- except KeyError as exc:
- raise ValueError(
- f"{param_name} '{exc.args[0]}' is not available. Available values are: {list(curr_level)}"
- ) from exc
+
+ if len(todo) == 0:
+ # None of the parameters matched
+ param_arg_repr = (
+ repr([param_arg])
+ if isinstance(param_arg, (str, ParamArg))
+ else repr(list(param_arg))
+ )
+ raise ValueError(
+ f"{param_name} value(s) {param_arg_repr} are not available. Available values are: {list(curr_level)}"
+ )
curr, todo = todo, curr
|
{"golden_diff": "diff --git a/pennylane/data/data_manager/foldermap.py b/pennylane/data/data_manager/foldermap.py\n--- a/pennylane/data/data_manager/foldermap.py\n+++ b/pennylane/data/data_manager/foldermap.py\n@@ -119,6 +119,7 @@\n \n while curr:\n curr_description, curr_level = curr.pop()\n+\n if param_arg == ParamArg.FULL:\n next_params = curr_level\n elif param_arg == ParamArg.DEFAULT:\n@@ -131,18 +132,29 @@\n else:\n next_params = param_arg\n \n- try:\n- todo.extend(\n+ for next_param in next_params:\n+ try:\n+ fmap_next = curr_level[next_param]\n+ except KeyError:\n+ continue\n+\n+ todo.append(\n (\n Description((*curr_description.items(), (param_name, next_param))),\n- curr_level[next_param],\n+ fmap_next,\n )\n- for next_param in next_params\n )\n- except KeyError as exc:\n- raise ValueError(\n- f\"{param_name} '{exc.args[0]}' is not available. Available values are: {list(curr_level)}\"\n- ) from exc\n+\n+ if len(todo) == 0:\n+ # None of the parameters matched\n+ param_arg_repr = (\n+ repr([param_arg])\n+ if isinstance(param_arg, (str, ParamArg))\n+ else repr(list(param_arg))\n+ )\n+ raise ValueError(\n+ f\"{param_name} value(s) {param_arg_repr} are not available. Available values are: {list(curr_level)}\"\n+ )\n \n curr, todo = todo, curr\n", "issue": "[BUG] `qml.data.load()` fails when using 'full' parameter value\n### Expected behavior\r\n\r\n`qml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='2x2')`\r\nShould return the Heisenberg dataset corresponding to the `open` periodicity, `rectangular` lattice, and `2x2` layout.\r\n\r\n### Actual behavior\r\n\r\n`qml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='2x2')`\r\n\r\nRaises `ValueError: layout value of '2x2' is not available. 
Available values are ['1x16', '1x4', '1x8']`\r\n\r\n### Additional information\r\n\r\n_No response_\r\n\r\n### Source code\r\n\r\n```shell\r\nqml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='1x4') # works\r\n\r\nqml.data.load('qspin',sysname='Heisenberg',periodicity='open',lattice='full',layout='2x2') # fails\r\n```\r\n\r\n\r\n### Tracebacks\r\n\r\n_No response_\r\n\r\n### System information\r\n\r\n```shell\r\nName: PennyLane\r\nVersion: 0.31.0.dev0\r\nSummary: PennyLane is a Python quantum machine learning library by Xanadu Inc.\r\nHome-page: https://github.com/XanaduAI/pennylane\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License 2.0\r\nLocation: .venv/lib/python3.8/site-packages\r\nEditable project location: \r\nRequires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml\r\nRequired-by: PennyLane-Lightning\r\n\r\nPlatform info: Linux-5.15.0-73-generic-x86_64-with-glibc2.29\r\nPython version: 3.8.10\r\nNumpy version: 1.23.5\r\nScipy version: 1.10.1\r\nInstalled devices:\r\n- default.gaussian (PennyLane-0.31.0.dev0)\r\n- default.mixed (PennyLane-0.31.0.dev0)\r\n- default.qubit (PennyLane-0.31.0.dev0)\r\n- default.qubit.autograd (PennyLane-0.31.0.dev0)\r\n- default.qubit.jax (PennyLane-0.31.0.dev0)\r\n- default.qubit.tf (PennyLane-0.31.0.dev0)\r\n- default.qubit.torch (PennyLane-0.31.0.dev0)\r\n- default.qutrit (PennyLane-0.31.0.dev0)\r\n- null.qubit (PennyLane-0.31.0.dev0)\r\n- lightning.qubit (PennyLane-Lightning-0.30.0)\r\n```\r\n\r\n\r\n### Existing GitHub issues\r\n\r\n- [X] I have searched existing GitHub issues to make sure the issue does not already exist.\n", "before_files": [{"content": "# Copyright 2018-2023 Xanadu Quantum Technologies Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nContains ``FolderMapView`` for reading the ``foldermap.json`` file in the\ndatasets bucket.\n\"\"\"\n\n\nimport typing\nfrom collections.abc import Mapping\nfrom pathlib import PurePosixPath\nfrom typing import Any, List, Literal, Optional, Tuple, Union\n\nfrom .params import Description, ParamArg, ParamVal\n\n\n# Type for a dataset path, relative to the foldermap.json file\nclass DataPath(PurePosixPath):\n \"\"\"Type for Dataset Path, relative to the foldermap.json file.\"\"\"\n\n def __repr__(self) -> str:\n return repr(str(self))\n\n\nclass FolderMapView(typing.Mapping[str, Union[\"FolderMapView\", DataPath]]):\n \"\"\"Provides a read-only view of the ``foldermap.json`` file in\n the datasets bucket. The folder map is a nested mapping of\n dataset parameters to their path, relative to the ``foldermap.json``\n file.\n\n A dictionary in the folder map can optionally specify a default\n paramater using the '__default' key. 
This view hides that\n key, and allows the default parameter to be accessed.\n\n For example, the underlying foldermap data will look like\n this:\n\n {\n \"__params\": {\n \"qchem\": [\"molname\", \"basis\", \"bondlength\"]\n },\n \"qchem\": {\n \"O2\": {\n \"__default\": \"STO-3G\",\n \"STO-3G\": {\n \"__default\": \"0.5\",\n \"0.5\": \"qchem/O2/STO-3G/0.5.h5\",\n \"0.6\": \"qchem/O2/STO-3G/0.6.h5\"\n }\n },\n \"H2\": {\n \"__default\": \"STO-3G\",\n \"STO-3G\": {\n \"__default\": \"0.7\",\n \"0.7\": \"qchem/H2/STO-3G/0.7.h5\"\n }\n }\n },\n }\n\n When accessed through ``FolderMapView``, the '__default' and '__params'\n keys will be hidden.\n \"\"\"\n\n __PRIVATE_KEYS = {\"__default\", \"__params\"}\n\n def __init__(self, __curr_level: typing.Mapping[str, Any]) -> None:\n \"\"\"Initialize the mapping.\n\n Args:\n __data_struct: The top level foldermap\n \"\"\"\n self.__curr_level = __curr_level\n\n def get_default_key(self) -> Optional[str]:\n \"\"\"Get the default key for this level of the foldermap.\n Raises a ValueError if it does not have a default.\n \"\"\"\n return self.__curr_level.get(\"__default\")\n\n def find(\n self,\n data_name: str,\n missing_default: Optional[ParamArg] = ParamArg.DEFAULT,\n **params: Union[typing.Iterable[ParamVal], ParamArg],\n ) -> List[Tuple[Description, DataPath]]:\n \"\"\"Returns a 2-tuple of dataset description and paths, for each dataset that\n matches ``params``.\"\"\"\n\n try:\n data_names_to_params = self.__curr_level[\"__params\"]\n except KeyError as exc:\n raise RuntimeError(\"Can only call find() from top level of foldermap\") from exc\n\n try:\n param_names: List[str] = data_names_to_params[data_name]\n except KeyError as exc:\n raise ValueError(f\"No datasets with data name: '{data_name}'\") from exc\n\n curr: List[Tuple[Description, Union[FolderMapView, DataPath]]] = [\n (Description(()), self[data_name])\n ]\n todo: List[Tuple[Description, Union[FolderMapView, DataPath]]] = []\n done: List[Tuple[Description, DataPath]] = []\n\n for param_name in param_names:\n param_arg = params.get(param_name, missing_default)\n\n while curr:\n curr_description, curr_level = curr.pop()\n if param_arg == ParamArg.FULL:\n next_params = curr_level\n elif param_arg == ParamArg.DEFAULT:\n default = curr_level.get_default_key()\n if default is None:\n raise ValueError(f\"No default available for parameter '{param_name}'\")\n next_params = (default,)\n elif isinstance(param_arg, str):\n next_params = (param_arg,)\n else:\n next_params = param_arg\n\n try:\n todo.extend(\n (\n Description((*curr_description.items(), (param_name, next_param))),\n curr_level[next_param],\n )\n for next_param in next_params\n )\n except KeyError as exc:\n raise ValueError(\n f\"{param_name} '{exc.args[0]}' is not available. Available values are: {list(curr_level)}\"\n ) from exc\n\n curr, todo = todo, curr\n\n done.extend(curr)\n\n return done\n\n def __getitem__(\n self, __key: Union[str, Literal[ParamArg.DEFAULT]]\n ) -> Union[\"FolderMapView\", DataPath]:\n \"\"\"Gets the item with key. 
If key is ``ParamArg.DEFAULT``, return the\n item under the default parameter, or raise a ``ValueError`` if no\n default exists.\"\"\"\n if __key in self.__PRIVATE_KEYS:\n raise KeyError(__key)\n\n if __key == ParamArg.DEFAULT:\n default = self.get_default_key()\n if default is None:\n raise ValueError(\"No default available\")\n return self[default]\n\n elem = self.__curr_level[__key]\n if isinstance(elem, Mapping):\n return FolderMapView(elem)\n\n return DataPath(elem)\n\n def __iter__(self) -> typing.Iterator[str]:\n return (key for key in self.__curr_level.keys() if key not in self.__PRIVATE_KEYS)\n\n def keys(self) -> typing.FrozenSet[str]:\n return frozenset(iter(self))\n\n def __len__(self) -> int:\n return sum(1 for _ in self.__iter__())\n\n def __repr__(self) -> str:\n return repr(dict(self))\n\n def __str__(self) -> str:\n return str(dict(self))\n", "path": "pennylane/data/data_manager/foldermap.py"}]}
| 3,131 | 375 |
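A minimal standalone sketch (not taken from the record above) of the lookup pattern the PennyLane fix adopts: try every requested parameter value, keep the ones that exist at the current level, and raise only when none of them matched. `match_params` and its arguments are illustrative names, not PennyLane APIs.

```python
from typing import Dict, Iterable, List, Tuple


def match_params(level: Dict[str, str], candidates: Iterable[str]) -> List[Tuple[str, str]]:
    """Return (value, path) pairs for every candidate present in `level`."""
    candidates = list(candidates)
    matches = []
    for key in candidates:
        try:
            matches.append((key, level[key]))
        except KeyError:
            continue  # a value missing here may still exist under another branch
    if not matches:
        # Only error out when *no* candidate matched, mirroring the patched find()
        raise ValueError(
            f"value(s) {candidates!r} are not available. Available values are: {list(level)}"
        )
    return matches


# '2x2' does not exist at this branch, but '1x4' does, so the call succeeds.
print(match_params({"1x4": "a.h5", "1x16": "b.h5"}, ["1x4", "2x2"]))
```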
gh_patches_debug_36019
|
rasdani/github-patches
|
git_diff
|
pytorch__TensorRT-905
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
✨[Feature] Enable debug logging with a context
**Is your feature request related to a problem? Please describe.**
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->
Right now seems like users don't know how or don't enable debug logging. We can probably add some syntax to make this easier.
**Describe the solution you'd like**
<!--A clear and concise description of what you want to happen.-->
I would love to see something like:
```py
import torch_tensorrt as torchtrt
with torchtrt.debug:
torchtrt.ts.compile(....)
```
under the hood this would be equivalent to:
```py
import torch_tensorrt as torchtrt
torchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Debug)
torchtrt.ts.compile(....)
torchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Error)
```
**Describe alternatives you've considered**
<!--A clear and concise description of any alternative solutions or features you've considered.-->
**Additional context**
<!--Add any other context or screenshots about the feature request here.-->
</issue>
<code>
[start of py/torch_tensorrt/logging.py]
1 from enum import Enum
2 from torch_tensorrt._C import _get_logging_prefix, _set_logging_prefix, \
3 _get_reportable_log_level, _set_reportable_log_level, \
4 _get_is_colored_output_on, _set_is_colored_output_on, \
5 _log, LogLevel
6
7
8 class Level(Enum):
9 """Enum to set the minimum required logging level to print a message to stdout
10 """
11 InternalError = LogLevel.INTERNAL_ERROR
12 Error = LogLevel.ERROR
13 Warning = LogLevel.WARNING
14 Info = LogLevel.INFO
15 Debug = LogLevel.DEBUG
16 Graph = LogLevel.GRAPH
17
18 @staticmethod
19 def _to_internal_level(external) -> LogLevel:
20 if external == Level.InternalError:
21 return LogLevel.INTERNAL_ERROR
22 if external == Level.Error:
23 return LogLevel.ERROR
24 if external == Level.Warning:
25 return LogLevel.WARNING
26 if external == Level.Info:
27 return LogLevel.INFO
28 if external == Level.Debug:
29 return LogLevel.DEBUG
30 if external == Level.Graph:
31 return LogLevel.GRAPH
32
33
34 def get_logging_prefix() -> str:
35 """Get the prefix set for logging messages
36
37 Returns:
38 str: Prefix used for logger
39 """
40 return _get_logging_prefix()
41
42
43 def set_logging_prefix(prefix: str):
44 """Set the prefix used when logging messages
45
46 Args:
47 prefix (str): Prefix to use for logging messages
48 """
49 _set_logging_prefix(prefix)
50
51
52 def get_reportable_log_level() -> Level:
53 """Get the level required for a message to be printed in the log
54
55 Returns:
56 torch_tensorrt.logging.Level: The enum representing the level required to print
57 """
58 return Level(_get_reportable_log_level())
59
60
61 def set_reportable_log_level(level: Level):
62 """Set the level required for a message to be printed to the log
63
64 Args:
65 level (torch_tensorrt.logging.Level): The enum representing the level required to print
66 """
67 _set_reportable_log_level(Level._to_internal_level(level))
68
69
70 def get_is_colored_output_on() -> bool:
71 """Get if colored output is enabled for logging
72
73 Returns:
74 bool: If colored output is one
75 """
76 return _get_is_colored_output_on()
77
78
79 def set_is_colored_output_on(colored_output_on: bool):
80 """Enable or disable color in the log output
81
82 Args:
83 colored_output_on (bool): If colored output should be enabled or not
84 """
85 _set_is_colored_output_on(colored_output_on)
86
87
88 def log(level: Level, msg: str):
89 """Add a new message to the log
90
91 Adds a new message to the log at a specified level. The message
92 will only get printed out if Level > reportable_log_level
93
94 Args:
95 level (torch_tensorrt.logging.Level): Severity of the message
96 msg (str): Actual message text
97 """
98 _log(Level._to_internal_level(level), msg)
99
[end of py/torch_tensorrt/logging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py/torch_tensorrt/logging.py b/py/torch_tensorrt/logging.py
--- a/py/torch_tensorrt/logging.py
+++ b/py/torch_tensorrt/logging.py
@@ -96,3 +96,113 @@
msg (str): Actual message text
"""
_log(Level._to_internal_level(level), msg)
+
+ InternalError = LogLevel.INTERNAL_ERROR
+ Error = LogLevel.ERROR
+ Warning = LogLevel.WARNING
+ Info = LogLevel.INFO
+ Debug = LogLevel.DEBUG
+ Graph = LogLevel.GRAPH
+
+
+class internal_errors:
+ """Context-manager to limit displayed log messages to just internal errors
+
+ Example::
+
+ with torch_tensorrt.logging.internal_errors():
+ outputs = model_torchtrt(inputs)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.InternalError)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class errors:
+ """Context-manager to limit displayed log messages to just errors and above
+
+ Example::
+
+ with torch_tensorrt.logging.errors():
+ outputs = model_torchtrt(inputs)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Error)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class warnings:
+ """Context-manager to limit displayed log messages to just warnings and above
+
+ Example::
+
+ with torch_tensorrt.logging.warnings():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Warning)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class info:
+ """Context-manager to display all info and greater severity messages
+
+ Example::
+
+ with torch_tensorrt.logging.info():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Info)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class debug:
+ """Context-manager to display full debug information through the logger
+
+ Example::
+
+ with torch_tensorrt.logging.debug():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Debug)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class graphs:
+ """Context-manager to display the results of intermediate lowering passes
+ as well as full debug information through the logger
+
+ Example::
+
+ with torch_tensorrt.logging.graphs():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Graph)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
|
{"golden_diff": "diff --git a/py/torch_tensorrt/logging.py b/py/torch_tensorrt/logging.py\n--- a/py/torch_tensorrt/logging.py\n+++ b/py/torch_tensorrt/logging.py\n@@ -96,3 +96,113 @@\n msg (str): Actual message text\n \"\"\"\n _log(Level._to_internal_level(level), msg)\n+\n+ InternalError = LogLevel.INTERNAL_ERROR\n+ Error = LogLevel.ERROR\n+ Warning = LogLevel.WARNING\n+ Info = LogLevel.INFO\n+ Debug = LogLevel.DEBUG\n+ Graph = LogLevel.GRAPH\n+\n+\n+class internal_errors:\n+ \"\"\"Context-manager to limit displayed log messages to just internal errors\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.internal_errors():\n+ outputs = model_torchtrt(inputs)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.InternalError)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class errors:\n+ \"\"\"Context-manager to limit displayed log messages to just errors and above\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.errors():\n+ outputs = model_torchtrt(inputs)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Error)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class warnings:\n+ \"\"\"Context-manager to limit displayed log messages to just warnings and above\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.warnings():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Warning)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class info:\n+ \"\"\"Context-manager to display all info and greater severity messages\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.info():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Info)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class debug:\n+ \"\"\"Context-manager to display full debug information through the logger\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.debug():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Debug)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class graphs:\n+ \"\"\"Context-manager to display the results of intermediate lowering passes\n+ as well as full debug information through the logger\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.graphs():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Graph)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n", "issue": "\u2728[Feature] Enable debug logging with a context \n**Is your feature request related to a problem? Please describe.**\r\n<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->\r\n\r\nRight now seems like users don't know how or don't enable debug logging. 
We can probably add some syntax to make this easier.\r\n\r\n**Describe the solution you'd like**\r\n<!--A clear and concise description of what you want to happen.-->\r\n\r\nI would love to see something like: \r\n\r\n```py\r\nimport torch_tensorrt as torchtrt \r\n\r\nwith torchtrt.debug:\r\n torchtrt.ts.compile(....)\r\n\r\n```\r\n\r\nunder the hood this would be equivalent to:\r\n\r\n ```py\r\nimport torch_tensorrt as torchtrt \r\n\r\ntorchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Debug)\r\ntorchtrt.ts.compile(....)\r\ntorchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Error)\r\n```\r\n\r\n**Describe alternatives you've considered**\r\n<!--A clear and concise description of any alternative solutions or features you've considered.-->\r\n\r\n**Additional context**\r\n<!--Add any other context or screenshots about the feature request here.-->\r\n\n", "before_files": [{"content": "from enum import Enum\nfrom torch_tensorrt._C import _get_logging_prefix, _set_logging_prefix, \\\n _get_reportable_log_level, _set_reportable_log_level, \\\n _get_is_colored_output_on, _set_is_colored_output_on, \\\n _log, LogLevel\n\n\nclass Level(Enum):\n \"\"\"Enum to set the minimum required logging level to print a message to stdout\n \"\"\"\n InternalError = LogLevel.INTERNAL_ERROR\n Error = LogLevel.ERROR\n Warning = LogLevel.WARNING\n Info = LogLevel.INFO\n Debug = LogLevel.DEBUG\n Graph = LogLevel.GRAPH\n\n @staticmethod\n def _to_internal_level(external) -> LogLevel:\n if external == Level.InternalError:\n return LogLevel.INTERNAL_ERROR\n if external == Level.Error:\n return LogLevel.ERROR\n if external == Level.Warning:\n return LogLevel.WARNING\n if external == Level.Info:\n return LogLevel.INFO\n if external == Level.Debug:\n return LogLevel.DEBUG\n if external == Level.Graph:\n return LogLevel.GRAPH\n\n\ndef get_logging_prefix() -> str:\n \"\"\"Get the prefix set for logging messages\n\n Returns:\n str: Prefix used for logger\n \"\"\"\n return _get_logging_prefix()\n\n\ndef set_logging_prefix(prefix: str):\n \"\"\"Set the prefix used when logging messages\n\n Args:\n prefix (str): Prefix to use for logging messages\n \"\"\"\n _set_logging_prefix(prefix)\n\n\ndef get_reportable_log_level() -> Level:\n \"\"\"Get the level required for a message to be printed in the log\n\n Returns:\n torch_tensorrt.logging.Level: The enum representing the level required to print\n \"\"\"\n return Level(_get_reportable_log_level())\n\n\ndef set_reportable_log_level(level: Level):\n \"\"\"Set the level required for a message to be printed to the log\n\n Args:\n level (torch_tensorrt.logging.Level): The enum representing the level required to print\n \"\"\"\n _set_reportable_log_level(Level._to_internal_level(level))\n\n\ndef get_is_colored_output_on() -> bool:\n \"\"\"Get if colored output is enabled for logging\n\n Returns:\n bool: If colored output is one\n \"\"\"\n return _get_is_colored_output_on()\n\n\ndef set_is_colored_output_on(colored_output_on: bool):\n \"\"\"Enable or disable color in the log output\n\n Args:\n colored_output_on (bool): If colored output should be enabled or not\n \"\"\"\n _set_is_colored_output_on(colored_output_on)\n\n\ndef log(level: Level, msg: str):\n \"\"\"Add a new message to the log\n\n Adds a new message to the log at a specified level. 
The message\n will only get printed out if Level > reportable_log_level\n\n Args:\n level (torch_tensorrt.logging.Level): Severity of the message\n msg (str): Actual message text\n \"\"\"\n _log(Level._to_internal_level(level), msg)\n", "path": "py/torch_tensorrt/logging.py"}]}
| 1,608 | 816 |
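A minimal sketch of the save-and-restore pattern the Torch-TensorRT patch above uses for its logging context managers, written here against Python's standard `logging` module so it runs without `torch_tensorrt` installed; the `debug_level` name is an illustrative stand-in.

```python
import logging


class debug_level:
    """Temporarily lower a logger's threshold, restoring the previous level on exit."""

    def __init__(self, logger: logging.Logger, level: int = logging.DEBUG):
        self.logger = logger
        self.level = level

    def __enter__(self):
        self.previous = self.logger.level   # remember the external level
        self.logger.setLevel(self.level)

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.logger.setLevel(self.previous)  # restore it even if the body raised


logging.basicConfig()
log = logging.getLogger("demo")

with debug_level(log):
    log.debug("visible only inside the context")
log.debug("suppressed again after the context exits")
```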
gh_patches_debug_20562
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-13464
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pants package does not build missing docker images if previous build was cached.
**Describe the bug**
Pant's caching of build targets does not take into consideration that the final target does not exist.
Take this example: https://www.pantsbuild.org/v2.8/docs/docker#example
```
$ ./pants package src/docker/hw/Dockerfile
[...]
18:07:29.66 [INFO] Completed: Building src.python.hw/bin.pex
18:07:31.83 [INFO] Completed: Building docker image helloworld:latest
18:07:31.83 [INFO] Built docker image: helloworld:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
helloworld latest abcdefabcdef 6 seconds ago 420MB
$ docker rmi helloworld:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ ./pants package src/docker/hw/Dockerfile
19:07:31.83 [INFO] Built docker image: helloworld:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
```
If you did the equivalent commands for the `helloworld.pex` files, `pants package` would replace the missing file in the `dist/` folder.
**Pants version**
2.8rc1
**OS**
Linux
</issue>
<code>
[start of src/python/pants/backend/docker/util_rules/docker_binary.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from dataclasses import dataclass
7 from typing import Mapping
8
9 from pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs
10 from pants.engine.fs import Digest
11 from pants.engine.process import (
12 BinaryNotFoundError,
13 BinaryPath,
14 BinaryPathRequest,
15 BinaryPaths,
16 BinaryPathTest,
17 Process,
18 SearchPath,
19 )
20 from pants.engine.rules import Get, collect_rules, rule
21 from pants.util.logging import LogLevel
22 from pants.util.strutil import pluralize
23
24
25 class DockerBinary(BinaryPath):
26 """The `docker` binary."""
27
28 DEFAULT_SEARCH_PATH = SearchPath(("/usr/bin", "/bin", "/usr/local/bin"))
29
30 def build_image(
31 self,
32 tags: tuple[str, ...],
33 digest: Digest,
34 dockerfile: str | None = None,
35 build_args: DockerBuildArgs | None = None,
36 env: Mapping[str, str] | None = None,
37 ) -> Process:
38 args = [self.path, "build"]
39
40 for tag in tags:
41 args.extend(["-t", tag])
42
43 if build_args:
44 for build_arg in build_args:
45 args.extend(["--build-arg", build_arg])
46
47 if dockerfile:
48 args.extend(["-f", dockerfile])
49
50 # Add build context root.
51 args.append(".")
52
53 return Process(
54 argv=tuple(args),
55 description=(
56 f"Building docker image {tags[0]}"
57 + (f" +{pluralize(len(tags)-1, 'additional tag')}." if len(tags) > 1 else ".")
58 ),
59 env=env,
60 input_digest=digest,
61 )
62
63 def push_image(self, tags: tuple[str, ...]) -> Process | None:
64 if not tags:
65 return None
66
67 return Process(
68 argv=(self.path, "push", *tags), description="Pushing docker image {tags[0]}"
69 )
70
71
72 @dataclass(frozen=True)
73 class DockerBinaryRequest:
74 search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH
75
76
77 @rule(desc="Finding the `docker` binary", level=LogLevel.DEBUG)
78 async def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:
79 request = BinaryPathRequest(
80 binary_name="docker",
81 search_path=docker_request.search_path,
82 test=BinaryPathTest(args=["-v"]),
83 )
84 paths = await Get(BinaryPaths, BinaryPathRequest, request)
85 first_path = paths.first_path
86 if not first_path:
87 raise BinaryNotFoundError.from_request(request, rationale="interact with the docker daemon")
88 return DockerBinary(first_path.path, first_path.fingerprint)
89
90
91 @rule
92 async def get_docker() -> DockerBinary:
93 return await Get(DockerBinary, DockerBinaryRequest())
94
95
96 def rules():
97 return collect_rules()
98
[end of src/python/pants/backend/docker/util_rules/docker_binary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py
--- a/src/python/pants/backend/docker/util_rules/docker_binary.py
+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py
@@ -15,6 +15,7 @@
BinaryPaths,
BinaryPathTest,
Process,
+ ProcessCacheScope,
SearchPath,
)
from pants.engine.rules import Get, collect_rules, rule
@@ -58,6 +59,7 @@
),
env=env,
input_digest=digest,
+ cache_scope=ProcessCacheScope.PER_SESSION,
)
def push_image(self, tags: tuple[str, ...]) -> Process | None:
@@ -65,7 +67,9 @@
return None
return Process(
- argv=(self.path, "push", *tags), description="Pushing docker image {tags[0]}"
+ argv=(self.path, "push", *tags),
+ cache_scope=ProcessCacheScope.PER_SESSION,
+ description=f"Pushing docker image {tags[0]}",
)
|
{"golden_diff": "diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py\n--- a/src/python/pants/backend/docker/util_rules/docker_binary.py\n+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py\n@@ -15,6 +15,7 @@\n BinaryPaths,\n BinaryPathTest,\n Process,\n+ ProcessCacheScope,\n SearchPath,\n )\n from pants.engine.rules import Get, collect_rules, rule\n@@ -58,6 +59,7 @@\n ),\n env=env,\n input_digest=digest,\n+ cache_scope=ProcessCacheScope.PER_SESSION,\n )\n \n def push_image(self, tags: tuple[str, ...]) -> Process | None:\n@@ -65,7 +67,9 @@\n return None\n \n return Process(\n- argv=(self.path, \"push\", *tags), description=\"Pushing docker image {tags[0]}\"\n+ argv=(self.path, \"push\", *tags),\n+ cache_scope=ProcessCacheScope.PER_SESSION,\n+ description=f\"Pushing docker image {tags[0]}\",\n )\n", "issue": "pants package does not build missing docker images if previous build was cached.\n**Describe the bug**\r\nPant's caching of build targets does not take into consideration that the final target does not exist.\r\n\r\nTake this example: https://www.pantsbuild.org/v2.8/docs/docker#example\r\n\r\n```\r\n$ ./pants package src/docker/hw/Dockerfile\r\n[...]\r\n18:07:29.66 [INFO] Completed: Building src.python.hw/bin.pex\r\n18:07:31.83 [INFO] Completed: Building docker image helloworld:latest\r\n18:07:31.83 [INFO] Built docker image: helloworld:latest\r\n\r\n$ docker images\r\nREPOSITORY TAG IMAGE ID CREATED SIZE\r\nhelloworld latest abcdefabcdef 6 seconds ago 420MB\r\n\r\n$ docker rmi helloworld:latest\r\n\r\n$ docker images\r\nREPOSITORY TAG IMAGE ID CREATED SIZE\r\n\r\n$ ./pants package src/docker/hw/Dockerfile\r\n19:07:31.83 [INFO] Built docker image: helloworld:latest\r\n\r\n$ docker images\r\nREPOSITORY TAG IMAGE ID CREATED SIZE\r\n```\r\nIf you did the equivalent commands for the `helloworld.pex` files, `pants package` would replace the missing file in the `dist/` folder.\r\n\r\n**Pants version**\r\n2.8rc1\r\n\r\n**OS**\r\nLinux\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import (\n BinaryNotFoundError,\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n Process,\n SearchPath,\n)\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n DEFAULT_SEARCH_PATH = SearchPath((\"/usr/bin\", \"/bin\", \"/usr/local/bin\"))\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str | None = None,\n build_args: DockerBuildArgs | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n args = [self.path, \"build\"]\n\n for tag in tags:\n args.extend([\"-t\", tag])\n\n if build_args:\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n if dockerfile:\n args.extend([\"-f\", dockerfile])\n\n # Add build context root.\n args.append(\".\")\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) 
> 1 else \".\")\n ),\n env=env,\n input_digest=digest,\n )\n\n def push_image(self, tags: tuple[str, ...]) -> Process | None:\n if not tags:\n return None\n\n return Process(\n argv=(self.path, \"push\", *tags), description=\"Pushing docker image {tags[0]}\"\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH\n\n\n@rule(desc=\"Finding the `docker` binary\", level=LogLevel.DEBUG)\nasync def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=docker_request.search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path\n if not first_path:\n raise BinaryNotFoundError.from_request(request, rationale=\"interact with the docker daemon\")\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}]}
| 1,682 | 245 |
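A toy sketch of why the Pants patch above opts the `docker build`/`docker push` processes out of cross-session caching: a cache keyed only on process inputs will skip a side-effecting step even after its external output (the local image) has been deleted. `run_cached`, `per_session`, and `session_id` are made-up names for illustration, not Pants APIs.

```python
cache = {}


def run_cached(key, side_effect, per_session=False, session_id=0):
    """Memoise `side_effect()`; a per-session scope re-runs it once per session."""
    cache_key = (key, session_id) if per_session else key
    if cache_key not in cache:
        cache[cache_key] = side_effect()
    return cache[cache_key]


images = set()  # stands in for the local docker image store


def build():
    images.add("helloworld:latest")
    return "built"


run_cached("build-hw", build)                       # first run builds the image
images.clear()                                      # simulates `docker rmi helloworld:latest`
run_cached("build-hw", build)                       # cache hit: image is NOT rebuilt (the bug)
run_cached("build-hw", build, per_session=True, session_id=1)  # new session: rebuilt
print(images)                                       # {'helloworld:latest'}
```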
gh_patches_debug_33920
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-5988
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Please cut a release of Cloud Asset
</issue>
<code>
[start of asset/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20 name = 'google-cloud-cloudasset'
21 description = 'Cloud Asset API API client library'
22 version = '0.1.0'
23 release_status = '3 - Alpha'
24 dependencies = [
25 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
26 'enum34; python_version < "3.4"',
27 'grpc-google-iam-v1<0.12dev,>=0.11.4',
28 ]
29
30 package_root = os.path.abspath(os.path.dirname(__file__))
31
32 readme_filename = os.path.join(package_root, 'README.rst')
33 with io.open(readme_filename, encoding='utf-8') as readme_file:
34 readme = readme_file.read()
35
36 packages = [
37 package for package in setuptools.find_packages()
38 if package.startswith('google')
39 ]
40
41 namespaces = ['google']
42 if 'google.cloud' in packages:
43 namespaces.append('google.cloud')
44
45 setuptools.setup(
46 name=name,
47 version=version,
48 description=description,
49 long_description=readme,
50 author='Google LLC',
51 author_email='[email protected]',
52 license='Apache 2.0',
53 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
54 classifiers=[
55 release_status,
56 'Intended Audience :: Developers',
57 'License :: OSI Approved :: Apache Software License',
58 'Programming Language :: Python',
59 'Programming Language :: Python :: 2',
60 'Programming Language :: Python :: 2.7',
61 'Programming Language :: Python :: 3',
62 'Programming Language :: Python :: 3.4',
63 'Programming Language :: Python :: 3.5',
64 'Programming Language :: Python :: 3.6',
65 'Operating System :: OS Independent',
66 'Topic :: Internet',
67 ],
68 platforms='Posix; MacOS X; Windows',
69 packages=packages,
70 namespace_packages=namespaces,
71 install_requires=dependencies,
72 include_package_data=True,
73 zip_safe=False,
74 )
75
[end of asset/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/asset/setup.py b/asset/setup.py
--- a/asset/setup.py
+++ b/asset/setup.py
@@ -17,27 +17,38 @@
import setuptools
+# Package metadata.
+
name = 'google-cloud-cloudasset'
description = 'Cloud Asset API API client library'
version = '0.1.0'
-release_status = '3 - Alpha'
+# Should be one of:
+# 'Development Status :: 3 - Alpha'
+# 'Development Status :: 4 - Beta'
+# 'Development Status :: 5 - Production/Stable'
+release_status = 'Development Status :: 3 - Alpha'
dependencies = [
'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
'enum34; python_version < "3.4"',
'grpc-google-iam-v1<0.12dev,>=0.11.4',
]
+# Setup boilerplate below this line.
+
package_root = os.path.abspath(os.path.dirname(__file__))
readme_filename = os.path.join(package_root, 'README.rst')
with io.open(readme_filename, encoding='utf-8') as readme_file:
readme = readme_file.read()
+# Only include packages under the 'google' namespace. Do not include tests,
+# benchmarks, etc.
packages = [
package for package in setuptools.find_packages()
if package.startswith('google')
]
+# Determine which namespaces are needed.
namespaces = ['google']
if 'google.cloud' in packages:
namespaces.append('google.cloud')
@@ -59,9 +70,9 @@
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
+ 'Programming Language :: Python :: 3.7',
'Operating System :: OS Independent',
'Topic :: Internet',
],
|
{"golden_diff": "diff --git a/asset/setup.py b/asset/setup.py\n--- a/asset/setup.py\n+++ b/asset/setup.py\n@@ -17,27 +17,38 @@\n \n import setuptools\n \n+# Package metadata.\n+\n name = 'google-cloud-cloudasset'\n description = 'Cloud Asset API API client library'\n version = '0.1.0'\n-release_status = '3 - Alpha'\n+# Should be one of:\n+# 'Development Status :: 3 - Alpha'\n+# 'Development Status :: 4 - Beta'\n+# 'Development Status :: 5 - Production/Stable'\n+release_status = 'Development Status :: 3 - Alpha'\n dependencies = [\n 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n 'enum34; python_version < \"3.4\"',\n 'grpc-google-iam-v1<0.12dev,>=0.11.4',\n ]\n \n+# Setup boilerplate below this line.\n+\n package_root = os.path.abspath(os.path.dirname(__file__))\n \n readme_filename = os.path.join(package_root, 'README.rst')\n with io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n \n+# Only include packages under the 'google' namespace. Do not include tests,\n+# benchmarks, etc.\n packages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')\n ]\n \n+# Determine which namespaces are needed.\n namespaces = ['google']\n if 'google.cloud' in packages:\n namespaces.append('google.cloud')\n@@ -59,9 +70,9 @@\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n+ 'Programming Language :: Python :: 3.7',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n", "issue": "Please cut a release of Cloud Asset\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\nname = 'google-cloud-cloudasset'\ndescription = 'Cloud Asset API API client library'\nversion = '0.1.0'\nrelease_status = '3 - Alpha'\ndependencies = [\n 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n 'enum34; python_version < \"3.4\"',\n 'grpc-google-iam-v1<0.12dev,>=0.11.4',\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')\n]\n\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 
'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "asset/setup.py"}]}
| 1,240 | 444 |
gh_patches_debug_6963
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-1193
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wandb logger doesn't upload saved model checkpoint for final epoch
## 🐛 Bug
When training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb.
### To Reproduce
Colab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV-jowtT
</issue>
<code>
[start of pytorch_lightning/loggers/wandb.py]
1 r"""
2
3 .. _wandb:
4
5 WandbLogger
6 -------------
7 """
8 import os
9 from argparse import Namespace
10 from typing import Optional, List, Dict, Union, Any
11
12 import torch.nn as nn
13
14 try:
15 import wandb
16 from wandb.wandb_run import Run
17 except ImportError: # pragma: no-cover
18 raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover
19 ' install it with `pip install wandb`.')
20
21 from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only
22
23
24 class WandbLogger(LightningLoggerBase):
25 """
26 Logger for `W&B <https://www.wandb.com/>`_.
27
28 Args:
29 name (str): display name for the run.
30 save_dir (str): path where data is saved.
31 offline (bool): run offline (data can be streamed later to wandb servers).
32 id or version (str): sets the version, mainly used to resume a previous run.
33 anonymous (bool): enables or explicitly disables anonymous logging.
34 project (str): the name of the project to which this run will belong.
35 tags (list of str): tags associated with this run.
36
37 Example
38 --------
39 .. code-block:: python
40
41 from pytorch_lightning.loggers import WandbLogger
42 from pytorch_lightning import Trainer
43
44 wandb_logger = WandbLogger()
45 trainer = Trainer(logger=wandb_logger)
46 """
47
48 def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,
49 offline: bool = False, id: Optional[str] = None, anonymous: bool = False,
50 version: Optional[str] = None, project: Optional[str] = None,
51 tags: Optional[List[str]] = None, experiment=None, entity=None):
52 super().__init__()
53 self._name = name
54 self._save_dir = save_dir
55 self._anonymous = 'allow' if anonymous else None
56 self._id = version or id
57 self._tags = tags
58 self._project = project
59 self._experiment = experiment
60 self._offline = offline
61 self._entity = entity
62
63 def __getstate__(self):
64 state = self.__dict__.copy()
65 # cannot be pickled
66 state['_experiment'] = None
67 # args needed to reload correct experiment
68 state['_id'] = self.experiment.id
69 return state
70
71 @property
72 def experiment(self) -> Run:
73 r"""
74
75 Actual wandb object. To use wandb features do the following.
76
77 Example::
78
79 self.logger.experiment.some_wandb_function()
80
81 """
82 if self._experiment is None:
83 if self._offline:
84 os.environ['WANDB_MODE'] = 'dryrun'
85 self._experiment = wandb.init(
86 name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,
87 id=self._id, resume='allow', tags=self._tags, entity=self._entity)
88 return self._experiment
89
90 def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):
91 wandb.watch(model, log=log, log_freq=log_freq)
92
93 @rank_zero_only
94 def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:
95 params = self._convert_params(params)
96 self.experiment.config.update(params)
97
98 @rank_zero_only
99 def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:
100 if step is not None:
101 metrics['global_step'] = step
102 self.experiment.log(metrics)
103
104 @rank_zero_only
105 def finalize(self, status: str = 'success') -> None:
106 try:
107 exit_code = 0 if status == 'success' else 1
108 wandb.join(exit_code)
109 except TypeError:
110 wandb.join()
111
112 @property
113 def name(self) -> str:
114 return self.experiment.project_name()
115
116 @property
117 def version(self) -> str:
118 return self.experiment.id
119
[end of pytorch_lightning/loggers/wandb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py
--- a/pytorch_lightning/loggers/wandb.py
+++ b/pytorch_lightning/loggers/wandb.py
@@ -101,14 +101,6 @@
metrics['global_step'] = step
self.experiment.log(metrics)
- @rank_zero_only
- def finalize(self, status: str = 'success') -> None:
- try:
- exit_code = 0 if status == 'success' else 1
- wandb.join(exit_code)
- except TypeError:
- wandb.join()
-
@property
def name(self) -> str:
return self.experiment.project_name()
|
{"golden_diff": "diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py\n--- a/pytorch_lightning/loggers/wandb.py\n+++ b/pytorch_lightning/loggers/wandb.py\n@@ -101,14 +101,6 @@\n metrics['global_step'] = step\n self.experiment.log(metrics)\n \n- @rank_zero_only\n- def finalize(self, status: str = 'success') -> None:\n- try:\n- exit_code = 0 if status == 'success' else 1\n- wandb.join(exit_code)\n- except TypeError:\n- wandb.join()\n-\n @property\n def name(self) -> str:\n return self.experiment.project_name()\n", "issue": "Wandb logger doesn't upload saved model checkpoint for final epoch\n## \ud83d\udc1b Bug\r\n\r\nWhen training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb.\r\n\r\n### To Reproduce\r\n\r\nColab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV-jowtT\n", "before_files": [{"content": "r\"\"\"\n\n.. _wandb:\n\nWandbLogger\n-------------\n\"\"\"\nimport os\nfrom argparse import Namespace\nfrom typing import Optional, List, Dict, Union, Any\n\nimport torch.nn as nn\n\ntry:\n import wandb\n from wandb.wandb_run import Run\nexcept ImportError: # pragma: no-cover\n raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover\n ' install it with `pip install wandb`.')\n\nfrom pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only\n\n\nclass WandbLogger(LightningLoggerBase):\n \"\"\"\n Logger for `W&B <https://www.wandb.com/>`_.\n\n Args:\n name (str): display name for the run.\n save_dir (str): path where data is saved.\n offline (bool): run offline (data can be streamed later to wandb servers).\n id or version (str): sets the version, mainly used to resume a previous run.\n anonymous (bool): enables or explicitly disables anonymous logging.\n project (str): the name of the project to which this run will belong.\n tags (list of str): tags associated with this run.\n\n Example\n --------\n .. code-block:: python\n\n from pytorch_lightning.loggers import WandbLogger\n from pytorch_lightning import Trainer\n\n wandb_logger = WandbLogger()\n trainer = Trainer(logger=wandb_logger)\n \"\"\"\n\n def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,\n offline: bool = False, id: Optional[str] = None, anonymous: bool = False,\n version: Optional[str] = None, project: Optional[str] = None,\n tags: Optional[List[str]] = None, experiment=None, entity=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n self._anonymous = 'allow' if anonymous else None\n self._id = version or id\n self._tags = tags\n self._project = project\n self._experiment = experiment\n self._offline = offline\n self._entity = entity\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # cannot be pickled\n state['_experiment'] = None\n # args needed to reload correct experiment\n state['_id'] = self.experiment.id\n return state\n\n @property\n def experiment(self) -> Run:\n r\"\"\"\n\n Actual wandb object. 
To use wandb features do the following.\n\n Example::\n\n self.logger.experiment.some_wandb_function()\n\n \"\"\"\n if self._experiment is None:\n if self._offline:\n os.environ['WANDB_MODE'] = 'dryrun'\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n id=self._id, resume='allow', tags=self._tags, entity=self._entity)\n return self._experiment\n\n def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):\n wandb.watch(model, log=log, log_freq=log_freq)\n\n @rank_zero_only\n def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:\n params = self._convert_params(params)\n self.experiment.config.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:\n if step is not None:\n metrics['global_step'] = step\n self.experiment.log(metrics)\n\n @rank_zero_only\n def finalize(self, status: str = 'success') -> None:\n try:\n exit_code = 0 if status == 'success' else 1\n wandb.join(exit_code)\n except TypeError:\n wandb.join()\n\n @property\n def name(self) -> str:\n return self.experiment.project_name()\n\n @property\n def version(self) -> str:\n return self.experiment.id\n", "path": "pytorch_lightning/loggers/wandb.py"}]}
| 1,803 | 168 |
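For context on the record above, a minimal usage sketch of the logger being patched, adapted from the docstring embedded in the prompt; it assumes pytorch-lightning and wandb are installed and is not part of the fix itself.

```python
# Usage sketch only; the fix simply stops overriding finalize(), so checkpoint syncing
# at the end of training is left to the default Trainer/wandb behaviour.
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger()            # optionally: name=..., project=..., save_dir=...
trainer = Trainer(logger=wandb_logger)  # trainer.fit(model) would then log runs to W&B
```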
gh_patches_debug_36338
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-3199
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `--conf-source` option to `kedro ipython`
## Description
Right now we can pass `env` and `params` as options with `kedro ipython` / `%reload_kedro`. Add `--conf-source` option as well.
## Context
User question from slack :
> Speaking of kedro ipython :
Is there a way to specify /override the conf_source ? :slightly_smiling_face:
> My problem comes from the fact that we have a single pipeline / repo for all our clients.
To better enforce “tenant isolation”
I have organized conf and data as follow:
conf
└── client_A
├── base
└── local
└── client_B
├── base
└── local
data
├── client_A
│ └── 01_raw
│ └── 02_intermediate
│ └── ...
└── client_B
│ └── 01_raw
│ └── 02_intermediate
│ └── ...conf
I did so because I liked the idea of not being able to run the pipeline without being explicit about which client it should be done for…
(the structure above makes it so that kedro run without --conf-source will raise an error)
Another reason I did so (if I recall correctly) was to avoid having “duplicate keys” conflicts in the different yaml files across clients…
## Possible Implementation
Changes to be made in `kedro/ipython/__init__.py`
PR for when `--conf-source` was added to `kedro run` - https://github.com/kedro-org/kedro/pull/2117
</issue>
<code>
[start of kedro/ipython/__init__.py]
1 """
2 This script creates an IPython extension to load Kedro-related variables in
3 local scope.
4 """
5 from __future__ import annotations
6
7 import logging
8 import sys
9 from pathlib import Path
10 from typing import Any
11
12 from IPython import get_ipython
13 from IPython.core.magic import needs_local_scope, register_line_magic
14 from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
15
16 from kedro.framework.cli import load_entry_points
17 from kedro.framework.cli.project import PARAMS_ARG_HELP
18 from kedro.framework.cli.utils import ENV_HELP, _split_params
19 from kedro.framework.project import (
20 LOGGING, # noqa
21 configure_project,
22 pipelines,
23 )
24 from kedro.framework.session import KedroSession
25 from kedro.framework.startup import _is_project, bootstrap_project
26
27 logger = logging.getLogger(__name__)
28
29
30 def load_ipython_extension(ipython):
31 """
32 Main entry point when %load_ext kedro.ipython is executed, either manually or
33 automatically through `kedro ipython` or `kedro jupyter lab/notebook`.
34 IPython will look for this function specifically.
35 See https://ipython.readthedocs.io/en/stable/config/extensions/index.html
36 """
37 ipython.register_magic_function(magic_reload_kedro, magic_name="reload_kedro")
38
39 if _find_kedro_project(Path.cwd()) is None:
40 logger.warning(
41 "Kedro extension was registered but couldn't find a Kedro project. "
42 "Make sure you run '%reload_kedro <project_root>'."
43 )
44 return
45
46 reload_kedro()
47
48
49 @needs_local_scope
50 @magic_arguments()
51 @argument(
52 "path",
53 type=str,
54 help=(
55 "Path to the project root directory. If not given, use the previously set"
56 "project root."
57 ),
58 nargs="?",
59 default=None,
60 )
61 @argument("-e", "--env", type=str, default=None, help=ENV_HELP)
62 @argument(
63 "--params",
64 type=lambda value: _split_params(None, None, value),
65 default=None,
66 help=PARAMS_ARG_HELP,
67 )
68 def magic_reload_kedro(line: str, local_ns: dict[str, Any] = None):
69 """
70 The `%reload_kedro` IPython line magic.
71 See https://kedro.readthedocs.io/en/stable/notebooks_and_ipython/kedro_and_notebooks.html#reload-kedro-line-magic # noqa: line-too-long
72 for more.
73 """
74 args = parse_argstring(magic_reload_kedro, line)
75 reload_kedro(args.path, args.env, args.params, local_ns)
76
77
78 def reload_kedro(
79 path: str = None,
80 env: str = None,
81 extra_params: dict[str, Any] = None,
82 local_namespace: dict[str, Any] | None = None,
83 ) -> None: # pragma: no cover
84 """Function that underlies the %reload_kedro Line magic. This should not be imported
85 or run directly but instead invoked through %reload_kedro."""
86
87 project_path = _resolve_project_path(path, local_namespace)
88
89 metadata = bootstrap_project(project_path)
90 _remove_cached_modules(metadata.package_name)
91 configure_project(metadata.package_name)
92
93 session = KedroSession.create(
94 metadata.package_name, project_path, env=env, extra_params=extra_params
95 )
96 context = session.load_context()
97 catalog = context.catalog
98
99 get_ipython().push(
100 variables={
101 "context": context,
102 "catalog": catalog,
103 "session": session,
104 "pipelines": pipelines,
105 }
106 )
107
108 logger.info("Kedro project %s", str(metadata.project_name))
109 logger.info(
110 "Defined global variable 'context', 'session', 'catalog' and 'pipelines'"
111 )
112
113 for line_magic in load_entry_points("line_magic"):
114 register_line_magic(needs_local_scope(line_magic))
115 logger.info("Registered line magic '%s'", line_magic.__name__) # type: ignore
116
117
118 def _resolve_project_path(
119 path: str | None = None, local_namespace: dict[str, Any] | None = None
120 ) -> Path:
121 """
122 Resolve the project path to use with reload_kedro, updating or adding it
123 (in-place) to the local ipython Namespace (``local_namespace``) if necessary.
124
125 Arguments:
126 path: the path to use as a string object
127 local_namespace: Namespace with local variables of the scope where the line
128 magic is invoked in a dict.
129 """
130 if path:
131 project_path = Path(path).expanduser().resolve()
132 else:
133 if local_namespace and "context" in local_namespace:
134 # noqa: protected-access
135 project_path = local_namespace["context"]._project_path
136 else:
137 project_path = _find_kedro_project(Path.cwd())
138 if project_path:
139 logger.info(
140 "Resolved project path as: %s.\nTo set a different path, run "
141 "'%%reload_kedro <project_root>'",
142 project_path,
143 )
144
145 # noqa: protected-access
146 if (
147 project_path
148 and local_namespace
149 and "context" in local_namespace
150 and project_path != local_namespace["context"]._project_path
151 ):
152 logger.info("Updating path to Kedro project: %s...", project_path)
153
154 return project_path
155
156
157 def _remove_cached_modules(package_name): # pragma: no cover
158 to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]
159 # `del` is used instead of `reload()` because: If the new version of a module does not
160 # define a name that was defined by the old version, the old definition remains.
161 for module in to_remove:
162 del sys.modules[module]
163
164
165 def _find_kedro_project(current_dir: Path): # pragma: no cover
166 while current_dir != current_dir.parent:
167 if _is_project(current_dir):
168 return current_dir
169 current_dir = current_dir.parent
170
171 return None
172
[end of kedro/ipython/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kedro/ipython/__init__.py b/kedro/ipython/__init__.py
--- a/kedro/ipython/__init__.py
+++ b/kedro/ipython/__init__.py
@@ -14,7 +14,7 @@
from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring
from kedro.framework.cli import load_entry_points
-from kedro.framework.cli.project import PARAMS_ARG_HELP
+from kedro.framework.cli.project import CONF_SOURCE_HELP, PARAMS_ARG_HELP
from kedro.framework.cli.utils import ENV_HELP, _split_params
from kedro.framework.project import (
LOGGING, # noqa
@@ -65,14 +65,17 @@
default=None,
help=PARAMS_ARG_HELP,
)
-def magic_reload_kedro(line: str, local_ns: dict[str, Any] = None):
+@argument("--conf-source", type=str, default=None, help=CONF_SOURCE_HELP)
+def magic_reload_kedro(
+ line: str, local_ns: dict[str, Any] = None, conf_source: str = None
+):
"""
The `%reload_kedro` IPython line magic.
See https://kedro.readthedocs.io/en/stable/notebooks_and_ipython/kedro_and_notebooks.html#reload-kedro-line-magic # noqa: line-too-long
for more.
"""
args = parse_argstring(magic_reload_kedro, line)
- reload_kedro(args.path, args.env, args.params, local_ns)
+ reload_kedro(args.path, args.env, args.params, local_ns, args.conf_source)
def reload_kedro(
@@ -80,6 +83,7 @@
env: str = None,
extra_params: dict[str, Any] = None,
local_namespace: dict[str, Any] | None = None,
+ conf_source: str = None,
) -> None: # pragma: no cover
"""Function that underlies the %reload_kedro Line magic. This should not be imported
or run directly but instead invoked through %reload_kedro."""
@@ -91,7 +95,11 @@
configure_project(metadata.package_name)
session = KedroSession.create(
- metadata.package_name, project_path, env=env, extra_params=extra_params
+ metadata.package_name,
+ project_path,
+ env=env,
+ extra_params=extra_params,
+ conf_source=conf_source,
)
context = session.load_context()
catalog = context.catalog
|
{"golden_diff": "diff --git a/kedro/ipython/__init__.py b/kedro/ipython/__init__.py\n--- a/kedro/ipython/__init__.py\n+++ b/kedro/ipython/__init__.py\n@@ -14,7 +14,7 @@\n from IPython.core.magic_arguments import argument, magic_arguments, parse_argstring\n \n from kedro.framework.cli import load_entry_points\n-from kedro.framework.cli.project import PARAMS_ARG_HELP\n+from kedro.framework.cli.project import CONF_SOURCE_HELP, PARAMS_ARG_HELP\n from kedro.framework.cli.utils import ENV_HELP, _split_params\n from kedro.framework.project import (\n LOGGING, # noqa\n@@ -65,14 +65,17 @@\n default=None,\n help=PARAMS_ARG_HELP,\n )\n-def magic_reload_kedro(line: str, local_ns: dict[str, Any] = None):\n+@argument(\"--conf-source\", type=str, default=None, help=CONF_SOURCE_HELP)\n+def magic_reload_kedro(\n+ line: str, local_ns: dict[str, Any] = None, conf_source: str = None\n+):\n \"\"\"\n The `%reload_kedro` IPython line magic.\n See https://kedro.readthedocs.io/en/stable/notebooks_and_ipython/kedro_and_notebooks.html#reload-kedro-line-magic # noqa: line-too-long\n for more.\n \"\"\"\n args = parse_argstring(magic_reload_kedro, line)\n- reload_kedro(args.path, args.env, args.params, local_ns)\n+ reload_kedro(args.path, args.env, args.params, local_ns, args.conf_source)\n \n \n def reload_kedro(\n@@ -80,6 +83,7 @@\n env: str = None,\n extra_params: dict[str, Any] = None,\n local_namespace: dict[str, Any] | None = None,\n+ conf_source: str = None,\n ) -> None: # pragma: no cover\n \"\"\"Function that underlies the %reload_kedro Line magic. This should not be imported\n or run directly but instead invoked through %reload_kedro.\"\"\"\n@@ -91,7 +95,11 @@\n configure_project(metadata.package_name)\n \n session = KedroSession.create(\n- metadata.package_name, project_path, env=env, extra_params=extra_params\n+ metadata.package_name,\n+ project_path,\n+ env=env,\n+ extra_params=extra_params,\n+ conf_source=conf_source,\n )\n context = session.load_context()\n catalog = context.catalog\n", "issue": "Add `--conf-source` option to `kedro ipython`\n## Description\r\nRight now we can pass `env` and `params` as options with `kedro ipython` / `%reload_kedro`. Add `--conf-source` option as well.\r\n\r\n## Context\r\nUser question from slack : \r\n\r\n> Speaking of kedro ipython :\r\nIs there a way to specify /override the conf_source ? 
:slightly_smiling_face:\r\n\r\n> My problem comes from the fact that we have a single pipeline / repo for all our clients.\r\nTo better enforce \u201ctenant isolation\u201d\r\nI have organized conf and data as follow:\r\nconf\r\n\u2514\u2500\u2500 client_A\r\n \u251c\u2500\u2500 base\r\n \u2514\u2500\u2500 local\r\n\u2514\u2500\u2500 client_B\r\n \u251c\u2500\u2500 base\r\n \u2514\u2500\u2500 local\r\ndata\r\n\u251c\u2500\u2500 client_A\r\n\u2502 \u2514\u2500\u2500 01_raw\r\n\u2502 \u2514\u2500\u2500 02_intermediate\r\n\u2502 \u2514\u2500\u2500 ...\r\n\u2514\u2500\u2500 client_B\r\n\u2502 \u2514\u2500\u2500 01_raw\r\n\u2502 \u2514\u2500\u2500 02_intermediate\r\n\u2502 \u2514\u2500\u2500 ...conf\r\nI did so because I liked the idea of not being able to run the pipeline without being explicit about which client it should be done for\u2026\r\n(the structure above makes it so that kedro run without --conf-source will raise an error)\r\nAnother reason I did so (if I recall correctly) was to avoid having \u201cduplicate keys\u201d conflicts in the different yaml files across clients\u2026 \r\n\r\n## Possible Implementation\r\nChanges to be made in `kedro/ipython/__init__.py`\r\nPR for when `--conf-source` was added to `kedro run` - https://github.com/kedro-org/kedro/pull/2117\r\n\n", "before_files": [{"content": "\"\"\"\nThis script creates an IPython extension to load Kedro-related variables in\nlocal scope.\n\"\"\"\nfrom __future__ import annotations\n\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any\n\nfrom IPython import get_ipython\nfrom IPython.core.magic import needs_local_scope, register_line_magic\nfrom IPython.core.magic_arguments import argument, magic_arguments, parse_argstring\n\nfrom kedro.framework.cli import load_entry_points\nfrom kedro.framework.cli.project import PARAMS_ARG_HELP\nfrom kedro.framework.cli.utils import ENV_HELP, _split_params\nfrom kedro.framework.project import (\n LOGGING, # noqa\n configure_project,\n pipelines,\n)\nfrom kedro.framework.session import KedroSession\nfrom kedro.framework.startup import _is_project, bootstrap_project\n\nlogger = logging.getLogger(__name__)\n\n\ndef load_ipython_extension(ipython):\n \"\"\"\n Main entry point when %load_ext kedro.ipython is executed, either manually or\n automatically through `kedro ipython` or `kedro jupyter lab/notebook`.\n IPython will look for this function specifically.\n See https://ipython.readthedocs.io/en/stable/config/extensions/index.html\n \"\"\"\n ipython.register_magic_function(magic_reload_kedro, magic_name=\"reload_kedro\")\n\n if _find_kedro_project(Path.cwd()) is None:\n logger.warning(\n \"Kedro extension was registered but couldn't find a Kedro project. \"\n \"Make sure you run '%reload_kedro <project_root>'.\"\n )\n return\n\n reload_kedro()\n\n\n@needs_local_scope\n@magic_arguments()\n@argument(\n \"path\",\n type=str,\n help=(\n \"Path to the project root directory. 
If not given, use the previously set\"\n \"project root.\"\n ),\n nargs=\"?\",\n default=None,\n)\n@argument(\"-e\", \"--env\", type=str, default=None, help=ENV_HELP)\n@argument(\n \"--params\",\n type=lambda value: _split_params(None, None, value),\n default=None,\n help=PARAMS_ARG_HELP,\n)\ndef magic_reload_kedro(line: str, local_ns: dict[str, Any] = None):\n \"\"\"\n The `%reload_kedro` IPython line magic.\n See https://kedro.readthedocs.io/en/stable/notebooks_and_ipython/kedro_and_notebooks.html#reload-kedro-line-magic # noqa: line-too-long\n for more.\n \"\"\"\n args = parse_argstring(magic_reload_kedro, line)\n reload_kedro(args.path, args.env, args.params, local_ns)\n\n\ndef reload_kedro(\n path: str = None,\n env: str = None,\n extra_params: dict[str, Any] = None,\n local_namespace: dict[str, Any] | None = None,\n) -> None: # pragma: no cover\n \"\"\"Function that underlies the %reload_kedro Line magic. This should not be imported\n or run directly but instead invoked through %reload_kedro.\"\"\"\n\n project_path = _resolve_project_path(path, local_namespace)\n\n metadata = bootstrap_project(project_path)\n _remove_cached_modules(metadata.package_name)\n configure_project(metadata.package_name)\n\n session = KedroSession.create(\n metadata.package_name, project_path, env=env, extra_params=extra_params\n )\n context = session.load_context()\n catalog = context.catalog\n\n get_ipython().push(\n variables={\n \"context\": context,\n \"catalog\": catalog,\n \"session\": session,\n \"pipelines\": pipelines,\n }\n )\n\n logger.info(\"Kedro project %s\", str(metadata.project_name))\n logger.info(\n \"Defined global variable 'context', 'session', 'catalog' and 'pipelines'\"\n )\n\n for line_magic in load_entry_points(\"line_magic\"):\n register_line_magic(needs_local_scope(line_magic))\n logger.info(\"Registered line magic '%s'\", line_magic.__name__) # type: ignore\n\n\ndef _resolve_project_path(\n path: str | None = None, local_namespace: dict[str, Any] | None = None\n) -> Path:\n \"\"\"\n Resolve the project path to use with reload_kedro, updating or adding it\n (in-place) to the local ipython Namespace (``local_namespace``) if necessary.\n\n Arguments:\n path: the path to use as a string object\n local_namespace: Namespace with local variables of the scope where the line\n magic is invoked in a dict.\n \"\"\"\n if path:\n project_path = Path(path).expanduser().resolve()\n else:\n if local_namespace and \"context\" in local_namespace:\n # noqa: protected-access\n project_path = local_namespace[\"context\"]._project_path\n else:\n project_path = _find_kedro_project(Path.cwd())\n if project_path:\n logger.info(\n \"Resolved project path as: %s.\\nTo set a different path, run \"\n \"'%%reload_kedro <project_root>'\",\n project_path,\n )\n\n # noqa: protected-access\n if (\n project_path\n and local_namespace\n and \"context\" in local_namespace\n and project_path != local_namespace[\"context\"]._project_path\n ):\n logger.info(\"Updating path to Kedro project: %s...\", project_path)\n\n return project_path\n\n\ndef _remove_cached_modules(package_name): # pragma: no cover\n to_remove = [mod for mod in sys.modules if mod.startswith(package_name)]\n # `del` is used instead of `reload()` because: If the new version of a module does not\n # define a name that was defined by the old version, the old definition remains.\n for module in to_remove:\n del sys.modules[module]\n\n\ndef _find_kedro_project(current_dir: Path): # pragma: no cover\n while current_dir != current_dir.parent:\n if 
_is_project(current_dir):\n return current_dir\n current_dir = current_dir.parent\n\n return None\n", "path": "kedro/ipython/__init__.py"}]}
| 2,635 | 570 |
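A hypothetical sketch of how the option added in this record would be used, with the per-client layout taken from the issue text; the project path, environment, and conf paths are illustrative.

```python
# Hypothetical usage once --conf-source is available on %reload_kedro (per the diff).
# In a notebook:   %reload_kedro --env=local --conf-source=conf/client_A
from kedro.ipython import reload_kedro

# Equivalent call through the helper the magic delegates to, using the patched signature:
reload_kedro(path="/path/to/project", env="local", conf_source="conf/client_A")
```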
gh_patches_debug_35487
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3129
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider jiffylube is broken
During the global build at 2021-09-01-14-42-16, spider **jiffylube** failed with **0 features** and **49 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/logs/jiffylube.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson))
</issue>
<code>
[start of locations/spiders/jiffylube.py]
1 # -*- coding: utf-8 -*-
2 import json
3
4 import scrapy
5
6 from locations.items import GeojsonPointItem
7 from locations.hours import OpeningHours
8
9
10 STATES = [
11 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',
12 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',
13 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',
14 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',
15 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY'
16 ]
17
18 DAY_MAPPING = {
19 'Monday': 'Mo',
20 'Tuesday': 'Tu',
21 'Wednesday': 'We',
22 'Thursday': 'Th',
23 'Friday': 'Fr',
24 'Saturday': 'Sa',
25 'Sunday': 'Su'
26 }
27
28 class JiffyLubeSpider(scrapy.Spider):
29 name = "jiffylube"
30 item_attributes = {'brand': "Jiffy Lube"}
31 allowed_domains = ["www.jiffylube.com"]
32
33 def start_requests(self):
34 template = 'https://www.jiffylube.com/api/locations?state={state}'
35
36 headers = {
37 'Accept': 'application/json',
38 }
39
40 for state in STATES:
41 yield scrapy.http.FormRequest(
42 url=template.format(state=state),
43 method='GET',
44 headers=headers,
45 callback=self.parse
46 )
47 def parse(self, response):
48 jsonresponse = json.loads(response.body_as_unicode())
49
50 for stores in jsonresponse:
51 store = json.dumps(stores)
52 store_data = json.loads(store)
53
54 properties = {
55 'name': store_data["nickname"],
56 'ref': store_data["id"],
57 'addr_full': store_data["address"],
58 'city': store_data["city"],
59 'state': store_data["state"],
60 'postcode': store_data["postal_code"].strip(),
61 'country': store_data["country"],
62 'phone': store_data["phone_main"],
63 'lat': float(store_data["coordinates"]["latitude"]),
64 'lon': float(store_data["coordinates"]["longitude"]),
65 'website': "https://www.jiffylube.com{}".format(store_data["_links"]["_self"])
66 }
67
68 hours = store_data["hours_schema"]
69
70 if hours:
71 properties['opening_hours'] = self.process_hours(hours)
72
73 yield GeojsonPointItem(**properties)
74
75 def process_hours(self, hours):
76 opening_hours = OpeningHours()
77
78 for hour in hours:
79 day = hour["name"]
80 open_time = hour["time_open"]
81 close_time = hour["time_close"]
82
83 opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,
84 time_format='%H:%M')
85 return opening_hours.as_opening_hours()
[end of locations/spiders/jiffylube.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/jiffylube.py b/locations/spiders/jiffylube.py
--- a/locations/spiders/jiffylube.py
+++ b/locations/spiders/jiffylube.py
@@ -29,30 +29,27 @@
name = "jiffylube"
item_attributes = {'brand': "Jiffy Lube"}
allowed_domains = ["www.jiffylube.com"]
+ start_urls = (
+ 'https://www.jiffylube.com/api/locations',
+ )
- def start_requests(self):
- template = 'https://www.jiffylube.com/api/locations?state={state}'
- headers = {
- 'Accept': 'application/json',
- }
-
- for state in STATES:
- yield scrapy.http.FormRequest(
- url=template.format(state=state),
- method='GET',
- headers=headers,
- callback=self.parse
- )
def parse(self, response):
- jsonresponse = json.loads(response.body_as_unicode())
+ stores = json.loads(response.text)
+
+
+ for store in stores:
+ store_url = "https://www.jiffylube.com/api" + store["_links"]["_self"]
+ yield scrapy.Request(
+ store_url,
+ callback=self.parse_store
+ )
- for stores in jsonresponse:
- store = json.dumps(stores)
- store_data = json.loads(store)
+
+ def parse_store(self, response):
+ store_data = json.loads(response.text)
properties = {
- 'name': store_data["nickname"],
'ref': store_data["id"],
'addr_full': store_data["address"],
'city': store_data["city"],
@@ -64,22 +61,5 @@
'lon': float(store_data["coordinates"]["longitude"]),
'website': "https://www.jiffylube.com{}".format(store_data["_links"]["_self"])
}
-
- hours = store_data["hours_schema"]
-
- if hours:
- properties['opening_hours'] = self.process_hours(hours)
-
+
yield GeojsonPointItem(**properties)
-
- def process_hours(self, hours):
- opening_hours = OpeningHours()
-
- for hour in hours:
- day = hour["name"]
- open_time = hour["time_open"]
- close_time = hour["time_close"]
-
- opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,
- time_format='%H:%M')
- return opening_hours.as_opening_hours()
\ No newline at end of file
|
{"golden_diff": "diff --git a/locations/spiders/jiffylube.py b/locations/spiders/jiffylube.py\n--- a/locations/spiders/jiffylube.py\n+++ b/locations/spiders/jiffylube.py\n@@ -29,30 +29,27 @@\n name = \"jiffylube\"\n item_attributes = {'brand': \"Jiffy Lube\"}\n allowed_domains = [\"www.jiffylube.com\"]\n+ start_urls = (\n+ 'https://www.jiffylube.com/api/locations',\n+ )\n \n- def start_requests(self):\n- template = 'https://www.jiffylube.com/api/locations?state={state}'\n \n- headers = {\n- 'Accept': 'application/json',\n- }\n-\n- for state in STATES:\n- yield scrapy.http.FormRequest(\n- url=template.format(state=state),\n- method='GET',\n- headers=headers,\n- callback=self.parse\n- )\n def parse(self, response):\n- jsonresponse = json.loads(response.body_as_unicode())\n+ stores = json.loads(response.text)\n+ \n+\n+ for store in stores:\n+ store_url = \"https://www.jiffylube.com/api\" + store[\"_links\"][\"_self\"]\n+ yield scrapy.Request(\n+ store_url,\n+ callback=self.parse_store\n+ )\n \n- for stores in jsonresponse:\n- store = json.dumps(stores)\n- store_data = json.loads(store)\n+\n+ def parse_store(self, response):\n+ store_data = json.loads(response.text)\n \n properties = {\n- 'name': store_data[\"nickname\"],\n 'ref': store_data[\"id\"],\n 'addr_full': store_data[\"address\"],\n 'city': store_data[\"city\"],\n@@ -64,22 +61,5 @@\n 'lon': float(store_data[\"coordinates\"][\"longitude\"]),\n 'website': \"https://www.jiffylube.com{}\".format(store_data[\"_links\"][\"_self\"])\n }\n-\n- hours = store_data[\"hours_schema\"]\n-\n- if hours:\n- properties['opening_hours'] = self.process_hours(hours)\n-\n+ \n yield GeojsonPointItem(**properties)\n-\n- def process_hours(self, hours):\n- opening_hours = OpeningHours()\n-\n- for hour in hours:\n- day = hour[\"name\"]\n- open_time = hour[\"time_open\"]\n- close_time = hour[\"time_close\"]\n-\n- opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,\n- time_format='%H:%M')\n- return opening_hours.as_opening_hours()\n\\ No newline at end of file\n", "issue": "Spider jiffylube is broken\nDuring the global build at 2021-09-01-14-42-16, spider **jiffylube** failed with **0 features** and **49 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/logs/jiffylube.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nSTATES = [\n 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',\n 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',\n 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',\n 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',\n 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY'\n]\n\nDAY_MAPPING = {\n 'Monday': 'Mo',\n 'Tuesday': 'Tu',\n 'Wednesday': 'We',\n 'Thursday': 'Th',\n 'Friday': 'Fr',\n 'Saturday': 'Sa',\n 'Sunday': 'Su'\n}\n\nclass JiffyLubeSpider(scrapy.Spider):\n name = \"jiffylube\"\n item_attributes = {'brand': \"Jiffy Lube\"}\n allowed_domains = [\"www.jiffylube.com\"]\n\n def start_requests(self):\n template = 'https://www.jiffylube.com/api/locations?state={state}'\n\n headers = {\n 'Accept': 
'application/json',\n }\n\n for state in STATES:\n yield scrapy.http.FormRequest(\n url=template.format(state=state),\n method='GET',\n headers=headers,\n callback=self.parse\n )\n def parse(self, response):\n jsonresponse = json.loads(response.body_as_unicode())\n\n for stores in jsonresponse:\n store = json.dumps(stores)\n store_data = json.loads(store)\n\n properties = {\n 'name': store_data[\"nickname\"],\n 'ref': store_data[\"id\"],\n 'addr_full': store_data[\"address\"],\n 'city': store_data[\"city\"],\n 'state': store_data[\"state\"],\n 'postcode': store_data[\"postal_code\"].strip(),\n 'country': store_data[\"country\"],\n 'phone': store_data[\"phone_main\"],\n 'lat': float(store_data[\"coordinates\"][\"latitude\"]),\n 'lon': float(store_data[\"coordinates\"][\"longitude\"]),\n 'website': \"https://www.jiffylube.com{}\".format(store_data[\"_links\"][\"_self\"])\n }\n\n hours = store_data[\"hours_schema\"]\n\n if hours:\n properties['opening_hours'] = self.process_hours(hours)\n\n yield GeojsonPointItem(**properties)\n\n def process_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n day = hour[\"name\"]\n open_time = hour[\"time_open\"]\n close_time = hour[\"time_close\"]\n\n opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,\n time_format='%H:%M')\n return opening_hours.as_opening_hours()", "path": "locations/spiders/jiffylube.py"}]}
| 1,597 | 586 |
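A condensed sketch of the two-step crawl the golden diff above moves the spider to; identifiers are copied from the diff, and the snippet is not a drop-in replacement for the full spider class.

```python
import json
import scrapy

# Condensed from the diff: one listing request fanned out into per-store detail requests.
def parse(self, response):
    stores = json.loads(response.text)  # response.text replaces the older body_as_unicode()
    for store in stores:
        store_url = "https://www.jiffylube.com/api" + store["_links"]["_self"]
        yield scrapy.Request(store_url, callback=self.parse_store)
```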
gh_patches_debug_1933
|
rasdani/github-patches
|
git_diff
|
spack__spack-5099
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
spack find : always prompt 0 installed packages
On a clean `develop` checkout :
```
$ git clone https://github.com/LLNL/spack.git
Cloning into 'spack'...
remote: Counting objects: 25613, done.
remote: Compressing objects: 100% (42/42), done.
remote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557
Receiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.
Resolving deltas: 100% (13031/13031), done.
Checking connectivity... done.
$ cd spack
$ . share/spack/setup-env.sh
$ spack compilers
==> Available compilers
-- gcc ----------------------------------------------------------
[email protected]
$ spack install zlib
==> Installing zlib
==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz
######################################################################## 100,0%
==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz
==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
==> No patches needed for zlib
==> Building zlib
==> Successfully installed zlib
Fetch: 0.01s. Build: 3.69s. Total: 3.70s.
[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
$ spack find
==> 0 installed packages.
$ spack install szip
==> Installing szip
==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz
######################################################################## 100,0%
==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz
==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
==> No patches needed for szip
==> Building szip
==> Successfully installed szip
Fetch: 0.01s. Build: 8.09s. Total: 8.10s.
[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
$ spack find
==> 0 installed packages.
```
The db seems to be written correctly :
```
database:
installs:
d6pdl6xvnvap6ihrqcqtgvweghbszmix:
explicit: true
installed: true
path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
ref_count: 0
spec:
zlib:
arch: linux-x86_64
compiler:
name: gcc
version: '4.8'
dependencies: {}
namespace: builtin
parameters:
cflags: []
cppflags: []
cxxflags: []
fflags: []
ldflags: []
ldlibs: []
version: 1.2.8
esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:
explicit: true
installed: true
path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
ref_count: 0
spec:
szip:
arch: linux-x86_64
compiler:
name: gcc
version: '4.8'
dependencies: {}
namespace: builtin
parameters:
cflags: []
cppflags: []
cxxflags: []
fflags: []
ldflags: []
ldlibs: []
version: '2.1'
version: 0.9.1
```
</issue>
<code>
[start of var/spack/repos/builtin/packages/h5z-zfp/package.py]
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26
27
28 class H5zZfp(MakefilePackage):
29 """A highly flexible floating point and integer compression plugin for the
30 HDF5 library using ZFP compression."""
31
32 homepage = "http://h5z-zfp.readthedocs.io/en/latest"
33 url = "https://github.com/LLNL/H5Z-ZFP"
34
35 version('develop', git='https://github.com/LLNL/H5Z-ZFP.git', tag='master')
36 version('0.7.0', git='https://github.com/LLNL/H5Z-ZFP.git', commit='58ac811')
37
38 variant('fortran', default=True, description='Enable Fortran support')
39
40 depends_on('hdf5')
41 # depends_on('zfp bsws=8')
42 depends_on('zfp')
43
44 @property
45 def make_defs(self):
46 make_defs = [
47 'PREFIX=%s' % prefix,
48 'CC=%s' % spack_cc,
49 'HDF5_HOME=%s' % self.spec['hdf5'].prefix,
50 'ZFP_HOME=%s' % self.spec['zfp'].prefix]
51
52 if '+fortran' in self.spec and spack_fc:
53 make_defs += ['FC=%s' % spack_fc]
54
55 return make_defs
56
57 @property
58 def build_targets(self):
59 targets = ['all']
60 return self.make_defs + targets
61
62 @property
63 def install_targets(self):
64 make_args = ['install']
65 return make_args + self.make_defs
66
[end of var/spack/repos/builtin/packages/h5z-zfp/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/h5z-zfp/package.py b/var/spack/repos/builtin/packages/h5z-zfp/package.py
--- a/var/spack/repos/builtin/packages/h5z-zfp/package.py
+++ b/var/spack/repos/builtin/packages/h5z-zfp/package.py
@@ -38,8 +38,7 @@
variant('fortran', default=True, description='Enable Fortran support')
depends_on('hdf5')
-# depends_on('zfp bsws=8')
- depends_on('zfp')
+ depends_on('zfp bsws=8')
@property
def make_defs(self):
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/h5z-zfp/package.py b/var/spack/repos/builtin/packages/h5z-zfp/package.py\n--- a/var/spack/repos/builtin/packages/h5z-zfp/package.py\n+++ b/var/spack/repos/builtin/packages/h5z-zfp/package.py\n@@ -38,8 +38,7 @@\n variant('fortran', default=True, description='Enable Fortran support')\n \n depends_on('hdf5')\n-# depends_on('zfp bsws=8')\n- depends_on('zfp')\n+ depends_on('zfp bsws=8')\n \n @property\n def make_defs(self):\n", "issue": "spack find : always prompt 0 installed packages\nOn a clean `develop` checkout : \n\n```\n$ git clone https://github.com/LLNL/spack.git\nCloning into 'spack'...\nremote: Counting objects: 25613, done.\nremote: Compressing objects: 100% (42/42), done.\nremote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557\nReceiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.\nResolving deltas: 100% (13031/13031), done.\nChecking connectivity... done.\n\n$ cd spack\n$ . share/spack/setup-env.sh \n$ spack compilers\n==> Available compilers\n-- gcc ----------------------------------------------------------\[email protected]\n\n$ spack install zlib\n==> Installing zlib\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n==> No patches needed for zlib\n==> Building zlib\n==> Successfully installed zlib\n Fetch: 0.01s. Build: 3.69s. Total: 3.70s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n\n$ spack find\n==> 0 installed packages.\n\n$ spack install szip\n==> Installing szip\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n==> No patches needed for szip\n==> Building szip\n==> Successfully installed szip\n Fetch: 0.01s. Build: 8.09s. 
Total: 8.10s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n\n$ spack find \n==> 0 installed packages.\n```\n\nThe db seems to be written correctly : \n\n```\ndatabase:\n installs:\n d6pdl6xvnvap6ihrqcqtgvweghbszmix:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n ref_count: 0\n spec:\n zlib:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: 1.2.8\n esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n ref_count: 0\n spec:\n szip:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: '2.1'\n version: 0.9.1\n```\n\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass H5zZfp(MakefilePackage):\n \"\"\"A highly flexible floating point and integer compression plugin for the\n HDF5 library using ZFP compression.\"\"\"\n\n homepage = \"http://h5z-zfp.readthedocs.io/en/latest\"\n url = \"https://github.com/LLNL/H5Z-ZFP\"\n\n version('develop', git='https://github.com/LLNL/H5Z-ZFP.git', tag='master')\n version('0.7.0', git='https://github.com/LLNL/H5Z-ZFP.git', commit='58ac811')\n\n variant('fortran', default=True, description='Enable Fortran support')\n\n depends_on('hdf5')\n# depends_on('zfp bsws=8')\n depends_on('zfp')\n\n @property\n def make_defs(self):\n make_defs = [\n 'PREFIX=%s' % prefix,\n 'CC=%s' % spack_cc,\n 'HDF5_HOME=%s' % self.spec['hdf5'].prefix,\n 'ZFP_HOME=%s' % self.spec['zfp'].prefix]\n\n if '+fortran' in self.spec and spack_fc:\n make_defs += ['FC=%s' % spack_fc]\n\n return make_defs\n\n @property\n def build_targets(self):\n targets = ['all']\n return self.make_defs + targets\n\n @property\n def install_targets(self):\n make_args = ['install']\n return make_args + self.make_defs\n", "path": "var/spack/repos/builtin/packages/h5z-zfp/package.py"}]}
| 2,427 | 151 |
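The one-line change in this record re-enables a variant-constrained dependency spec. For reference, the general Spack directive pattern is sketched below; only the first line comes from the record, the second is an illustrative example of the same idiom.

```python
# Spack dependency specs can carry variant constraints inline (directives inside a package class).
depends_on('zfp bsws=8')                   # from the diff: zfp with its bsws variant pinned to 8
depends_on('hdf5+mpi', when='+parallel')   # illustrative only: variant plus conditional dependency
```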
gh_patches_debug_31028
|
rasdani/github-patches
|
git_diff
|
pretix__pretix-346
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log old email when changing an order's email address
… because otherwise it's completely lost.
</issue>
<code>
[start of src/pretix/control/logdisplay.py]
1 import json
2 from decimal import Decimal
3
4 from django.dispatch import receiver
5 from django.utils import formats
6 from django.utils.translation import ugettext_lazy as _
7
8 from pretix.base.models import Event, ItemVariation, LogEntry
9 from pretix.base.signals import logentry_display
10
11
12 def _display_order_changed(event: Event, logentry: LogEntry):
13 data = json.loads(logentry.data)
14
15 text = _('The order has been changed:')
16 if logentry.action_type == 'pretix.event.order.changed.item':
17 old_item = str(event.items.get(pk=data['old_item']))
18 if data['old_variation']:
19 old_item += ' - ' + str(event.itemvariations.get(pk=data['old_variation']))
20 new_item = str(event.items.get(pk=data['new_item']))
21 if data['new_variation']:
22 new_item += ' - ' + str(event.itemvariations.get(pk=data['new_variation']))
23 return text + ' ' + _('{old_item} ({old_price} {currency}) changed to {new_item} ({new_price} {currency}).').format(
24 old_item=old_item, new_item=new_item,
25 old_price=formats.localize(Decimal(data['old_price'])),
26 new_price=formats.localize(Decimal(data['new_price'])),
27 currency=event.currency
28 )
29 elif logentry.action_type == 'pretix.event.order.changed.price':
30 return text + ' ' + _('Price of a position changed from {old_price} {currency} to {new_price} {currency}.').format(
31 old_price=formats.localize(Decimal(data['old_price'])),
32 new_price=formats.localize(Decimal(data['new_price'])),
33 currency=event.currency
34 )
35 elif logentry.action_type == 'pretix.event.order.changed.cancel':
36 old_item = str(event.items.get(pk=data['old_item']))
37 if data['old_variation']:
38 old_item += ' - ' + str(ItemVariation.objects.get(pk=data['old_variation']))
39 return text + ' ' + _('{old_item} ({old_price} {currency}) removed.').format(
40 old_item=old_item,
41 old_price=formats.localize(Decimal(data['old_price'])),
42 currency=event.currency
43 )
44
45
46 @receiver(signal=logentry_display, dispatch_uid="pretixcontrol_logentry_display")
47 def pretixcontrol_logentry_display(sender: Event, logentry: LogEntry, **kwargs):
48 plains = {
49 'pretix.event.order.modified': _('The order details have been modified.'),
50 'pretix.event.order.unpaid': _('The order has been marked as unpaid.'),
51 'pretix.event.order.resend': _('The link to the order detail page has been resent to the user.'),
52 'pretix.event.order.expirychanged': _('The order\'s expiry date has been changed.'),
53 'pretix.event.order.expired': _('The order has been marked as expired.'),
54 'pretix.event.order.paid': _('The order has been marked as paid.'),
55 'pretix.event.order.refunded': _('The order has been refunded.'),
56 'pretix.event.order.canceled': _('The order has been canceled.'),
57 'pretix.event.order.placed': _('The order has been created.'),
58 'pretix.event.order.invoice.generated': _('The invoice has been generated.'),
59 'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),
60 'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),
61 'pretix.event.order.comment': _('The order\'s internal comment has been updated.'),
62 'pretix.event.order.contact.changed': _('The email address has been changed.'),
63 'pretix.event.order.payment.changed': _('The payment method has been changed.'),
64 'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),
65 'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),
66 'pretix.user.settings.2fa.disabled': _('Two-factor authentication has been disabled.'),
67 'pretix.user.settings.2fa.regenemergency': _('Your two-factor emergency codes have been regenerated.'),
68 'pretix.control.auth.user.forgot_password.mail_sent': _('Password reset mail sent.'),
69 'pretix.control.auth.user.forgot_password.recovered': _('The password has been reset.')
70
71 }
72 if logentry.action_type in plains:
73 return plains[logentry.action_type]
74
75 if logentry.action_type.startswith('pretix.event.order.changed'):
76 return _display_order_changed(sender, logentry)
77
78 if logentry.action_type == 'pretix.user.settings.2fa.device.added':
79 data = json.loads(logentry.data)
80 return _('A new two-factor authentication device "{name}" has been added to your account.').format(
81 name=data['name']
82 )
83 if logentry.action_type == 'pretix.user.settings.2fa.device.deleted':
84 data = json.loads(logentry.data)
85 return _('The two-factor authentication device "{name}" has been removed from your account.').format(
86 name=data['name']
87 )
88 if logentry.action_type == 'pretix.user.settings.changed':
89 data = json.loads(logentry.data)
90 text = str(_('Your account settings have been changed.'))
91 if 'email' in data:
92 text = text + ' ' + str(_('Your email address has been changed to {email}.').format(email=data['email']))
93 if 'new_pw' in data:
94 text = text + ' ' + str(_('Your password has been changed.'))
95 return text
96
[end of src/pretix/control/logdisplay.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pretix/control/logdisplay.py b/src/pretix/control/logdisplay.py
--- a/src/pretix/control/logdisplay.py
+++ b/src/pretix/control/logdisplay.py
@@ -59,7 +59,6 @@
'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),
'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),
'pretix.event.order.comment': _('The order\'s internal comment has been updated.'),
- 'pretix.event.order.contact.changed': _('The email address has been changed.'),
'pretix.event.order.payment.changed': _('The payment method has been changed.'),
'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),
'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),
@@ -75,6 +74,13 @@
if logentry.action_type.startswith('pretix.event.order.changed'):
return _display_order_changed(sender, logentry)
+ if logentry.action_type.startswith('pretix.event.order.contact.changed'):
+ data = json.loads(logentry.data)
+ return _('The email address has been changed from "{old}" to "{new}".').format(
+ old=data['old_email'],
+ new=data['new_email'],
+ )
+
if logentry.action_type == 'pretix.user.settings.2fa.device.added':
data = json.loads(logentry.data)
return _('A new two-factor authentication device "{name}" has been added to your account.').format(
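Note on the fix above: the patched display handler stops returning the static "email changed" string and instead formats the log entry's JSON payload, which (as the patched code reads it) carries both `old_email` and `new_email`. A minimal, self-contained sketch of that idea — hypothetical data, not pretix's actual models or translation machinery:

```python
import json

def display_contact_change(logentry_data: str) -> str:
    # The payload is assumed to look like the one the patched handler reads:
    # both the previous and the new address are stored when the order changes.
    data = json.loads(logentry_data)
    return 'The email address has been changed from "{old}" to "{new}".'.format(
        old=data["old_email"], new=data["new_email"]
    )

print(display_contact_change(
    '{"old_email": "old@example.org", "new_email": "new@example.org"}'
))
```

Surfacing the old value in the log message matters because, once the order record itself has been updated, the log entry is the only place the previous address survives.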
|
{"golden_diff": "diff --git a/src/pretix/control/logdisplay.py b/src/pretix/control/logdisplay.py\n--- a/src/pretix/control/logdisplay.py\n+++ b/src/pretix/control/logdisplay.py\n@@ -59,7 +59,6 @@\n 'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),\n 'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),\n 'pretix.event.order.comment': _('The order\\'s internal comment has been updated.'),\n- 'pretix.event.order.contact.changed': _('The email address has been changed.'),\n 'pretix.event.order.payment.changed': _('The payment method has been changed.'),\n 'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),\n 'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),\n@@ -75,6 +74,13 @@\n if logentry.action_type.startswith('pretix.event.order.changed'):\n return _display_order_changed(sender, logentry)\n \n+ if logentry.action_type.startswith('pretix.event.order.contact.changed'):\n+ data = json.loads(logentry.data)\n+ return _('The email address has been changed from \"{old}\" to \"{new}\".').format(\n+ old=data['old_email'],\n+ new=data['new_email'],\n+ )\n+\n if logentry.action_type == 'pretix.user.settings.2fa.device.added':\n data = json.loads(logentry.data)\n return _('A new two-factor authentication device \"{name}\" has been added to your account.').format(\n", "issue": "Log old email when changing an order's email address\n\u2026 because otherwise it's completely lost.\n", "before_files": [{"content": "import json\nfrom decimal import Decimal\n\nfrom django.dispatch import receiver\nfrom django.utils import formats\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretix.base.models import Event, ItemVariation, LogEntry\nfrom pretix.base.signals import logentry_display\n\n\ndef _display_order_changed(event: Event, logentry: LogEntry):\n data = json.loads(logentry.data)\n\n text = _('The order has been changed:')\n if logentry.action_type == 'pretix.event.order.changed.item':\n old_item = str(event.items.get(pk=data['old_item']))\n if data['old_variation']:\n old_item += ' - ' + str(event.itemvariations.get(pk=data['old_variation']))\n new_item = str(event.items.get(pk=data['new_item']))\n if data['new_variation']:\n new_item += ' - ' + str(event.itemvariations.get(pk=data['new_variation']))\n return text + ' ' + _('{old_item} ({old_price} {currency}) changed to {new_item} ({new_price} {currency}).').format(\n old_item=old_item, new_item=new_item,\n old_price=formats.localize(Decimal(data['old_price'])),\n new_price=formats.localize(Decimal(data['new_price'])),\n currency=event.currency\n )\n elif logentry.action_type == 'pretix.event.order.changed.price':\n return text + ' ' + _('Price of a position changed from {old_price} {currency} to {new_price} {currency}.').format(\n old_price=formats.localize(Decimal(data['old_price'])),\n new_price=formats.localize(Decimal(data['new_price'])),\n currency=event.currency\n )\n elif logentry.action_type == 'pretix.event.order.changed.cancel':\n old_item = str(event.items.get(pk=data['old_item']))\n if data['old_variation']:\n old_item += ' - ' + str(ItemVariation.objects.get(pk=data['old_variation']))\n return text + ' ' + _('{old_item} ({old_price} {currency}) removed.').format(\n old_item=old_item,\n old_price=formats.localize(Decimal(data['old_price'])),\n currency=event.currency\n )\n\n\n@receiver(signal=logentry_display, dispatch_uid=\"pretixcontrol_logentry_display\")\ndef 
pretixcontrol_logentry_display(sender: Event, logentry: LogEntry, **kwargs):\n plains = {\n 'pretix.event.order.modified': _('The order details have been modified.'),\n 'pretix.event.order.unpaid': _('The order has been marked as unpaid.'),\n 'pretix.event.order.resend': _('The link to the order detail page has been resent to the user.'),\n 'pretix.event.order.expirychanged': _('The order\\'s expiry date has been changed.'),\n 'pretix.event.order.expired': _('The order has been marked as expired.'),\n 'pretix.event.order.paid': _('The order has been marked as paid.'),\n 'pretix.event.order.refunded': _('The order has been refunded.'),\n 'pretix.event.order.canceled': _('The order has been canceled.'),\n 'pretix.event.order.placed': _('The order has been created.'),\n 'pretix.event.order.invoice.generated': _('The invoice has been generated.'),\n 'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),\n 'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),\n 'pretix.event.order.comment': _('The order\\'s internal comment has been updated.'),\n 'pretix.event.order.contact.changed': _('The email address has been changed.'),\n 'pretix.event.order.payment.changed': _('The payment method has been changed.'),\n 'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),\n 'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),\n 'pretix.user.settings.2fa.disabled': _('Two-factor authentication has been disabled.'),\n 'pretix.user.settings.2fa.regenemergency': _('Your two-factor emergency codes have been regenerated.'),\n 'pretix.control.auth.user.forgot_password.mail_sent': _('Password reset mail sent.'),\n 'pretix.control.auth.user.forgot_password.recovered': _('The password has been reset.')\n\n }\n if logentry.action_type in plains:\n return plains[logentry.action_type]\n\n if logentry.action_type.startswith('pretix.event.order.changed'):\n return _display_order_changed(sender, logentry)\n\n if logentry.action_type == 'pretix.user.settings.2fa.device.added':\n data = json.loads(logentry.data)\n return _('A new two-factor authentication device \"{name}\" has been added to your account.').format(\n name=data['name']\n )\n if logentry.action_type == 'pretix.user.settings.2fa.device.deleted':\n data = json.loads(logentry.data)\n return _('The two-factor authentication device \"{name}\" has been removed from your account.').format(\n name=data['name']\n )\n if logentry.action_type == 'pretix.user.settings.changed':\n data = json.loads(logentry.data)\n text = str(_('Your account settings have been changed.'))\n if 'email' in data:\n text = text + ' ' + str(_('Your email address has been changed to {email}.').format(email=data['email']))\n if 'new_pw' in data:\n text = text + ' ' + str(_('Your password has been changed.'))\n return text\n", "path": "src/pretix/control/logdisplay.py"}]}
| 1,913 | 348 |
gh_patches_debug_58561
|
rasdani/github-patches
|
git_diff
|
codespell-project__codespell-86
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
codespell.py does nothing if [fileN] is not specified
Previously, running `codespell` without a file parameter started the check. Now `codespell.py` does nothing. The behavior should stay the same as before - if no file/dir argument is specified, then the current directory should be used as the default parameter.
</issue>
<code>
[start of bin/codespell.py]
1 #!/usr/bin/env python
2
3 import sys
4
5 if __name__ == '__main__':
6 import codespell_lib
7 sys.exit(codespell_lib.main(*sys.argv))
8
[end of bin/codespell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bin/codespell.py b/bin/codespell.py
--- a/bin/codespell.py
+++ b/bin/codespell.py
@@ -4,4 +4,4 @@
if __name__ == '__main__':
import codespell_lib
- sys.exit(codespell_lib.main(*sys.argv))
+ sys.exit(codespell_lib.main(*sys.argv[1:]))
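Why this small change matters: `sys.argv[0]` is the path of the script itself, so forwarding the full `argv` makes the argument parser believe a file was passed and the "no files given" default never triggers. A rough sketch of the effect, assuming a hypothetical `main()` that falls back to the current directory when it receives no arguments (the real codespell CLI is more involved):

```python
import sys

def main(*args):
    # Hypothetical stand-in for codespell_lib.main(): scan '.' by default.
    files = list(args) or ['.']
    return files

print(main(*sys.argv))       # script path is treated as a file argument
print(main(*sys.argv[1:]))   # [] when no files are given -> falls back to ['.']
```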
|
{"golden_diff": "diff --git a/bin/codespell.py b/bin/codespell.py\n--- a/bin/codespell.py\n+++ b/bin/codespell.py\n@@ -4,4 +4,4 @@\n \n if __name__ == '__main__':\n import codespell_lib\n- sys.exit(codespell_lib.main(*sys.argv))\n+ sys.exit(codespell_lib.main(*sys.argv[1:]))\n", "issue": "codespell.py does nothng if [fileN] is not specified\nPreviously running `codespell` without file parameter starts the check. Now `codespell.py` does nothing. The behavior should stay the same as before - if file/dir argument is not specefied then current directory should be used as a default parameter.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport sys\n\nif __name__ == '__main__':\n import codespell_lib\n sys.exit(codespell_lib.main(*sys.argv))\n", "path": "bin/codespell.py"}]}
| 648 | 86 |
gh_patches_debug_6744
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-3626
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_DOCKER_10 mistakes quoted absolute paths for relative paths
**Describe the issue**
CKV_DOCKER_10 mistakes quoted absolute paths for relative paths.
**Examples**
```
cat << EOF > Dockerfile
FROM alpine:3.16
WORKDIR "/app"
EOF
checkov --check CKV_DOCKER_10 --file Dockerfile
```
**Version (please complete the following information):**
2.1.258
</issue>
<code>
[start of checkov/dockerfile/checks/WorkdirIsAbsolute.py]
1 from __future__ import annotations
2
3 import re
4
5 from checkov.common.models.enums import CheckCategories, CheckResult
6 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
7
8 ISABSOLUTE = re.compile("(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\.*)|(^\\$[{}A-Za-z0-9-_+].*)")
9
10
11 class WorkdirIsAbsolute(BaseDockerfileCheck):
12 def __init__(self) -> None:
13 """
14 For clarity and reliability, you should always use absolute paths for your WORKDIR.
15 """
16 name = "Ensure that WORKDIR values are absolute paths"
17 id = "CKV_DOCKER_10"
18 supported_instructions = ("WORKDIR",)
19 categories = (CheckCategories.CONVENTION,)
20 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
21
22 def scan_entity_conf(self, conf: list[dict[str, int | str]]) -> tuple[CheckResult, list[dict[str, int | str]] | None]:
23 workdirs = []
24 for workdir in conf:
25 path = workdir["value"]
26 if not re.match(ISABSOLUTE, path):
27 workdirs.append(workdir)
28
29 if workdirs:
30 return CheckResult.FAILED, workdirs
31
32 return CheckResult.PASSED, None
33
34
35 check = WorkdirIsAbsolute()
36
[end of checkov/dockerfile/checks/WorkdirIsAbsolute.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/dockerfile/checks/WorkdirIsAbsolute.py b/checkov/dockerfile/checks/WorkdirIsAbsolute.py
--- a/checkov/dockerfile/checks/WorkdirIsAbsolute.py
+++ b/checkov/dockerfile/checks/WorkdirIsAbsolute.py
@@ -5,7 +5,7 @@
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
-ISABSOLUTE = re.compile("(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\.*)|(^\\$[{}A-Za-z0-9-_+].*)")
+ISABSOLUTE = re.compile("^\"?((/[A-Za-z0-9-_+]*)|([A-Za-z0-9-_+]:\\\\.*)|(\\$[{}A-Za-z0-9-_+].*))")
class WorkdirIsAbsolute(BaseDockerfileCheck):
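The entire fix is in the regular expression: the new pattern accepts one optional leading double quote before the absolute-path alternatives, so `WORKDIR "/app"` is no longer flagged. A quick standalone check, assuming the WORKDIR value reaches the check with its quotes intact as in the report:

```python
import re

# Same pattern strings as above, compiled outside checkov for illustration.
OLD = re.compile("(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\.*)|(^\\$[{}A-Za-z0-9-_+].*)")
NEW = re.compile("^\"?((/[A-Za-z0-9-_+]*)|([A-Za-z0-9-_+]:\\\\.*)|(\\$[{}A-Za-z0-9-_+].*))")

value = '"/app"'                    # WORKDIR "/app" as written in the Dockerfile
print(bool(OLD.match(value)))       # False - the quote defeats the ^/ anchor
print(bool(NEW.match(value)))       # True  - optional leading quote is allowed
print(bool(NEW.match('app/logs')))  # False - relative paths are still rejected
```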
|
{"golden_diff": "diff --git a/checkov/dockerfile/checks/WorkdirIsAbsolute.py b/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n--- a/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n+++ b/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n@@ -5,7 +5,7 @@\n from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n \n-ISABSOLUTE = re.compile(\"(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\\\\\.*)|(^\\\\$[{}A-Za-z0-9-_+].*)\")\n+ISABSOLUTE = re.compile(\"^\\\"?((/[A-Za-z0-9-_+]*)|([A-Za-z0-9-_+]:\\\\\\\\.*)|(\\\\$[{}A-Za-z0-9-_+].*))\")\n \n \n class WorkdirIsAbsolute(BaseDockerfileCheck):\n", "issue": "CKV_DOCKER_10 mistakes quoted absolute paths for relative paths\n**Describe the issue**\r\nCKV_DOCKER_10 mistakes quoted absolute paths for relative paths.\r\n\r\n**Examples**\r\n```\r\ncat << EOF > Dockerfile\r\nFROM alpine:3.16\r\nWORKDIR \"/app\"\r\nEOF\r\n\r\ncheckov --check CKV_DOCKER_10 --file Dockerfile\r\n```\r\n\r\n\r\n\r\n**Version (please complete the following information):**\r\n2.1.258\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nISABSOLUTE = re.compile(\"(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\\\\\.*)|(^\\\\$[{}A-Za-z0-9-_+].*)\")\n\n\nclass WorkdirIsAbsolute(BaseDockerfileCheck):\n def __init__(self) -> None:\n \"\"\"\n For clarity and reliability, you should always use absolute paths for your WORKDIR.\n \"\"\"\n name = \"Ensure that WORKDIR values are absolute paths\"\n id = \"CKV_DOCKER_10\"\n supported_instructions = (\"WORKDIR\",)\n categories = (CheckCategories.CONVENTION,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf: list[dict[str, int | str]]) -> tuple[CheckResult, list[dict[str, int | str]] | None]:\n workdirs = []\n for workdir in conf:\n path = workdir[\"value\"]\n if not re.match(ISABSOLUTE, path):\n workdirs.append(workdir)\n\n if workdirs:\n return CheckResult.FAILED, workdirs\n\n return CheckResult.PASSED, None\n\n\ncheck = WorkdirIsAbsolute()\n", "path": "checkov/dockerfile/checks/WorkdirIsAbsolute.py"}]}
| 1,093 | 210 |
gh_patches_debug_11182
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-24512
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Prefer ffmpeg HLS decoder for v.redd.it
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.11.15*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.15**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### Description of your *issue*, suggested solution and other information
Site v.reddit.com should prefer ffmpeg HLS downloader instead of native. Using the native downloader results in some video corruption and artifacting, e.g. visible within the first few seconds of this source: `youtube-dl https://v.redd.it/poqkxthgcpxz`
</issue>
<code>
[start of youtube_dl/downloader/hls.py]
1 from __future__ import unicode_literals
2
3 import re
4 import binascii
5 try:
6 from Crypto.Cipher import AES
7 can_decrypt_frag = True
8 except ImportError:
9 can_decrypt_frag = False
10
11 from .fragment import FragmentFD
12 from .external import FFmpegFD
13
14 from ..compat import (
15 compat_urllib_error,
16 compat_urlparse,
17 compat_struct_pack,
18 )
19 from ..utils import (
20 parse_m3u8_attributes,
21 update_url_query,
22 )
23
24
25 class HlsFD(FragmentFD):
26 """ A limited implementation that does not require ffmpeg """
27
28 FD_NAME = 'hlsnative'
29
30 @staticmethod
31 def can_download(manifest, info_dict):
32 UNSUPPORTED_FEATURES = (
33 r'#EXT-X-KEY:METHOD=(?!NONE|AES-128)', # encrypted streams [1]
34 # r'#EXT-X-BYTERANGE', # playlists composed of byte ranges of media files [2]
35
36 # Live streams heuristic does not always work (e.g. geo restricted to Germany
37 # http://hls-geo.daserste.de/i/videoportal/Film/c_620000/622873/format,716451,716457,716450,716458,716459,.mp4.csmil/index_4_av.m3u8?null=0)
38 # r'#EXT-X-MEDIA-SEQUENCE:(?!0$)', # live streams [3]
39
40 # This heuristic also is not correct since segments may not be appended as well.
41 # Twitch vods of finished streams have EXT-X-PLAYLIST-TYPE:EVENT despite
42 # no segments will definitely be appended to the end of the playlist.
43 # r'#EXT-X-PLAYLIST-TYPE:EVENT', # media segments may be appended to the end of
44 # # event media playlists [4]
45
46 # 1. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.2.4
47 # 2. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.2.2
48 # 3. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.2
49 # 4. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.5
50 )
51 check_results = [not re.search(feature, manifest) for feature in UNSUPPORTED_FEATURES]
52 is_aes128_enc = '#EXT-X-KEY:METHOD=AES-128' in manifest
53 check_results.append(can_decrypt_frag or not is_aes128_enc)
54 check_results.append(not (is_aes128_enc and r'#EXT-X-BYTERANGE' in manifest))
55 check_results.append(not info_dict.get('is_live'))
56 return all(check_results)
57
58 def real_download(self, filename, info_dict):
59 man_url = info_dict['url']
60 self.to_screen('[%s] Downloading m3u8 manifest' % self.FD_NAME)
61
62 urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))
63 man_url = urlh.geturl()
64 s = urlh.read().decode('utf-8', 'ignore')
65
66 if not self.can_download(s, info_dict):
67 if info_dict.get('extra_param_to_segment_url') or info_dict.get('_decryption_key_url'):
68 self.report_error('pycrypto not found. Please install it.')
69 return False
70 self.report_warning(
71 'hlsnative has detected features it does not support, '
72 'extraction will be delegated to ffmpeg')
73 fd = FFmpegFD(self.ydl, self.params)
74 for ph in self._progress_hooks:
75 fd.add_progress_hook(ph)
76 return fd.real_download(filename, info_dict)
77
78 def is_ad_fragment_start(s):
79 return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s
80 or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad'))
81
82 def is_ad_fragment_end(s):
83 return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s
84 or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment'))
85
86 media_frags = 0
87 ad_frags = 0
88 ad_frag_next = False
89 for line in s.splitlines():
90 line = line.strip()
91 if not line:
92 continue
93 if line.startswith('#'):
94 if is_ad_fragment_start(line):
95 ad_frag_next = True
96 elif is_ad_fragment_end(line):
97 ad_frag_next = False
98 continue
99 if ad_frag_next:
100 ad_frags += 1
101 continue
102 media_frags += 1
103
104 ctx = {
105 'filename': filename,
106 'total_frags': media_frags,
107 'ad_frags': ad_frags,
108 }
109
110 self._prepare_and_start_frag_download(ctx)
111
112 fragment_retries = self.params.get('fragment_retries', 0)
113 skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)
114 test = self.params.get('test', False)
115
116 extra_query = None
117 extra_param_to_segment_url = info_dict.get('extra_param_to_segment_url')
118 if extra_param_to_segment_url:
119 extra_query = compat_urlparse.parse_qs(extra_param_to_segment_url)
120 i = 0
121 media_sequence = 0
122 decrypt_info = {'METHOD': 'NONE'}
123 byte_range = {}
124 frag_index = 0
125 ad_frag_next = False
126 for line in s.splitlines():
127 line = line.strip()
128 if line:
129 if not line.startswith('#'):
130 if ad_frag_next:
131 continue
132 frag_index += 1
133 if frag_index <= ctx['fragment_index']:
134 continue
135 frag_url = (
136 line
137 if re.match(r'^https?://', line)
138 else compat_urlparse.urljoin(man_url, line))
139 if extra_query:
140 frag_url = update_url_query(frag_url, extra_query)
141 count = 0
142 headers = info_dict.get('http_headers', {})
143 if byte_range:
144 headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'])
145 while count <= fragment_retries:
146 try:
147 success, frag_content = self._download_fragment(
148 ctx, frag_url, info_dict, headers)
149 if not success:
150 return False
151 break
152 except compat_urllib_error.HTTPError as err:
153 # Unavailable (possibly temporary) fragments may be served.
154 # First we try to retry then either skip or abort.
155 # See https://github.com/ytdl-org/youtube-dl/issues/10165,
156 # https://github.com/ytdl-org/youtube-dl/issues/10448).
157 count += 1
158 if count <= fragment_retries:
159 self.report_retry_fragment(err, frag_index, count, fragment_retries)
160 if count > fragment_retries:
161 if skip_unavailable_fragments:
162 i += 1
163 media_sequence += 1
164 self.report_skip_fragment(frag_index)
165 continue
166 self.report_error(
167 'giving up after %s fragment retries' % fragment_retries)
168 return False
169 if decrypt_info['METHOD'] == 'AES-128':
170 iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', media_sequence)
171 decrypt_info['KEY'] = decrypt_info.get('KEY') or self.ydl.urlopen(
172 self._prepare_url(info_dict, info_dict.get('_decryption_key_url') or decrypt_info['URI'])).read()
173 frag_content = AES.new(
174 decrypt_info['KEY'], AES.MODE_CBC, iv).decrypt(frag_content)
175 self._append_fragment(ctx, frag_content)
176 # We only download the first fragment during the test
177 if test:
178 break
179 i += 1
180 media_sequence += 1
181 elif line.startswith('#EXT-X-KEY'):
182 decrypt_url = decrypt_info.get('URI')
183 decrypt_info = parse_m3u8_attributes(line[11:])
184 if decrypt_info['METHOD'] == 'AES-128':
185 if 'IV' in decrypt_info:
186 decrypt_info['IV'] = binascii.unhexlify(decrypt_info['IV'][2:].zfill(32))
187 if not re.match(r'^https?://', decrypt_info['URI']):
188 decrypt_info['URI'] = compat_urlparse.urljoin(
189 man_url, decrypt_info['URI'])
190 if extra_query:
191 decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
192 if decrypt_url != decrypt_info['URI']:
193 decrypt_info['KEY'] = None
194 elif line.startswith('#EXT-X-MEDIA-SEQUENCE'):
195 media_sequence = int(line[22:])
196 elif line.startswith('#EXT-X-BYTERANGE'):
197 splitted_byte_range = line[17:].split('@')
198 sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range['end']
199 byte_range = {
200 'start': sub_range_start,
201 'end': sub_range_start + int(splitted_byte_range[0]),
202 }
203 elif is_ad_fragment_start(line):
204 ad_frag_next = True
205 elif is_ad_fragment_end(line):
206 ad_frag_next = False
207
208 self._finish_frag_download(ctx)
209
210 return True
211
[end of youtube_dl/downloader/hls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/youtube_dl/downloader/hls.py b/youtube_dl/downloader/hls.py
--- a/youtube_dl/downloader/hls.py
+++ b/youtube_dl/downloader/hls.py
@@ -141,7 +141,7 @@
count = 0
headers = info_dict.get('http_headers', {})
if byte_range:
- headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'])
+ headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'] - 1)
while count <= fragment_retries:
try:
success, frag_content = self._download_fragment(
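The `- 1` fixes an off-by-one in the byte-range request: `byte_range['end']` is computed above as `start + length` from `#EXT-X-BYTERANGE`, i.e. an exclusive end, while the end offset in an HTTP `Range` header is inclusive. Without the correction each fragment request asks for one extra byte, a plausible cause of the artifacts reported for v.redd.it streams. A tiny sketch of the arithmetic with made-up values:

```python
def range_header(byte_range):
    # byte_range['end'] is exclusive (start + length), but an HTTP Range header
    # names the last byte inclusively, so subtract one.
    return 'bytes=%d-%d' % (byte_range['start'], byte_range['end'] - 1)

# #EXT-X-BYTERANGE:100@0 -> the first 100 bytes, i.e. offsets 0..99
print(range_header({'start': 0, 'end': 100}))    # bytes=0-99
print(range_header({'start': 100, 'end': 150}))  # bytes=100-149
```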
|
{"golden_diff": "diff --git a/youtube_dl/downloader/hls.py b/youtube_dl/downloader/hls.py\n--- a/youtube_dl/downloader/hls.py\n+++ b/youtube_dl/downloader/hls.py\n@@ -141,7 +141,7 @@\n count = 0\n headers = info_dict.get('http_headers', {})\n if byte_range:\n- headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'])\n+ headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'] - 1)\n while count <= fragment_retries:\n try:\n success, frag_content = self._download_fragment(\n", "issue": "Prefer ffmpeg HLS decoder for v.redd.it\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.11.15*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.11.15**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nSite v.reddit.com should prefer ffmpeg HLS downloader instead of native. Using the native downloader results in some video corruption and artifacting, e.g. visible within the first few seconds of this source: `youtube-dl https://v.redd.it/poqkxthgcpxz`\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport re\nimport binascii\ntry:\n from Crypto.Cipher import AES\n can_decrypt_frag = True\nexcept ImportError:\n can_decrypt_frag = False\n\nfrom .fragment import FragmentFD\nfrom .external import FFmpegFD\n\nfrom ..compat import (\n compat_urllib_error,\n compat_urlparse,\n compat_struct_pack,\n)\nfrom ..utils import (\n parse_m3u8_attributes,\n update_url_query,\n)\n\n\nclass HlsFD(FragmentFD):\n \"\"\" A limited implementation that does not require ffmpeg \"\"\"\n\n FD_NAME = 'hlsnative'\n\n @staticmethod\n def can_download(manifest, info_dict):\n UNSUPPORTED_FEATURES = (\n r'#EXT-X-KEY:METHOD=(?!NONE|AES-128)', # encrypted streams [1]\n # r'#EXT-X-BYTERANGE', # playlists composed of byte ranges of media files [2]\n\n # Live streams heuristic does not always work (e.g. geo restricted to Germany\n # http://hls-geo.daserste.de/i/videoportal/Film/c_620000/622873/format,716451,716457,716450,716458,716459,.mp4.csmil/index_4_av.m3u8?null=0)\n # r'#EXT-X-MEDIA-SEQUENCE:(?!0$)', # live streams [3]\n\n # This heuristic also is not correct since segments may not be appended as well.\n # Twitch vods of finished streams have EXT-X-PLAYLIST-TYPE:EVENT despite\n # no segments will definitely be appended to the end of the playlist.\n # r'#EXT-X-PLAYLIST-TYPE:EVENT', # media segments may be appended to the end of\n # # event media playlists [4]\n\n # 1. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.2.4\n # 2. 
https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.2.2\n # 3. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.2\n # 4. https://tools.ietf.org/html/draft-pantos-http-live-streaming-17#section-4.3.3.5\n )\n check_results = [not re.search(feature, manifest) for feature in UNSUPPORTED_FEATURES]\n is_aes128_enc = '#EXT-X-KEY:METHOD=AES-128' in manifest\n check_results.append(can_decrypt_frag or not is_aes128_enc)\n check_results.append(not (is_aes128_enc and r'#EXT-X-BYTERANGE' in manifest))\n check_results.append(not info_dict.get('is_live'))\n return all(check_results)\n\n def real_download(self, filename, info_dict):\n man_url = info_dict['url']\n self.to_screen('[%s] Downloading m3u8 manifest' % self.FD_NAME)\n\n urlh = self.ydl.urlopen(self._prepare_url(info_dict, man_url))\n man_url = urlh.geturl()\n s = urlh.read().decode('utf-8', 'ignore')\n\n if not self.can_download(s, info_dict):\n if info_dict.get('extra_param_to_segment_url') or info_dict.get('_decryption_key_url'):\n self.report_error('pycrypto not found. Please install it.')\n return False\n self.report_warning(\n 'hlsnative has detected features it does not support, '\n 'extraction will be delegated to ffmpeg')\n fd = FFmpegFD(self.ydl, self.params)\n for ph in self._progress_hooks:\n fd.add_progress_hook(ph)\n return fd.real_download(filename, info_dict)\n\n def is_ad_fragment_start(s):\n return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in s\n or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',ad'))\n\n def is_ad_fragment_end(s):\n return (s.startswith('#ANVATO-SEGMENT-INFO') and 'type=master' in s\n or s.startswith('#UPLYNK-SEGMENT') and s.endswith(',segment'))\n\n media_frags = 0\n ad_frags = 0\n ad_frag_next = False\n for line in s.splitlines():\n line = line.strip()\n if not line:\n continue\n if line.startswith('#'):\n if is_ad_fragment_start(line):\n ad_frag_next = True\n elif is_ad_fragment_end(line):\n ad_frag_next = False\n continue\n if ad_frag_next:\n ad_frags += 1\n continue\n media_frags += 1\n\n ctx = {\n 'filename': filename,\n 'total_frags': media_frags,\n 'ad_frags': ad_frags,\n }\n\n self._prepare_and_start_frag_download(ctx)\n\n fragment_retries = self.params.get('fragment_retries', 0)\n skip_unavailable_fragments = self.params.get('skip_unavailable_fragments', True)\n test = self.params.get('test', False)\n\n extra_query = None\n extra_param_to_segment_url = info_dict.get('extra_param_to_segment_url')\n if extra_param_to_segment_url:\n extra_query = compat_urlparse.parse_qs(extra_param_to_segment_url)\n i = 0\n media_sequence = 0\n decrypt_info = {'METHOD': 'NONE'}\n byte_range = {}\n frag_index = 0\n ad_frag_next = False\n for line in s.splitlines():\n line = line.strip()\n if line:\n if not line.startswith('#'):\n if ad_frag_next:\n continue\n frag_index += 1\n if frag_index <= ctx['fragment_index']:\n continue\n frag_url = (\n line\n if re.match(r'^https?://', line)\n else compat_urlparse.urljoin(man_url, line))\n if extra_query:\n frag_url = update_url_query(frag_url, extra_query)\n count = 0\n headers = info_dict.get('http_headers', {})\n if byte_range:\n headers['Range'] = 'bytes=%d-%d' % (byte_range['start'], byte_range['end'])\n while count <= fragment_retries:\n try:\n success, frag_content = self._download_fragment(\n ctx, frag_url, info_dict, headers)\n if not success:\n return False\n break\n except compat_urllib_error.HTTPError as err:\n # Unavailable (possibly temporary) fragments may be served.\n # First we try to retry then 
either skip or abort.\n # See https://github.com/ytdl-org/youtube-dl/issues/10165,\n # https://github.com/ytdl-org/youtube-dl/issues/10448).\n count += 1\n if count <= fragment_retries:\n self.report_retry_fragment(err, frag_index, count, fragment_retries)\n if count > fragment_retries:\n if skip_unavailable_fragments:\n i += 1\n media_sequence += 1\n self.report_skip_fragment(frag_index)\n continue\n self.report_error(\n 'giving up after %s fragment retries' % fragment_retries)\n return False\n if decrypt_info['METHOD'] == 'AES-128':\n iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', media_sequence)\n decrypt_info['KEY'] = decrypt_info.get('KEY') or self.ydl.urlopen(\n self._prepare_url(info_dict, info_dict.get('_decryption_key_url') or decrypt_info['URI'])).read()\n frag_content = AES.new(\n decrypt_info['KEY'], AES.MODE_CBC, iv).decrypt(frag_content)\n self._append_fragment(ctx, frag_content)\n # We only download the first fragment during the test\n if test:\n break\n i += 1\n media_sequence += 1\n elif line.startswith('#EXT-X-KEY'):\n decrypt_url = decrypt_info.get('URI')\n decrypt_info = parse_m3u8_attributes(line[11:])\n if decrypt_info['METHOD'] == 'AES-128':\n if 'IV' in decrypt_info:\n decrypt_info['IV'] = binascii.unhexlify(decrypt_info['IV'][2:].zfill(32))\n if not re.match(r'^https?://', decrypt_info['URI']):\n decrypt_info['URI'] = compat_urlparse.urljoin(\n man_url, decrypt_info['URI'])\n if extra_query:\n decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)\n if decrypt_url != decrypt_info['URI']:\n decrypt_info['KEY'] = None\n elif line.startswith('#EXT-X-MEDIA-SEQUENCE'):\n media_sequence = int(line[22:])\n elif line.startswith('#EXT-X-BYTERANGE'):\n splitted_byte_range = line[17:].split('@')\n sub_range_start = int(splitted_byte_range[1]) if len(splitted_byte_range) == 2 else byte_range['end']\n byte_range = {\n 'start': sub_range_start,\n 'end': sub_range_start + int(splitted_byte_range[0]),\n }\n elif is_ad_fragment_start(line):\n ad_frag_next = True\n elif is_ad_fragment_end(line):\n ad_frag_next = False\n\n self._finish_frag_download(ctx)\n\n return True\n", "path": "youtube_dl/downloader/hls.py"}]}
| 3,566 | 158 |
gh_patches_debug_2508
|
rasdani/github-patches
|
git_diff
|
coala__coala-6088
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Small typo in coalib/output/printers/LogPrinter.py
Should read responsibility instead of reponsibility.
</issue>
<code>
[start of coalib/output/printers/LogPrinter.py]
1 import traceback
2 import logging
3
4 from coalib.output.printers.LOG_LEVEL import LOG_LEVEL
5 from coalib.processes.communication.LogMessage import LogMessage
6
7
8 class LogPrinterMixin:
9 """
10 Provides access to the logging interfaces (e.g. err, warn, info) by routing
11 them to the log_message method, which should be implemented by descendants
12 of this class.
13 """
14
15 def debug(self, *messages, delimiter=' ', timestamp=None, **kwargs):
16 self.log_message(LogMessage(LOG_LEVEL.DEBUG,
17 *messages,
18 delimiter=delimiter,
19 timestamp=timestamp),
20 **kwargs)
21
22 def info(self, *messages, delimiter=' ', timestamp=None, **kwargs):
23 self.log_message(LogMessage(LOG_LEVEL.INFO,
24 *messages,
25 delimiter=delimiter,
26 timestamp=timestamp),
27 **kwargs)
28
29 def warn(self, *messages, delimiter=' ', timestamp=None, **kwargs):
30 self.log_message(LogMessage(LOG_LEVEL.WARNING,
31 *messages,
32 delimiter=delimiter,
33 timestamp=timestamp),
34 **kwargs)
35
36 def err(self, *messages, delimiter=' ', timestamp=None, **kwargs):
37 self.log_message(LogMessage(LOG_LEVEL.ERROR,
38 *messages,
39 delimiter=delimiter,
40 timestamp=timestamp),
41 **kwargs)
42
43 def log(self, log_level, message, timestamp=None, **kwargs):
44 self.log_message(LogMessage(log_level,
45 message,
46 timestamp=timestamp),
47 **kwargs)
48
49 def log_exception(self,
50 message,
51 exception,
52 log_level=LOG_LEVEL.ERROR,
53 timestamp=None,
54 **kwargs):
55 """
56 If the log_level of the printer is greater than DEBUG, it prints
57 only the message. If it is DEBUG or lower, it shows the message
58 along with the traceback of the exception.
59
60 :param message: The message to print.
61 :param exception: The exception to print.
62 :param log_level: The log_level of this message (not used when
63 logging the traceback. Tracebacks always have
64 a level of DEBUG).
65 :param timestamp: The time at which this log occurred. Defaults to
66 the current time.
67 :param kwargs: Keyword arguments to be passed when logging the
68 message (not used when logging the traceback).
69 """
70 if not isinstance(exception, BaseException):
71 raise TypeError('log_exception can only log derivatives of '
72 'BaseException.')
73
74 traceback_str = '\n'.join(
75 traceback.format_exception(type(exception),
76 exception,
77 exception.__traceback__))
78
79 self.log(log_level, message, timestamp=timestamp, **kwargs)
80 self.log_message(
81 LogMessage(LOG_LEVEL.INFO,
82 'Exception was:' + '\n' + traceback_str,
83 timestamp=timestamp),
84 **kwargs)
85
86 def log_message(self, log_message, **kwargs):
87 """
88 It is your reponsibility to implement this method, if you're using this
89 mixin.
90 """
91 raise NotImplementedError
92
93
94 class LogPrinter(LogPrinterMixin):
95 """
96 This class is deprecated and will be soon removed. To get logger use
97 logging.getLogger(__name__). Make sure that you're getting it when the
98 logging configuration is loaded.
99
100 The LogPrinter class allows to print log messages to an underlying Printer.
101
102 This class is an adapter, means you can create a LogPrinter from every
103 existing Printer instance.
104 """
105
106 def __init__(self,
107 printer=None,
108 log_level=LOG_LEVEL.DEBUG,
109 timestamp_format='%X'):
110 """
111 Creates a new log printer from an existing Printer.
112
113 :param printer: The underlying Printer where log messages
114 shall be written to. If you inherit from
115 LogPrinter, set it to self.
116 :param log_level: The minimum log level, everything below will
117 not be logged.
118 :param timestamp_format: The format string for the
119 datetime.today().strftime(format) method.
120 """
121 self.logger = logging.getLogger()
122
123 self._printer = printer
124 self.log_level = log_level
125 self.timestamp_format = timestamp_format
126
127 @property
128 def log_level(self):
129 """
130 Returns current log_level used in logger.
131 """
132 return self.logger.getEffectiveLevel()
133
134 @log_level.setter
135 def log_level(self, log_level):
136 """
137 Sets log_level for logger.
138 """
139 self.logger.setLevel(log_level)
140
141 @property
142 def printer(self):
143 """
144 Returns the underlying printer where logs are printed to.
145 """
146 return self._printer
147
148 def log_message(self, log_message, **kwargs):
149 if not isinstance(log_message, LogMessage):
150 raise TypeError('log_message should be of type LogMessage.')
151 self.logger.log(log_message.log_level, log_message.message)
152
153 def __getstate__(self):
154 # on Windows there are problems with serializing loggers, so omit it
155 oldict = self.__dict__.copy()
156 del oldict['logger']
157 return oldict
158
159 def __setstate__(self, newdict):
160 self.__dict__.update(newdict)
161 # restore logger by name
162 self.logger = logging.getLogger()
163
[end of coalib/output/printers/LogPrinter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py
--- a/coalib/output/printers/LogPrinter.py
+++ b/coalib/output/printers/LogPrinter.py
@@ -85,7 +85,7 @@
def log_message(self, log_message, **kwargs):
"""
- It is your reponsibility to implement this method, if you're using this
+ It is your responsibility to implement this method, if you're using this
mixin.
"""
raise NotImplementedError
|
{"golden_diff": "diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py\n--- a/coalib/output/printers/LogPrinter.py\n+++ b/coalib/output/printers/LogPrinter.py\n@@ -85,7 +85,7 @@\n \n def log_message(self, log_message, **kwargs):\n \"\"\"\n- It is your reponsibility to implement this method, if you're using this\n+ It is your responsibility to implement this method, if you're using this\n mixin.\n \"\"\"\n raise NotImplementedError\n", "issue": "Small typo in coalib/output/printers/LogPrinter.py\nShould read responsibility instead of reponsibility.\r\n\n", "before_files": [{"content": "import traceback\nimport logging\n\nfrom coalib.output.printers.LOG_LEVEL import LOG_LEVEL\nfrom coalib.processes.communication.LogMessage import LogMessage\n\n\nclass LogPrinterMixin:\n \"\"\"\n Provides access to the logging interfaces (e.g. err, warn, info) by routing\n them to the log_message method, which should be implemented by descendants\n of this class.\n \"\"\"\n\n def debug(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.DEBUG,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def info(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.INFO,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def warn(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.WARNING,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def err(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.ERROR,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def log(self, log_level, message, timestamp=None, **kwargs):\n self.log_message(LogMessage(log_level,\n message,\n timestamp=timestamp),\n **kwargs)\n\n def log_exception(self,\n message,\n exception,\n log_level=LOG_LEVEL.ERROR,\n timestamp=None,\n **kwargs):\n \"\"\"\n If the log_level of the printer is greater than DEBUG, it prints\n only the message. If it is DEBUG or lower, it shows the message\n along with the traceback of the exception.\n\n :param message: The message to print.\n :param exception: The exception to print.\n :param log_level: The log_level of this message (not used when\n logging the traceback. Tracebacks always have\n a level of DEBUG).\n :param timestamp: The time at which this log occurred. Defaults to\n the current time.\n :param kwargs: Keyword arguments to be passed when logging the\n message (not used when logging the traceback).\n \"\"\"\n if not isinstance(exception, BaseException):\n raise TypeError('log_exception can only log derivatives of '\n 'BaseException.')\n\n traceback_str = '\\n'.join(\n traceback.format_exception(type(exception),\n exception,\n exception.__traceback__))\n\n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n LogMessage(LOG_LEVEL.INFO,\n 'Exception was:' + '\\n' + traceback_str,\n timestamp=timestamp),\n **kwargs)\n\n def log_message(self, log_message, **kwargs):\n \"\"\"\n It is your reponsibility to implement this method, if you're using this\n mixin.\n \"\"\"\n raise NotImplementedError\n\n\nclass LogPrinter(LogPrinterMixin):\n \"\"\"\n This class is deprecated and will be soon removed. To get logger use\n logging.getLogger(__name__). 
Make sure that you're getting it when the\n logging configuration is loaded.\n\n The LogPrinter class allows to print log messages to an underlying Printer.\n\n This class is an adapter, means you can create a LogPrinter from every\n existing Printer instance.\n \"\"\"\n\n def __init__(self,\n printer=None,\n log_level=LOG_LEVEL.DEBUG,\n timestamp_format='%X'):\n \"\"\"\n Creates a new log printer from an existing Printer.\n\n :param printer: The underlying Printer where log messages\n shall be written to. If you inherit from\n LogPrinter, set it to self.\n :param log_level: The minimum log level, everything below will\n not be logged.\n :param timestamp_format: The format string for the\n datetime.today().strftime(format) method.\n \"\"\"\n self.logger = logging.getLogger()\n\n self._printer = printer\n self.log_level = log_level\n self.timestamp_format = timestamp_format\n\n @property\n def log_level(self):\n \"\"\"\n Returns current log_level used in logger.\n \"\"\"\n return self.logger.getEffectiveLevel()\n\n @log_level.setter\n def log_level(self, log_level):\n \"\"\"\n Sets log_level for logger.\n \"\"\"\n self.logger.setLevel(log_level)\n\n @property\n def printer(self):\n \"\"\"\n Returns the underlying printer where logs are printed to.\n \"\"\"\n return self._printer\n\n def log_message(self, log_message, **kwargs):\n if not isinstance(log_message, LogMessage):\n raise TypeError('log_message should be of type LogMessage.')\n self.logger.log(log_message.log_level, log_message.message)\n\n def __getstate__(self):\n # on Windows there are problems with serializing loggers, so omit it\n oldict = self.__dict__.copy()\n del oldict['logger']\n return oldict\n\n def __setstate__(self, newdict):\n self.__dict__.update(newdict)\n # restore logger by name\n self.logger = logging.getLogger()\n", "path": "coalib/output/printers/LogPrinter.py"}]}
| 2,039 | 124 |
gh_patches_debug_32548
|
rasdani/github-patches
|
git_diff
|
pypa__pip-6313
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pip processes corrupt packages containing ../ chars, placing files outside the target directory
- Pip version: tested on pip versions 6.0.8 and 8.0.2,
- Python version: 2.7
- Operating System: Linux / Windows
### Description:
pip processes invalid packages .tar/.zip archives containing "../" directory traversal chars and places package files outside of target directories even after pip complains and throws errors and aborts the install.
</issue>
<code>
[start of src/pip/_internal/utils/unpacking.py]
1 """Utilities related archives.
2 """
3
4 # The following comment should be removed at some point in the future.
5 # mypy: strict-optional=False
6
7 from __future__ import absolute_import
8
9 import logging
10 import os
11 import shutil
12 import stat
13 import tarfile
14 import zipfile
15
16 from pip._internal.exceptions import InstallationError
17 from pip._internal.utils.filetypes import (
18 BZ2_EXTENSIONS,
19 TAR_EXTENSIONS,
20 XZ_EXTENSIONS,
21 ZIP_EXTENSIONS,
22 )
23 from pip._internal.utils.misc import ensure_dir
24 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
25
26 if MYPY_CHECK_RUNNING:
27 from typing import Iterable, List, Optional, Text, Union
28
29
30 logger = logging.getLogger(__name__)
31
32
33 SUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS
34
35 try:
36 import bz2 # noqa
37 SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS
38 except ImportError:
39 logger.debug('bz2 module is not available')
40
41 try:
42 # Only for Python 3.3+
43 import lzma # noqa
44 SUPPORTED_EXTENSIONS += XZ_EXTENSIONS
45 except ImportError:
46 logger.debug('lzma module is not available')
47
48
49 def current_umask():
50 """Get the current umask which involves having to set it temporarily."""
51 mask = os.umask(0)
52 os.umask(mask)
53 return mask
54
55
56 def split_leading_dir(path):
57 # type: (Union[str, Text]) -> List[Union[str, Text]]
58 path = path.lstrip('/').lstrip('\\')
59 if (
60 '/' in path and (
61 ('\\' in path and path.find('/') < path.find('\\')) or
62 '\\' not in path
63 )
64 ):
65 return path.split('/', 1)
66 elif '\\' in path:
67 return path.split('\\', 1)
68 else:
69 return [path, '']
70
71
72 def has_leading_dir(paths):
73 # type: (Iterable[Union[str, Text]]) -> bool
74 """Returns true if all the paths have the same leading path name
75 (i.e., everything is in one subdirectory in an archive)"""
76 common_prefix = None
77 for path in paths:
78 prefix, rest = split_leading_dir(path)
79 if not prefix:
80 return False
81 elif common_prefix is None:
82 common_prefix = prefix
83 elif prefix != common_prefix:
84 return False
85 return True
86
87
88 def unzip_file(filename, location, flatten=True):
89 # type: (str, str, bool) -> None
90 """
91 Unzip the file (with path `filename`) to the destination `location`. All
92 files are written based on system defaults and umask (i.e. permissions are
93 not preserved), except that regular file members with any execute
94 permissions (user, group, or world) have "chmod +x" applied after being
95 written. Note that for windows, any execute changes using os.chmod are
96 no-ops per the python docs.
97 """
98 ensure_dir(location)
99 zipfp = open(filename, 'rb')
100 try:
101 zip = zipfile.ZipFile(zipfp, allowZip64=True)
102 leading = has_leading_dir(zip.namelist()) and flatten
103 for info in zip.infolist():
104 name = info.filename
105 fn = name
106 if leading:
107 fn = split_leading_dir(name)[1]
108 fn = os.path.join(location, fn)
109 dir = os.path.dirname(fn)
110 if fn.endswith('/') or fn.endswith('\\'):
111 # A directory
112 ensure_dir(fn)
113 else:
114 ensure_dir(dir)
115 # Don't use read() to avoid allocating an arbitrarily large
116 # chunk of memory for the file's content
117 fp = zip.open(name)
118 try:
119 with open(fn, 'wb') as destfp:
120 shutil.copyfileobj(fp, destfp)
121 finally:
122 fp.close()
123 mode = info.external_attr >> 16
124 # if mode and regular file and any execute permissions for
125 # user/group/world?
126 if mode and stat.S_ISREG(mode) and mode & 0o111:
127 # make dest file have execute for user/group/world
128 # (chmod +x) no-op on windows per python docs
129 os.chmod(fn, (0o777 - current_umask() | 0o111))
130 finally:
131 zipfp.close()
132
133
134 def untar_file(filename, location):
135 # type: (str, str) -> None
136 """
137 Untar the file (with path `filename`) to the destination `location`.
138 All files are written based on system defaults and umask (i.e. permissions
139 are not preserved), except that regular file members with any execute
140 permissions (user, group, or world) have "chmod +x" applied after being
141 written. Note that for windows, any execute changes using os.chmod are
142 no-ops per the python docs.
143 """
144 ensure_dir(location)
145 if filename.lower().endswith('.gz') or filename.lower().endswith('.tgz'):
146 mode = 'r:gz'
147 elif filename.lower().endswith(BZ2_EXTENSIONS):
148 mode = 'r:bz2'
149 elif filename.lower().endswith(XZ_EXTENSIONS):
150 mode = 'r:xz'
151 elif filename.lower().endswith('.tar'):
152 mode = 'r'
153 else:
154 logger.warning(
155 'Cannot determine compression type for file %s', filename,
156 )
157 mode = 'r:*'
158 tar = tarfile.open(filename, mode)
159 try:
160 leading = has_leading_dir([
161 member.name for member in tar.getmembers()
162 ])
163 for member in tar.getmembers():
164 fn = member.name
165 if leading:
166 # https://github.com/python/mypy/issues/1174
167 fn = split_leading_dir(fn)[1] # type: ignore
168 path = os.path.join(location, fn)
169 if member.isdir():
170 ensure_dir(path)
171 elif member.issym():
172 try:
173 # https://github.com/python/typeshed/issues/2673
174 tar._extract_member(member, path) # type: ignore
175 except Exception as exc:
176 # Some corrupt tar files seem to produce this
177 # (specifically bad symlinks)
178 logger.warning(
179 'In the tar file %s the member %s is invalid: %s',
180 filename, member.name, exc,
181 )
182 continue
183 else:
184 try:
185 fp = tar.extractfile(member)
186 except (KeyError, AttributeError) as exc:
187 # Some corrupt tar files seem to produce this
188 # (specifically bad symlinks)
189 logger.warning(
190 'In the tar file %s the member %s is invalid: %s',
191 filename, member.name, exc,
192 )
193 continue
194 ensure_dir(os.path.dirname(path))
195 with open(path, 'wb') as destfp:
196 shutil.copyfileobj(fp, destfp)
197 fp.close()
198 # Update the timestamp (useful for cython compiled files)
199 # https://github.com/python/typeshed/issues/2673
200 tar.utime(member, path) # type: ignore
201 # member have any execute permissions for user/group/world?
202 if member.mode & 0o111:
203 # make dest file have execute for user/group/world
204 # no-op on windows per python docs
205 os.chmod(path, (0o777 - current_umask() | 0o111))
206 finally:
207 tar.close()
208
209
210 def unpack_file(
211 filename, # type: str
212 location, # type: str
213 content_type=None, # type: Optional[str]
214 ):
215 # type: (...) -> None
216 filename = os.path.realpath(filename)
217 if (
218 content_type == 'application/zip' or
219 filename.lower().endswith(ZIP_EXTENSIONS) or
220 zipfile.is_zipfile(filename)
221 ):
222 unzip_file(
223 filename,
224 location,
225 flatten=not filename.endswith('.whl')
226 )
227 elif (
228 content_type == 'application/x-gzip' or
229 tarfile.is_tarfile(filename) or
230 filename.lower().endswith(
231 TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS
232 )
233 ):
234 untar_file(filename, location)
235 else:
236 # FIXME: handle?
237 # FIXME: magic signatures?
238 logger.critical(
239 'Cannot unpack file %s (downloaded from %s, content-type: %s); '
240 'cannot detect archive format',
241 filename, location, content_type,
242 )
243 raise InstallationError(
244 'Cannot determine archive format of {}'.format(location)
245 )
246
[end of src/pip/_internal/utils/unpacking.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/utils/unpacking.py b/src/pip/_internal/utils/unpacking.py
--- a/src/pip/_internal/utils/unpacking.py
+++ b/src/pip/_internal/utils/unpacking.py
@@ -85,6 +85,18 @@
return True
+def is_within_directory(directory, target):
+ # type: ((Union[str, Text]), (Union[str, Text])) -> bool
+ """
+ Return true if the absolute path of target is within the directory
+ """
+ abs_directory = os.path.abspath(directory)
+ abs_target = os.path.abspath(target)
+
+ prefix = os.path.commonprefix([abs_directory, abs_target])
+ return prefix == abs_directory
+
+
def unzip_file(filename, location, flatten=True):
# type: (str, str, bool) -> None
"""
@@ -107,6 +119,12 @@
fn = split_leading_dir(name)[1]
fn = os.path.join(location, fn)
dir = os.path.dirname(fn)
+ if not is_within_directory(location, fn):
+ message = (
+ 'The zip file ({}) has a file ({}) trying to install '
+ 'outside target directory ({})'
+ )
+ raise InstallationError(message.format(filename, fn, location))
if fn.endswith('/') or fn.endswith('\\'):
# A directory
ensure_dir(fn)
@@ -166,6 +184,14 @@
# https://github.com/python/mypy/issues/1174
fn = split_leading_dir(fn)[1] # type: ignore
path = os.path.join(location, fn)
+ if not is_within_directory(location, path):
+ message = (
+ 'The tar file ({}) has a file ({}) trying to install '
+ 'outside target directory ({})'
+ )
+ raise InstallationError(
+ message.format(filename, path, location)
+ )
if member.isdir():
ensure_dir(path)
elif member.issym():
|
{"golden_diff": "diff --git a/src/pip/_internal/utils/unpacking.py b/src/pip/_internal/utils/unpacking.py\n--- a/src/pip/_internal/utils/unpacking.py\n+++ b/src/pip/_internal/utils/unpacking.py\n@@ -85,6 +85,18 @@\n return True\n \n \n+def is_within_directory(directory, target):\n+ # type: ((Union[str, Text]), (Union[str, Text])) -> bool\n+ \"\"\"\n+ Return true if the absolute path of target is within the directory\n+ \"\"\"\n+ abs_directory = os.path.abspath(directory)\n+ abs_target = os.path.abspath(target)\n+\n+ prefix = os.path.commonprefix([abs_directory, abs_target])\n+ return prefix == abs_directory\n+\n+\n def unzip_file(filename, location, flatten=True):\n # type: (str, str, bool) -> None\n \"\"\"\n@@ -107,6 +119,12 @@\n fn = split_leading_dir(name)[1]\n fn = os.path.join(location, fn)\n dir = os.path.dirname(fn)\n+ if not is_within_directory(location, fn):\n+ message = (\n+ 'The zip file ({}) has a file ({}) trying to install '\n+ 'outside target directory ({})'\n+ )\n+ raise InstallationError(message.format(filename, fn, location))\n if fn.endswith('/') or fn.endswith('\\\\'):\n # A directory\n ensure_dir(fn)\n@@ -166,6 +184,14 @@\n # https://github.com/python/mypy/issues/1174\n fn = split_leading_dir(fn)[1] # type: ignore\n path = os.path.join(location, fn)\n+ if not is_within_directory(location, path):\n+ message = (\n+ 'The tar file ({}) has a file ({}) trying to install '\n+ 'outside target directory ({})'\n+ )\n+ raise InstallationError(\n+ message.format(filename, path, location)\n+ )\n if member.isdir():\n ensure_dir(path)\n elif member.issym():\n", "issue": "Pip processes corrupt packages containing using ../ chars placing files outside target directory\n- Pip version: tested on pip versions 6.0.8 and 8.0.2,\n- Python version: 2.7\n- Operating System: Linux / Windows\n### Description:\n\npip processes invalid packages .tar/.zip archives containing \"../\" directory traversal chars and places package files outside of target directories even after pip complains and throws errors and aborts the install.\n\nPip processes corrupt packages containing using ../ chars placing files outside target directory\n- Pip version: tested on pip versions 6.0.8 and 8.0.2,\n- Python version: 2.7\n- Operating System: Linux / Windows\n### Description:\n\npip processes invalid packages .tar/.zip archives containing \"../\" directory traversal chars and places package files outside of target directories even after pip complains and throws errors and aborts the install.\n\n", "before_files": [{"content": "\"\"\"Utilities related archives.\n\"\"\"\n\n# The following comment should be removed at some point in the future.\n# mypy: strict-optional=False\n\nfrom __future__ import absolute_import\n\nimport logging\nimport os\nimport shutil\nimport stat\nimport tarfile\nimport zipfile\n\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.utils.filetypes import (\n BZ2_EXTENSIONS,\n TAR_EXTENSIONS,\n XZ_EXTENSIONS,\n ZIP_EXTENSIONS,\n)\nfrom pip._internal.utils.misc import ensure_dir\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from typing import Iterable, List, Optional, Text, Union\n\n\nlogger = logging.getLogger(__name__)\n\n\nSUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS\n\ntry:\n import bz2 # noqa\n SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS\nexcept ImportError:\n logger.debug('bz2 module is not available')\n\ntry:\n # Only for Python 3.3+\n import lzma # noqa\n SUPPORTED_EXTENSIONS += XZ_EXTENSIONS\nexcept ImportError:\n 
logger.debug('lzma module is not available')\n\n\ndef current_umask():\n \"\"\"Get the current umask which involves having to set it temporarily.\"\"\"\n mask = os.umask(0)\n os.umask(mask)\n return mask\n\n\ndef split_leading_dir(path):\n # type: (Union[str, Text]) -> List[Union[str, Text]]\n path = path.lstrip('/').lstrip('\\\\')\n if (\n '/' in path and (\n ('\\\\' in path and path.find('/') < path.find('\\\\')) or\n '\\\\' not in path\n )\n ):\n return path.split('/', 1)\n elif '\\\\' in path:\n return path.split('\\\\', 1)\n else:\n return [path, '']\n\n\ndef has_leading_dir(paths):\n # type: (Iterable[Union[str, Text]]) -> bool\n \"\"\"Returns true if all the paths have the same leading path name\n (i.e., everything is in one subdirectory in an archive)\"\"\"\n common_prefix = None\n for path in paths:\n prefix, rest = split_leading_dir(path)\n if not prefix:\n return False\n elif common_prefix is None:\n common_prefix = prefix\n elif prefix != common_prefix:\n return False\n return True\n\n\ndef unzip_file(filename, location, flatten=True):\n # type: (str, str, bool) -> None\n \"\"\"\n Unzip the file (with path `filename`) to the destination `location`. All\n files are written based on system defaults and umask (i.e. permissions are\n not preserved), except that regular file members with any execute\n permissions (user, group, or world) have \"chmod +x\" applied after being\n written. Note that for windows, any execute changes using os.chmod are\n no-ops per the python docs.\n \"\"\"\n ensure_dir(location)\n zipfp = open(filename, 'rb')\n try:\n zip = zipfile.ZipFile(zipfp, allowZip64=True)\n leading = has_leading_dir(zip.namelist()) and flatten\n for info in zip.infolist():\n name = info.filename\n fn = name\n if leading:\n fn = split_leading_dir(name)[1]\n fn = os.path.join(location, fn)\n dir = os.path.dirname(fn)\n if fn.endswith('/') or fn.endswith('\\\\'):\n # A directory\n ensure_dir(fn)\n else:\n ensure_dir(dir)\n # Don't use read() to avoid allocating an arbitrarily large\n # chunk of memory for the file's content\n fp = zip.open(name)\n try:\n with open(fn, 'wb') as destfp:\n shutil.copyfileobj(fp, destfp)\n finally:\n fp.close()\n mode = info.external_attr >> 16\n # if mode and regular file and any execute permissions for\n # user/group/world?\n if mode and stat.S_ISREG(mode) and mode & 0o111:\n # make dest file have execute for user/group/world\n # (chmod +x) no-op on windows per python docs\n os.chmod(fn, (0o777 - current_umask() | 0o111))\n finally:\n zipfp.close()\n\n\ndef untar_file(filename, location):\n # type: (str, str) -> None\n \"\"\"\n Untar the file (with path `filename`) to the destination `location`.\n All files are written based on system defaults and umask (i.e. permissions\n are not preserved), except that regular file members with any execute\n permissions (user, group, or world) have \"chmod +x\" applied after being\n written. 
Note that for windows, any execute changes using os.chmod are\n no-ops per the python docs.\n \"\"\"\n ensure_dir(location)\n if filename.lower().endswith('.gz') or filename.lower().endswith('.tgz'):\n mode = 'r:gz'\n elif filename.lower().endswith(BZ2_EXTENSIONS):\n mode = 'r:bz2'\n elif filename.lower().endswith(XZ_EXTENSIONS):\n mode = 'r:xz'\n elif filename.lower().endswith('.tar'):\n mode = 'r'\n else:\n logger.warning(\n 'Cannot determine compression type for file %s', filename,\n )\n mode = 'r:*'\n tar = tarfile.open(filename, mode)\n try:\n leading = has_leading_dir([\n member.name for member in tar.getmembers()\n ])\n for member in tar.getmembers():\n fn = member.name\n if leading:\n # https://github.com/python/mypy/issues/1174\n fn = split_leading_dir(fn)[1] # type: ignore\n path = os.path.join(location, fn)\n if member.isdir():\n ensure_dir(path)\n elif member.issym():\n try:\n # https://github.com/python/typeshed/issues/2673\n tar._extract_member(member, path) # type: ignore\n except Exception as exc:\n # Some corrupt tar files seem to produce this\n # (specifically bad symlinks)\n logger.warning(\n 'In the tar file %s the member %s is invalid: %s',\n filename, member.name, exc,\n )\n continue\n else:\n try:\n fp = tar.extractfile(member)\n except (KeyError, AttributeError) as exc:\n # Some corrupt tar files seem to produce this\n # (specifically bad symlinks)\n logger.warning(\n 'In the tar file %s the member %s is invalid: %s',\n filename, member.name, exc,\n )\n continue\n ensure_dir(os.path.dirname(path))\n with open(path, 'wb') as destfp:\n shutil.copyfileobj(fp, destfp)\n fp.close()\n # Update the timestamp (useful for cython compiled files)\n # https://github.com/python/typeshed/issues/2673\n tar.utime(member, path) # type: ignore\n # member have any execute permissions for user/group/world?\n if member.mode & 0o111:\n # make dest file have execute for user/group/world\n # no-op on windows per python docs\n os.chmod(path, (0o777 - current_umask() | 0o111))\n finally:\n tar.close()\n\n\ndef unpack_file(\n filename, # type: str\n location, # type: str\n content_type=None, # type: Optional[str]\n):\n # type: (...) -> None\n filename = os.path.realpath(filename)\n if (\n content_type == 'application/zip' or\n filename.lower().endswith(ZIP_EXTENSIONS) or\n zipfile.is_zipfile(filename)\n ):\n unzip_file(\n filename,\n location,\n flatten=not filename.endswith('.whl')\n )\n elif (\n content_type == 'application/x-gzip' or\n tarfile.is_tarfile(filename) or\n filename.lower().endswith(\n TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS\n )\n ):\n untar_file(filename, location)\n else:\n # FIXME: handle?\n # FIXME: magic signatures?\n logger.critical(\n 'Cannot unpack file %s (downloaded from %s, content-type: %s); '\n 'cannot detect archive format',\n filename, location, content_type,\n )\n raise InstallationError(\n 'Cannot determine archive format of {}'.format(location)\n )\n", "path": "src/pip/_internal/utils/unpacking.py"}]}
| 3,229 | 449 |
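The fix above guards every extracted member with an absolute-path prefix check before writing it. Below is a minimal, self-contained sketch of that idea for illustration only; the extraction directory and member names are made up, and it is not pip's actual code:

```
import os

def is_within_directory(directory, target):
    # A member is safe only if its absolute path still starts with the
    # absolute path of the extraction directory.
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    return os.path.commonprefix([abs_directory, abs_target]) == abs_directory

location = "/tmp/build"  # hypothetical extraction directory
print(is_within_directory(location, os.path.join(location, "pkg/setup.py")))   # True: stays inside
print(is_within_directory(location, os.path.join(location, "../etc/passwd")))  # False: escapes via ../
```

Since `os.path.commonprefix` compares plain strings, stricter code might use `os.path.commonpath` (Python 3.4+) so that a sibling directory such as `/tmp/build2` is not mistaken for a path inside `/tmp/build`; the sketch simply mirrors the check used in the patch.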
gh_patches_debug_9459
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-980
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Encoding error : `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte`
<!--
General questions should be asked on the mailing list [email protected].
Before submitting an issue, please fill out the following form.
-->
### System information
 - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04
 - **Modin installed from**: pip install modin[ray]
- **Modin version**: 0.6.3
- **Python version**: 3.7.3
<!--
You can obtain the Modin version with
python -c "import modin; print(modin.__version__)"
-->
### Describe the problem
<!-- Describe the problem clearly here. -->
Hello,
I'm trying to use Modin to reduce the peak memory caused by the volume of the data, so I swapped pandas for modin.pandas. I tried a simple read of a file encoded in 'latin-1' (French). With pandas everything goes smoothly, but using Modin I get an encoding error as follows:
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte`
The script used (which works fine in pandas but not in Modin):
`caract = pd.read_csv(path, sep="\t", encoding = "ISO-8859-1")`
PS: I tried other encodings (ISO-8859-1, ISO-8859-9, latin-1) with the same result: they work on pandas but not on Modin (backed by Ray).
Any solution?
Thanks
### Source code / logs
`RayTaskError: ray_worker (pid=10815, host=ubuntu)
File "pandas/_libs/parsers.pyx", line 1297, in pandas._libs.parsers.TextReader._string_convert
File "pandas/_libs/parsers.pyx", line 1520, in pandas._libs.parsers._string_box_utf8
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte
During handling of the above exception, another exception occurred:
ray_worker (pid=10815, host=ubuntu)
File "/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/modin/engines/ray/task_wrapper.py", line 8, in deploy_ray_func
return func(**args)
File "/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/modin/backends/pandas/parsers.py", line 69, in parse
pandas_df = pandas.read_csv(BytesIO(to_read), **kwargs)
File "/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py", line 685, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py", line 463, in _read
data = parser.read(nrows)
File "/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py", line 1154, in read
ret = self._engine.read(nrows)
File "/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py", line 2059, in read
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 881, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 896, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 973, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 1105, in pandas._libs.parsers.TextReader._convert_column_data
File "pandas/_libs/parsers.pyx", line 1158, in pandas._libs.parsers.TextReader._convert_tokens
File "pandas/_libs/parsers.pyx", line 1281, in pandas._libs.parsers.TextReader._convert_with_dtype
File "pandas/_libs/parsers.pyx", line 1297, in pandas._libs.parsers.TextReader._string_convert
File "pandas/_libs/parsers.pyx", line 1520, in pandas._libs.parsers._string_box_utf8
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte`
<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
</issue>
<code>
[start of modin/backends/pandas/parsers.py]
1 import numpy as np
2 import pandas
3 from pandas.core.dtypes.cast import find_common_type
4 from pandas.core.dtypes.concat import union_categoricals
5 from pandas.io.common import _infer_compression
6 from modin.engines.base.io import FileReader
7 from modin.data_management.utils import split_result_of_axis_func_pandas
8 from modin.error_message import ErrorMessage
9 from io import BytesIO
10
11
12 def _split_result_for_readers(axis, num_splits, df): # pragma: no cover
13 """Splits the DataFrame read into smaller DataFrames and handles all edge cases.
14
15 Args:
16 axis: Which axis to split over.
17 num_splits: The number of splits to create.
18 df: The DataFrame after it has been read.
19
20 Returns:
21 A list of pandas DataFrames.
22 """
23 splits = split_result_of_axis_func_pandas(axis, num_splits, df)
24 if not isinstance(splits, list):
25 splits = [splits]
26 return splits
27
28
29 def find_common_type_cat(types):
30 if all(isinstance(t, pandas.CategoricalDtype) for t in types):
31 if all(t.ordered for t in types):
32 return pandas.CategoricalDtype(
33 np.sort(np.unique([c for t in types for c in t.categories])[0]),
34 ordered=True,
35 )
36 return union_categoricals(
37 [pandas.Categorical([], dtype=t) for t in types],
38 sort_categories=all(t.ordered for t in types),
39 ).dtype
40 else:
41 return find_common_type(types)
42
43
44 class PandasParser(object):
45 @classmethod
46 def get_dtypes(cls, dtypes_ids):
47 return (
48 pandas.concat(cls.materialize(dtypes_ids), axis=1)
49 .apply(lambda row: find_common_type_cat(row.values), axis=1)
50 .squeeze(axis=0)
51 )
52
53 @classmethod
54 def single_worker_read(cls, fname, **kwargs):
55 ErrorMessage.default_to_pandas("Parameters provided")
56 # Use default args for everything
57 pandas_frame = cls.parse(fname, **kwargs)
58 if isinstance(pandas_frame, pandas.io.parsers.TextFileReader):
59 pd_read = pandas_frame.read
60 pandas_frame.read = lambda *args, **kwargs: cls.query_compiler_cls.from_pandas(
61 pd_read(*args, **kwargs), cls.frame_cls
62 )
63 return pandas_frame
64 return cls.query_compiler_cls.from_pandas(pandas_frame, cls.frame_cls)
65
66 infer_compression = _infer_compression
67
68
69 class PandasCSVParser(PandasParser):
70 @staticmethod
71 def parse(fname, **kwargs):
72 num_splits = kwargs.pop("num_splits", None)
73 start = kwargs.pop("start", None)
74 end = kwargs.pop("end", None)
75 index_col = kwargs.get("index_col", None)
76 if start is not None and end is not None:
77 # pop "compression" from kwargs because bio is uncompressed
78 bio = FileReader.file_open(fname, "rb", kwargs.pop("compression", "infer"))
79 if kwargs.pop("encoding", False):
80 header = b"" + bio.readline()
81 else:
82 header = b""
83 bio.seek(start)
84 to_read = header + bio.read(end - start)
85 bio.close()
86 pandas_df = pandas.read_csv(BytesIO(to_read), **kwargs)
87 else:
88 # This only happens when we are reading with only one worker (Default)
89 return pandas.read_csv(fname, **kwargs)
90 if index_col is not None:
91 index = pandas_df.index
92 else:
93 # The lengths will become the RangeIndex
94 index = len(pandas_df)
95 return _split_result_for_readers(1, num_splits, pandas_df) + [
96 index,
97 pandas_df.dtypes,
98 ]
99
100
101 class PandasJSONParser(PandasParser):
102 @staticmethod
103 def parse(fname, **kwargs):
104 num_splits = kwargs.pop("num_splits", None)
105 start = kwargs.pop("start", None)
106 end = kwargs.pop("end", None)
107 if start is not None and end is not None:
108 # pop "compression" from kwargs because bio is uncompressed
109 bio = FileReader.file_open(fname, "rb", kwargs.pop("compression", "infer"))
110 bio.seek(start)
111 to_read = b"" + bio.read(end - start)
112 bio.close()
113 columns = kwargs.pop("columns")
114 pandas_df = pandas.read_json(BytesIO(to_read), **kwargs)
115 else:
116 # This only happens when we are reading with only one worker (Default)
117 return pandas.read_json(fname, **kwargs)
118 if not pandas_df.columns.equals(columns):
119 raise NotImplementedError("Columns must be the same across all rows.")
120 partition_columns = pandas_df.columns
121 return _split_result_for_readers(1, num_splits, pandas_df) + [
122 len(pandas_df),
123 pandas_df.dtypes,
124 partition_columns,
125 ]
126
127
128 class PandasParquetParser(PandasParser):
129 @staticmethod
130 def parse(fname, **kwargs):
131 import pyarrow.parquet as pq
132
133 num_splits = kwargs.pop("num_splits", None)
134 columns = kwargs.get("columns", None)
135 if num_splits is None:
136 return pandas.read_parquet(fname, **kwargs)
137 kwargs["use_pandas_metadata"] = True
138 df = pq.read_table(fname, **kwargs).to_pandas()
139 if columns is not None:
140 df = df[columns]
141 # Append the length of the index here to build it externally
142 return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]
143
144
145 class PandasHDFParser(PandasParser): # pragma: no cover
146 @staticmethod
147 def parse(fname, **kwargs):
148 kwargs["key"] = kwargs.pop("_key", None)
149 num_splits = kwargs.pop("num_splits", None)
150 if num_splits is None:
151 return pandas.read_hdf(fname, **kwargs)
152 df = pandas.read_hdf(fname, **kwargs)
153 # Append the length of the index here to build it externally
154 return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]
155
156
157 class PandasFeatherParser(PandasParser):
158 @staticmethod
159 def parse(fname, **kwargs):
160 from pyarrow import feather
161
162 num_splits = kwargs.pop("num_splits", None)
163 if num_splits is None:
164 return pandas.read_feather(fname, **kwargs)
165 df = feather.read_feather(fname, **kwargs)
166 # Append the length of the index here to build it externally
167 return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]
168
169
170 class PandasSQLParser(PandasParser):
171 @staticmethod
172 def parse(sql, con, index_col, **kwargs):
173 num_splits = kwargs.pop("num_splits", None)
174 if num_splits is None:
175 return pandas.read_sql(sql, con, index_col=index_col, **kwargs)
176 df = pandas.read_sql(sql, con, index_col=index_col, **kwargs)
177 if index_col is None:
178 index = len(df)
179 else:
180 index = df.index
181 return _split_result_for_readers(1, num_splits, df) + [index, df.dtypes]
182
[end of modin/backends/pandas/parsers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modin/backends/pandas/parsers.py b/modin/backends/pandas/parsers.py
--- a/modin/backends/pandas/parsers.py
+++ b/modin/backends/pandas/parsers.py
@@ -76,7 +76,7 @@
if start is not None and end is not None:
# pop "compression" from kwargs because bio is uncompressed
bio = FileReader.file_open(fname, "rb", kwargs.pop("compression", "infer"))
- if kwargs.pop("encoding", False):
+ if kwargs.get("encoding", None) is not None:
header = b"" + bio.readline()
else:
header = b""
|
{"golden_diff": "diff --git a/modin/backends/pandas/parsers.py b/modin/backends/pandas/parsers.py\n--- a/modin/backends/pandas/parsers.py\n+++ b/modin/backends/pandas/parsers.py\n@@ -76,7 +76,7 @@\n if start is not None and end is not None:\n # pop \"compression\" from kwargs because bio is uncompressed\n bio = FileReader.file_open(fname, \"rb\", kwargs.pop(\"compression\", \"infer\"))\n- if kwargs.pop(\"encoding\", False):\n+ if kwargs.get(\"encoding\", None) is not None:\n header = b\"\" + bio.readline()\n else:\n header = b\"\"\n", "issue": "Encoding error : `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte`\n<!--\r\nGeneral questions should be asked on the mailing list [email protected].\r\n\r\nBefore submitting an issue, please fill out the following form.\r\n-->\r\n\r\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: ubuntu 18.04\r\n- **Modin installed from : pip install modin[ray]\r\n- **Modin version**: 0.6.3\r\n- **Python version**: 3.7.3\r\n<!--\r\nYou can obtain the Modin version with\r\n\r\npython -c \"import modin; print(modin.__version__)\"\r\n-->\r\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\nHello, \r\ni'm trying to use modin to reduce the memory peak due the volum of the data, so i change the pandas with modin.pandas, i try to do a simple read of a file but encoded in 'latin-1' (french) . With pandas all goes smoothly but using modin i got an error of encoding as follow : \r\n`UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte`\r\n\r\nthe script used (which works fine on pandas but not in modin ) : \r\n`caract = pd.read_csv(path, sep=\"\\t\", encoding = \"ISO-8859-1\")`\r\n\r\n\r\nps :: i tried other encoding and the same remark : works on pandas and not on modin (backed by ray) : ISO-8859-1, ISO-8859-9, latin-1\r\n\r\nany solution ??\r\n\r\nthanks \r\n### Source code / logs\r\n`RayTaskError: ray_worker (pid=10815, host=ubuntu)\r\n File \"pandas/_libs/parsers.pyx\", line 1297, in pandas._libs.parsers.TextReader._string_convert\r\n File \"pandas/_libs/parsers.pyx\", line 1520, in pandas._libs.parsers._string_box_utf8\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nray_worker (pid=10815, host=ubuntu)\r\n File \"/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/modin/engines/ray/task_wrapper.py\", line 8, in deploy_ray_func\r\n return func(**args)\r\n File \"/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/modin/backends/pandas/parsers.py\", line 69, in parse\r\n pandas_df = pandas.read_csv(BytesIO(to_read), **kwargs)\r\n File \"/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py\", line 685, in parser_f\r\n return _read(filepath_or_buffer, kwds)\r\n File \"/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py\", line 463, in _read\r\n data = parser.read(nrows)\r\n File \"/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py\", line 1154, in read\r\n ret = self._engine.read(nrows)\r\n File \"/home/lasngd/.conda/envs/pytorch/lib/python3.7/site-packages/pandas/io/parsers.py\", line 2059, in read\r\n data = self._reader.read(nrows)\r\n File \"pandas/_libs/parsers.pyx\", line 881, in pandas._libs.parsers.TextReader.read\r\n File \"pandas/_libs/parsers.pyx\", line 896, in 
pandas._libs.parsers.TextReader._read_low_memory\r\n File \"pandas/_libs/parsers.pyx\", line 973, in pandas._libs.parsers.TextReader._read_rows\r\n File \"pandas/_libs/parsers.pyx\", line 1105, in pandas._libs.parsers.TextReader._convert_column_data\r\n File \"pandas/_libs/parsers.pyx\", line 1158, in pandas._libs.parsers.TextReader._convert_tokens\r\n File \"pandas/_libs/parsers.pyx\", line 1281, in pandas._libs.parsers.TextReader._convert_with_dtype\r\n File \"pandas/_libs/parsers.pyx\", line 1297, in pandas._libs.parsers.TextReader._string_convert\r\n File \"pandas/_libs/parsers.pyx\", line 1520, in pandas._libs.parsers._string_box_utf8\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2: invalid continuation byte`\r\n<!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->\r\n\n", "before_files": [{"content": "import numpy as np\nimport pandas\nfrom pandas.core.dtypes.cast import find_common_type\nfrom pandas.core.dtypes.concat import union_categoricals\nfrom pandas.io.common import _infer_compression\nfrom modin.engines.base.io import FileReader\nfrom modin.data_management.utils import split_result_of_axis_func_pandas\nfrom modin.error_message import ErrorMessage\nfrom io import BytesIO\n\n\ndef _split_result_for_readers(axis, num_splits, df): # pragma: no cover\n \"\"\"Splits the DataFrame read into smaller DataFrames and handles all edge cases.\n\n Args:\n axis: Which axis to split over.\n num_splits: The number of splits to create.\n df: The DataFrame after it has been read.\n\n Returns:\n A list of pandas DataFrames.\n \"\"\"\n splits = split_result_of_axis_func_pandas(axis, num_splits, df)\n if not isinstance(splits, list):\n splits = [splits]\n return splits\n\n\ndef find_common_type_cat(types):\n if all(isinstance(t, pandas.CategoricalDtype) for t in types):\n if all(t.ordered for t in types):\n return pandas.CategoricalDtype(\n np.sort(np.unique([c for t in types for c in t.categories])[0]),\n ordered=True,\n )\n return union_categoricals(\n [pandas.Categorical([], dtype=t) for t in types],\n sort_categories=all(t.ordered for t in types),\n ).dtype\n else:\n return find_common_type(types)\n\n\nclass PandasParser(object):\n @classmethod\n def get_dtypes(cls, dtypes_ids):\n return (\n pandas.concat(cls.materialize(dtypes_ids), axis=1)\n .apply(lambda row: find_common_type_cat(row.values), axis=1)\n .squeeze(axis=0)\n )\n\n @classmethod\n def single_worker_read(cls, fname, **kwargs):\n ErrorMessage.default_to_pandas(\"Parameters provided\")\n # Use default args for everything\n pandas_frame = cls.parse(fname, **kwargs)\n if isinstance(pandas_frame, pandas.io.parsers.TextFileReader):\n pd_read = pandas_frame.read\n pandas_frame.read = lambda *args, **kwargs: cls.query_compiler_cls.from_pandas(\n pd_read(*args, **kwargs), cls.frame_cls\n )\n return pandas_frame\n return cls.query_compiler_cls.from_pandas(pandas_frame, cls.frame_cls)\n\n infer_compression = _infer_compression\n\n\nclass PandasCSVParser(PandasParser):\n @staticmethod\n def parse(fname, **kwargs):\n num_splits = kwargs.pop(\"num_splits\", None)\n start = kwargs.pop(\"start\", None)\n end = kwargs.pop(\"end\", None)\n index_col = kwargs.get(\"index_col\", None)\n if start is not None and end is not None:\n # pop \"compression\" from kwargs because bio is uncompressed\n 
bio = FileReader.file_open(fname, \"rb\", kwargs.pop(\"compression\", \"infer\"))\n if kwargs.pop(\"encoding\", False):\n header = b\"\" + bio.readline()\n else:\n header = b\"\"\n bio.seek(start)\n to_read = header + bio.read(end - start)\n bio.close()\n pandas_df = pandas.read_csv(BytesIO(to_read), **kwargs)\n else:\n # This only happens when we are reading with only one worker (Default)\n return pandas.read_csv(fname, **kwargs)\n if index_col is not None:\n index = pandas_df.index\n else:\n # The lengths will become the RangeIndex\n index = len(pandas_df)\n return _split_result_for_readers(1, num_splits, pandas_df) + [\n index,\n pandas_df.dtypes,\n ]\n\n\nclass PandasJSONParser(PandasParser):\n @staticmethod\n def parse(fname, **kwargs):\n num_splits = kwargs.pop(\"num_splits\", None)\n start = kwargs.pop(\"start\", None)\n end = kwargs.pop(\"end\", None)\n if start is not None and end is not None:\n # pop \"compression\" from kwargs because bio is uncompressed\n bio = FileReader.file_open(fname, \"rb\", kwargs.pop(\"compression\", \"infer\"))\n bio.seek(start)\n to_read = b\"\" + bio.read(end - start)\n bio.close()\n columns = kwargs.pop(\"columns\")\n pandas_df = pandas.read_json(BytesIO(to_read), **kwargs)\n else:\n # This only happens when we are reading with only one worker (Default)\n return pandas.read_json(fname, **kwargs)\n if not pandas_df.columns.equals(columns):\n raise NotImplementedError(\"Columns must be the same across all rows.\")\n partition_columns = pandas_df.columns\n return _split_result_for_readers(1, num_splits, pandas_df) + [\n len(pandas_df),\n pandas_df.dtypes,\n partition_columns,\n ]\n\n\nclass PandasParquetParser(PandasParser):\n @staticmethod\n def parse(fname, **kwargs):\n import pyarrow.parquet as pq\n\n num_splits = kwargs.pop(\"num_splits\", None)\n columns = kwargs.get(\"columns\", None)\n if num_splits is None:\n return pandas.read_parquet(fname, **kwargs)\n kwargs[\"use_pandas_metadata\"] = True\n df = pq.read_table(fname, **kwargs).to_pandas()\n if columns is not None:\n df = df[columns]\n # Append the length of the index here to build it externally\n return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]\n\n\nclass PandasHDFParser(PandasParser): # pragma: no cover\n @staticmethod\n def parse(fname, **kwargs):\n kwargs[\"key\"] = kwargs.pop(\"_key\", None)\n num_splits = kwargs.pop(\"num_splits\", None)\n if num_splits is None:\n return pandas.read_hdf(fname, **kwargs)\n df = pandas.read_hdf(fname, **kwargs)\n # Append the length of the index here to build it externally\n return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]\n\n\nclass PandasFeatherParser(PandasParser):\n @staticmethod\n def parse(fname, **kwargs):\n from pyarrow import feather\n\n num_splits = kwargs.pop(\"num_splits\", None)\n if num_splits is None:\n return pandas.read_feather(fname, **kwargs)\n df = feather.read_feather(fname, **kwargs)\n # Append the length of the index here to build it externally\n return _split_result_for_readers(0, num_splits, df) + [len(df.index), df.dtypes]\n\n\nclass PandasSQLParser(PandasParser):\n @staticmethod\n def parse(sql, con, index_col, **kwargs):\n num_splits = kwargs.pop(\"num_splits\", None)\n if num_splits is None:\n return pandas.read_sql(sql, con, index_col=index_col, **kwargs)\n df = pandas.read_sql(sql, con, index_col=index_col, **kwargs)\n if index_col is None:\n index = len(df)\n else:\n index = df.index\n return _split_result_for_readers(1, num_splits, df) + [index, df.dtypes]\n", 
"path": "modin/backends/pandas/parsers.py"}]}
| 3,669 | 145 |
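The one-line change above matters because `dict.pop()` removes the key: the user's `encoding` never reaches the inner `pandas.read_csv` call, which then falls back to UTF-8 and fails on latin-1 bytes such as 0xe9. The sketch below is a standalone illustration of that difference, not Modin's code; the sample bytes are invented and newer pandas versions may word the error slightly differently:

```
import pandas
from io import BytesIO

# Bytes as they would arrive from a latin-1 ("ISO-8859-1") encoded CSV chunk; 0xe9 is "é".
raw = "col\ncaf\u00e9\n".encode("ISO-8859-1")

# Buggy pattern: pop() consumes the key, so read_csv never sees the encoding.
kwargs = {"encoding": "ISO-8859-1"}
if kwargs.pop("encoding", False):
    pass  # (the real code re-reads a header line here)
try:
    pandas.read_csv(BytesIO(raw), **kwargs)
except UnicodeDecodeError as exc:
    print("pop() dropped the encoding:", exc)

# Fixed pattern: get() only inspects the key, so it is still forwarded to read_csv.
kwargs = {"encoding": "ISO-8859-1"}
if kwargs.get("encoding", None) is not None:
    pass
print(pandas.read_csv(BytesIO(raw), **kwargs))  # parses "café" correctly
```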
gh_patches_debug_5534
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1356
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unicode-objects must be encoded before hashing
Hello,
**Describe the bug**
When a user types `ssh user@server`, the honeypot produces an error.
**To Reproduce**
Steps to reproduce the behavior:
1. Launch the honeypot
2. Connect to the fake system
3. Type `ssh [email protected]`
4. See the error in the logs
**Server (please complete the following information):**
 - OS: Ubuntu 18.04
- Python: 3.6.9
**Additional context**
Traceback:
```
Traceback (most recent call last):
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/service.py", line 45, in packetReceived
return f(packet)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/connection.py", line 258, in ssh_CHANNEL_DATA
log.callWithLogger(channel, channel.dataReceived, data)
--- <exception caught here> ---
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/log.py", line 103, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/log.py", line 86, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/session.py", line 112, in dataReceived
self.client.transport.write(data)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/session.py", line 163, in write
self.proto.dataReceived(data)
File "/home/valentin/honeypot/cowrie/src/cowrie/insults/insults.py", line 98, in dataReceived
insults.ServerProtocol.dataReceived(self, data)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/insults/insults.py", line 537, in dataReceived
self.terminalProtocol.keystrokeReceived(ch, None)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/recvline.py", line 225, in keystrokeReceived
m()
File "/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py", line 320, in handle_RETURN
return recvline.RecvLine.handle_RETURN(self)
File "/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/recvline.py", line 292, in handle_RETURN
self.lineReceived(line)
File "/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py", line 182, in lineReceived
self.cmdstack[-1].lineReceived(line)
File "/home/valentin/honeypot/cowrie/src/cowrie/shell/honeypot.py", line 173, in lineReceived
self.runCommand()
File "/home/valentin/honeypot/cowrie/src/cowrie/shell/honeypot.py", line 289, in runCommand
self.protocol.call_command(pp, cmdclass, *cmd_array[0]['rargs'])
File "/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py", line 301, in call_command
HoneyPotBaseProtocol.call_command(self, pp, cmd, *args)
File "/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py", line 191, in call_command
obj.start()
File "/home/valentin/honeypot/cowrie/src/cowrie/commands/ssh.py", line 74, in start
s = hashlib.md5(host).hexdigest()
builtins.TypeError: Unicode-objects must be encoded before hashing
```
Thanks
</issue>
<code>
[start of src/cowrie/commands/ssh.py]
1 # Copyright (c) 2009 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 from __future__ import absolute_import, division
5
6 import getopt
7 import hashlib
8 import re
9 import socket
10 import time
11
12 from twisted.internet import reactor
13 from twisted.python import log
14
15 from cowrie.core.config import CowrieConfig
16 from cowrie.shell.command import HoneyPotCommand
17
18 commands = {}
19
20
21 OUTPUT = [
22 'usage: ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface]',
23 ' [-b bind_address] [-c cipher_spec] [-D [bind_address:]port]',
24 ' [-E log_file] [-e escape_char] [-F configfile] [-I pkcs11]',
25 ' [-i identity_file] [-J [user@]host[:port]] [-L address]',
26 ' [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]', # noqa
27 ' [-Q query_option] [-R address] [-S ctl_path] [-W host:port]',
28 ' [-w local_tun[:remote_tun]] destination [command]'
29 ]
30
31
32 class command_ssh(HoneyPotCommand):
33
34 def valid_ip(self, address):
35 try:
36 socket.inet_aton(address)
37 return True
38 except Exception:
39 return False
40
41 def start(self):
42 try:
43 options = '-1246AaCfgKkMNnqsTtVvXxYb:c:D:e:F:i:L:l:m:O:o:p:R:S:w:'
44 optlist, args = getopt.getopt(self.args, options)
45 except getopt.GetoptError:
46 self.write('Unrecognized option\n')
47 self.exit()
48 for opt in optlist:
49 if opt[0] == '-V':
50 self.write(CowrieConfig().get('shell', 'ssh_version',
51 fallback="OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018")+"\n")
52 self.exit()
53 return
54 if not len(args):
55 for line in OUTPUT:
56 self.write('{0}\n'.format(line))
57 self.exit()
58 return
59 user, host = 'root', args[0]
60 for opt in optlist:
61 if opt[0] == '-l':
62 user = opt[1]
63 if args[0].count('@'):
64 user, host = args[0].split('@', 1)
65
66 if re.match('^[0-9.]+$', host):
67 if self.valid_ip(host):
68 self.ip = host
69 else:
70 self.write('ssh: Could not resolve hostname %s: \
71 Name or service not known\n' % (host,))
72 self.exit()
73 else:
74 s = hashlib.md5(host).hexdigest()
75 self.ip = '.'.join([str(int(x, 16)) for x in
76 (s[0:2], s[2:4], s[4:6], s[6:8])])
77
78 self.host = host
79 self.user = user
80
81 self.write('The authenticity of host \'%s (%s)\' \
82 can\'t be established.\n' % (self.host, self.ip))
83 self.write('RSA key fingerprint is \
84 9d:30:97:8a:9e:48:0d:de:04:8d:76:3a:7b:4b:30:f8.\n')
85 self.write('Are you sure you want to continue connecting (yes/no)? ')
86 self.callbacks = [self.yesno, self.wait]
87
88 def yesno(self, line):
89 self.write('Warning: Permanently added \'{}\' (RSA) to the \
90 list of known hosts.\n'.format(self.host))
91 self.write('%s@%s\'s password: ' % (self.user, self.host))
92 self.protocol.password_input = True
93
94 def wait(self, line):
95 reactor.callLater(2, self.finish, line)
96
97 def finish(self, line):
98 self.pause = False
99 rest, host = self.host, 'localhost'
100 rest = self.host.strip().split('.')
101 if len(rest) and rest[0].isalpha():
102 host = rest[0]
103 self.protocol.hostname = host
104 self.protocol.cwd = '/root'
105 if not self.fs.exists(self.protocol.cwd):
106 self.protocol.cwd = '/'
107 self.protocol.password_input = False
108 self.write('Linux {} 2.6.26-2-686 #1 SMP Wed Nov 4 20:45:37 \
109 UTC 2009 i686\n'.format(self.protocol.hostname))
110 self.write('Last login: %s from 192.168.9.4\n'
111 % (time.ctime(time.time() - 123123),))
112 self.exit()
113
114 def lineReceived(self, line):
115 log.msg('INPUT (ssh):', line)
116 if len(self.callbacks):
117 self.callbacks.pop(0)(line)
118
119
120 commands['/usr/bin/ssh'] = command_ssh
121 commands['ssh'] = command_ssh
122
[end of src/cowrie/commands/ssh.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/commands/ssh.py b/src/cowrie/commands/ssh.py
--- a/src/cowrie/commands/ssh.py
+++ b/src/cowrie/commands/ssh.py
@@ -71,7 +71,7 @@
Name or service not known\n' % (host,))
self.exit()
else:
- s = hashlib.md5(host).hexdigest()
+ s = hashlib.md5(host.encode()).hexdigest()
self.ip = '.'.join([str(int(x, 16)) for x in
(s[0:2], s[2:4], s[4:6], s[6:8])])
|
{"golden_diff": "diff --git a/src/cowrie/commands/ssh.py b/src/cowrie/commands/ssh.py\n--- a/src/cowrie/commands/ssh.py\n+++ b/src/cowrie/commands/ssh.py\n@@ -71,7 +71,7 @@\n Name or service not known\\n' % (host,))\n self.exit()\n else:\n- s = hashlib.md5(host).hexdigest()\n+ s = hashlib.md5(host.encode()).hexdigest()\n self.ip = '.'.join([str(int(x, 16)) for x in\n (s[0:2], s[2:4], s[4:6], s[6:8])])\n", "issue": "Unicode-objects must be encoded before hashing\nHello,\r\n\r\n**Describe the bug**\r\nWhen user type ssh user@server the honeypot produce an error.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Lanch honeypot\r\n2. Connect to fake system\r\n3. type ssh [email protected]\r\n4. See error in logs\r\n\r\n\r\n**Server (please complete the following information):**\r\n - OS: Ubutun 18.04\r\n - Python: 3.6.9\r\n\r\n**Additional context**\r\nTraceback:\r\n```\r\n\tTraceback (most recent call last):\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py\", line 122, in callWithContext\r\n\t return self.currentContext().callWithContext(ctx, func, *args, **kw)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py\", line 85, in callWithContext\r\n\t return func(*args,**kw)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/service.py\", line 45, in packetReceived\r\n\t return f(packet)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/connection.py\", line 258, in ssh_CHANNEL_DATA\r\n\t log.callWithLogger(channel, channel.dataReceived, data)\r\n\t--- <exception caught here> ---\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/log.py\", line 103, in callWithLogger\r\n\t return callWithContext({\"system\": lp}, func, *args, **kw)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/log.py\", line 86, in callWithContext\r\n\t return context.call({ILogContext: newCtx}, func, *args, **kw)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py\", line 122, in callWithContext\r\n\t return self.currentContext().callWithContext(ctx, func, *args, **kw)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/python/context.py\", line 85, in callWithContext\r\n\t return func(*args,**kw)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/session.py\", line 112, in dataReceived\r\n\t self.client.transport.write(data)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/ssh/session.py\", line 163, in write\r\n\t self.proto.dataReceived(data)\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/insults/insults.py\", line 98, in dataReceived\r\n\t insults.ServerProtocol.dataReceived(self, data)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/insults/insults.py\", line 537, in dataReceived\r\n\t self.terminalProtocol.keystrokeReceived(ch, None)\r\n\t File \"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/recvline.py\", line 225, in keystrokeReceived\r\n\t m()\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py\", line 320, in handle_RETURN\r\n\t return recvline.RecvLine.handle_RETURN(self)\r\n\t File 
\"/home/valentin/honeypot/cowrie/cowrie-env/lib/python3.6/site-packages/twisted/conch/recvline.py\", line 292, in handle_RETURN\r\n\t self.lineReceived(line)\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py\", line 182, in lineReceived\r\n\t self.cmdstack[-1].lineReceived(line)\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/shell/honeypot.py\", line 173, in lineReceived\r\n\t self.runCommand()\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/shell/honeypot.py\", line 289, in runCommand\r\n\t self.protocol.call_command(pp, cmdclass, *cmd_array[0]['rargs'])\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py\", line 301, in call_command\r\n\t HoneyPotBaseProtocol.call_command(self, pp, cmd, *args)\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/shell/protocol.py\", line 191, in call_command\r\n\t obj.start()\r\n\t File \"/home/valentin/honeypot/cowrie/src/cowrie/commands/ssh.py\", line 74, in start\r\n\t s = hashlib.md5(host).hexdigest()\r\n\tbuiltins.TypeError: Unicode-objects must be encoded before hashing\r\n\t\r\n\r\n```\r\nThanks\n", "before_files": [{"content": "# Copyright (c) 2009 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\nfrom __future__ import absolute_import, division\n\nimport getopt\nimport hashlib\nimport re\nimport socket\nimport time\n\nfrom twisted.internet import reactor\nfrom twisted.python import log\n\nfrom cowrie.core.config import CowrieConfig\nfrom cowrie.shell.command import HoneyPotCommand\n\ncommands = {}\n\n\nOUTPUT = [\n 'usage: ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface]',\n ' [-b bind_address] [-c cipher_spec] [-D [bind_address:]port]',\n ' [-E log_file] [-e escape_char] [-F configfile] [-I pkcs11]',\n ' [-i identity_file] [-J [user@]host[:port]] [-L address]',\n ' [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]', # noqa\n ' [-Q query_option] [-R address] [-S ctl_path] [-W host:port]',\n ' [-w local_tun[:remote_tun]] destination [command]'\n]\n\n\nclass command_ssh(HoneyPotCommand):\n\n def valid_ip(self, address):\n try:\n socket.inet_aton(address)\n return True\n except Exception:\n return False\n\n def start(self):\n try:\n options = '-1246AaCfgKkMNnqsTtVvXxYb:c:D:e:F:i:L:l:m:O:o:p:R:S:w:'\n optlist, args = getopt.getopt(self.args, options)\n except getopt.GetoptError:\n self.write('Unrecognized option\\n')\n self.exit()\n for opt in optlist:\n if opt[0] == '-V':\n self.write(CowrieConfig().get('shell', 'ssh_version',\n fallback=\"OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018\")+\"\\n\")\n self.exit()\n return\n if not len(args):\n for line in OUTPUT:\n self.write('{0}\\n'.format(line))\n self.exit()\n return\n user, host = 'root', args[0]\n for opt in optlist:\n if opt[0] == '-l':\n user = opt[1]\n if args[0].count('@'):\n user, host = args[0].split('@', 1)\n\n if re.match('^[0-9.]+$', host):\n if self.valid_ip(host):\n self.ip = host\n else:\n self.write('ssh: Could not resolve hostname %s: \\\n Name or service not known\\n' % (host,))\n self.exit()\n else:\n s = hashlib.md5(host).hexdigest()\n self.ip = '.'.join([str(int(x, 16)) for x in\n (s[0:2], s[2:4], s[4:6], s[6:8])])\n\n self.host = host\n self.user = user\n\n self.write('The authenticity of host \\'%s (%s)\\' \\\n can\\'t be established.\\n' % (self.host, self.ip))\n self.write('RSA key fingerprint is \\\n 9d:30:97:8a:9e:48:0d:de:04:8d:76:3a:7b:4b:30:f8.\\n')\n self.write('Are you sure you want to continue connecting (yes/no)? 
')\n self.callbacks = [self.yesno, self.wait]\n\n def yesno(self, line):\n self.write('Warning: Permanently added \\'{}\\' (RSA) to the \\\n list of known hosts.\\n'.format(self.host))\n self.write('%s@%s\\'s password: ' % (self.user, self.host))\n self.protocol.password_input = True\n\n def wait(self, line):\n reactor.callLater(2, self.finish, line)\n\n def finish(self, line):\n self.pause = False\n rest, host = self.host, 'localhost'\n rest = self.host.strip().split('.')\n if len(rest) and rest[0].isalpha():\n host = rest[0]\n self.protocol.hostname = host\n self.protocol.cwd = '/root'\n if not self.fs.exists(self.protocol.cwd):\n self.protocol.cwd = '/'\n self.protocol.password_input = False\n self.write('Linux {} 2.6.26-2-686 #1 SMP Wed Nov 4 20:45:37 \\\n UTC 2009 i686\\n'.format(self.protocol.hostname))\n self.write('Last login: %s from 192.168.9.4\\n'\n % (time.ctime(time.time() - 123123),))\n self.exit()\n\n def lineReceived(self, line):\n log.msg('INPUT (ssh):', line)\n if len(self.callbacks):\n self.callbacks.pop(0)(line)\n\n\ncommands['/usr/bin/ssh'] = command_ssh\ncommands['ssh'] = command_ssh\n", "path": "src/cowrie/commands/ssh.py"}]}
| 3,191 | 149 |
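The underlying rule is that in Python 3 `hashlib` digests accept bytes, not `str`, so the patch simply encodes the hostname before hashing. Below is a minimal standalone sketch of the failing call and the patched one; the hostname is an arbitrary example and the exact TypeError wording varies across Python versions:

```
import hashlib

host = "example.com"  # arbitrary hostname for illustration

try:
    hashlib.md5(host).hexdigest()  # pre-patch call with a str argument
except TypeError as exc:
    print(exc)  # e.g. "Unicode-objects must be encoded before hashing"

digest = hashlib.md5(host.encode()).hexdigest()  # patched call with bytes
# The command then fabricates a stable fake IP from the first four digest bytes.
fake_ip = ".".join(str(int(digest[i:i + 2], 16)) for i in range(0, 8, 2))
print(digest, fake_ip)
```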
gh_patches_debug_32576
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1735
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kinto onboarding experience (part 2)
This is a followup to #1733 with assorted feedback on onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).
I started by updating my `config/kinto.ini` with:
```
kinto.includes = kinto.plugins.default_bucket
kinto.plugins.admin
kinto.plugins.accounts
```
Restarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. I hadn't created any account yet, but out of curiosity I tried to log in using silly:silly:

Tadaa:

Wait, what?
Oh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.
Actually, I didn't go any further with the admin as it looks broken to me. This is a little sad.
</issue>
<code>
[start of kinto/plugins/accounts/__init__.py]
1 from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
2 from pyramid.exceptions import ConfigurationError
3
4 ACCOUNT_CACHE_KEY = 'accounts:{}:verified'
5 ACCOUNT_POLICY_NAME = 'account'
6
7
8 def includeme(config):
9 config.add_api_capability(
10 'accounts',
11 description='Manage user accounts.',
12 url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')
13
14 config.scan('kinto.plugins.accounts.views')
15
16 PERMISSIONS_INHERITANCE_TREE['root'].update({
17 'account:create': {}
18 })
19 PERMISSIONS_INHERITANCE_TREE['account'] = {
20 'write': {'account': ['write']},
21 'read': {'account': ['write', 'read']}
22 }
23
24 # Add some safety to avoid weird behaviour with basicauth default policy.
25 settings = config.get_settings()
26 auth_policies = settings['multiauth.policies']
27 if 'basicauth' in auth_policies and 'account' in auth_policies:
28 if auth_policies.index('basicauth') < auth_policies.index('account'):
29 error_msg = ("'basicauth' should not be mentioned before 'account' "
30 "in 'multiauth.policies' setting.")
31 raise ConfigurationError(error_msg)
32
33 # We assume anyone in account_create_principals is to create
34 # accounts for other people.
35 # No one can create accounts for other people unless they are an
36 # "admin", defined as someone matching account_write_principals.
37 # Therefore any account that is in account_create_principals
38 # should be in account_write_principals too.
39 creators = set(settings.get('account_create_principals', '').split())
40 admins = set(settings.get('account_write_principals', '').split())
41 cant_create_anything = creators.difference(admins)
42 # system.Everyone isn't an account.
43 cant_create_anything.discard('system.Everyone')
44 if cant_create_anything:
45 message = ('Configuration has some principals in account_create_principals '
46 'but not in account_write_principals. These principals will only be '
47 'able to create their own accounts. This may not be what you want.\n'
48 'If you want these users to be able to create accounts for other users, '
49 'add them to account_write_principals.\n'
50 'Affected users: {}'.format(list(cant_create_anything)))
51
52 raise ConfigurationError(message)
53
[end of kinto/plugins/accounts/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py
--- a/kinto/plugins/accounts/__init__.py
+++ b/kinto/plugins/accounts/__init__.py
@@ -1,9 +1,13 @@
+import re
+
from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
from pyramid.exceptions import ConfigurationError
ACCOUNT_CACHE_KEY = 'accounts:{}:verified'
ACCOUNT_POLICY_NAME = 'account'
+DOCS_URL = "https://kinto.readthedocs.io/en/stable/api/1.x/accounts.html"
+
def includeme(config):
config.add_api_capability(
@@ -21,13 +25,28 @@
'read': {'account': ['write', 'read']}
}
- # Add some safety to avoid weird behaviour with basicauth default policy.
settings = config.get_settings()
+
+ # Check that the account policy is mentioned in config if included.
+ accountClass = 'AccountsAuthenticationPolicy'
+ policy = None
+ for k, v in settings.items():
+ m = re.match('multiauth\.policy\.(.*)\.use', k)
+ if m:
+ if v.endswith(accountClass):
+ policy = m.group(1)
+
+ if not policy:
+ error_msg = ("Account policy missing the 'multiauth.policy.*.use' "
+ "setting. See {} in docs {}.").format(accountClass, DOCS_URL)
+ raise ConfigurationError(error_msg)
+
+ # Add some safety to avoid weird behaviour with basicauth default policy.
auth_policies = settings['multiauth.policies']
- if 'basicauth' in auth_policies and 'account' in auth_policies:
- if auth_policies.index('basicauth') < auth_policies.index('account'):
- error_msg = ("'basicauth' should not be mentioned before 'account' "
- "in 'multiauth.policies' setting.")
+ if 'basicauth' in auth_policies and policy in auth_policies:
+ if auth_policies.index('basicauth') < auth_policies.index(policy):
+ error_msg = ("'basicauth' should not be mentioned before '%s' "
+ "in 'multiauth.policies' setting.") % policy
raise ConfigurationError(error_msg)
# We assume anyone in account_create_principals is to create
|
{"golden_diff": "diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py\n--- a/kinto/plugins/accounts/__init__.py\n+++ b/kinto/plugins/accounts/__init__.py\n@@ -1,9 +1,13 @@\n+import re\n+\n from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\n from pyramid.exceptions import ConfigurationError\n \n ACCOUNT_CACHE_KEY = 'accounts:{}:verified'\n ACCOUNT_POLICY_NAME = 'account'\n \n+DOCS_URL = \"https://kinto.readthedocs.io/en/stable/api/1.x/accounts.html\"\n+\n \n def includeme(config):\n config.add_api_capability(\n@@ -21,13 +25,28 @@\n 'read': {'account': ['write', 'read']}\n }\n \n- # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n+\n+ # Check that the account policy is mentioned in config if included.\n+ accountClass = 'AccountsAuthenticationPolicy'\n+ policy = None\n+ for k, v in settings.items():\n+ m = re.match('multiauth\\.policy\\.(.*)\\.use', k)\n+ if m:\n+ if v.endswith(accountClass):\n+ policy = m.group(1)\n+\n+ if not policy:\n+ error_msg = (\"Account policy missing the 'multiauth.policy.*.use' \"\n+ \"setting. See {} in docs {}.\").format(accountClass, DOCS_URL)\n+ raise ConfigurationError(error_msg)\n+\n+ # Add some safety to avoid weird behaviour with basicauth default policy.\n auth_policies = settings['multiauth.policies']\n- if 'basicauth' in auth_policies and 'account' in auth_policies:\n- if auth_policies.index('basicauth') < auth_policies.index('account'):\n- error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n- \"in 'multiauth.policies' setting.\")\n+ if 'basicauth' in auth_policies and policy in auth_policies:\n+ if auth_policies.index('basicauth') < auth_policies.index(policy):\n+ error_msg = (\"'basicauth' should not be mentioned before '%s' \"\n+ \"in 'multiauth.policies' setting.\") % policy\n raise ConfigurationError(error_msg)\n \n # We assume anyone in account_create_principals is to create\n", "issue": "Kinto onboarding experience (part 2)\nThese is a followup from #1733 with random feedback with onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).\r\n\r\n\r\n\r\nStarted with updating my `config/kinto.ini` with:\r\n\r\n```\r\nkinto.includes = kinto.plugins.default_bucket\r\n kinto.plugins.admin\r\n kinto.plugins.accounts\r\n```\r\n\r\nRestarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. I never created any Account just yet, though out of curiosity I try to log in using silly:silly:\r\n\r\n\r\n\r\nTadaa:\r\n\r\n\r\n\r\nWait, what?\r\n\r\nOh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.\r\n\r\nActually, I didn't went further with toying around with the admin as it looks broken to me. This is a little sad.\nKinto onboarding experience (part 2)\nThese is a followup from #1733 with random feedback with onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).\r\n\r\n\r\n\r\nStarted with updating my `config/kinto.ini` with:\r\n\r\n```\r\nkinto.includes = kinto.plugins.default_bucket\r\n kinto.plugins.admin\r\n kinto.plugins.accounts\r\n```\r\n\r\nRestarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. 
I never created any Account just yet, though out of curiosity I try to log in using silly:silly:\r\n\r\n\r\n\r\nTadaa:\r\n\r\n\r\n\r\nWait, what?\r\n\r\nOh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.\r\n\r\nActually, I didn't went further with toying around with the admin as it looks broken to me. This is a little sad.\n", "before_files": [{"content": "from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\nfrom pyramid.exceptions import ConfigurationError\n\nACCOUNT_CACHE_KEY = 'accounts:{}:verified'\nACCOUNT_POLICY_NAME = 'account'\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE['root'].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n\n # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n auth_policies = settings['multiauth.policies']\n if 'basicauth' in auth_policies and 'account' in auth_policies:\n if auth_policies.index('basicauth') < auth_policies.index('account'):\n error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n \"in 'multiauth.policies' setting.\")\n raise ConfigurationError(error_msg)\n\n # We assume anyone in account_create_principals is to create\n # accounts for other people.\n # No one can create accounts for other people unless they are an\n # \"admin\", defined as someone matching account_write_principals.\n # Therefore any account that is in account_create_principals\n # should be in account_write_principals too.\n creators = set(settings.get('account_create_principals', '').split())\n admins = set(settings.get('account_write_principals', '').split())\n cant_create_anything = creators.difference(admins)\n # system.Everyone isn't an account.\n cant_create_anything.discard('system.Everyone')\n if cant_create_anything:\n message = ('Configuration has some principals in account_create_principals '\n 'but not in account_write_principals. These principals will only be '\n 'able to create their own accounts. This may not be what you want.\\n'\n 'If you want these users to be able to create accounts for other users, '\n 'add them to account_write_principals.\\n'\n 'Affected users: {}'.format(list(cant_create_anything)))\n\n raise ConfigurationError(message)\n", "path": "kinto/plugins/accounts/__init__.py"}]}
| 1,622 | 529 |
gh_patches_debug_22977
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-410
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve documentation of cms.apps (CmsConfig)
Explain what cms.apps is doing, what CmsConfig is for and add docstrings of the following format:
```
"""
[Summary]
:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
```
</issue>
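To make the requested format concrete, here is a small, self-invented example function documented in exactly that reST style (the function itself is not part of the CMS code base):

```python
def divide(dividend, divisor=1):
    """
    Divide one number by another.

    :param dividend: The number to be divided
    :type dividend: float
    :param divisor: The number to divide by, defaults to 1
    :type divisor: float, optional
    :raises ZeroDivisionError: If ``divisor`` is zero
    :return: The quotient of the two numbers
    :rtype: float
    """
    if divisor == 0:
        raise ZeroDivisionError("divisor must not be zero")
    return dividend / divisor


print(divide(9, 3))  # 3.0
```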
<code>
[start of src/cms/apps.py]
1 """
2 Django related class representing a config of an app
3 """
4 import logging
5 import sys
6 from django.conf import settings
7 from django.apps import AppConfig
8
9 logger = logging.getLogger(__name__)
10
11 class CmsConfig(AppConfig):
12 """
13 Class inheriting the django AppConfig
14 """
15
16 name = 'cms'
17
18 def ready(self):
19 if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:
20 logger.error("You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!")
21 sys.exit(1)
22
[end of src/cms/apps.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cms/apps.py b/src/cms/apps.py
--- a/src/cms/apps.py
+++ b/src/cms/apps.py
@@ -1,6 +1,3 @@
-"""
-Django related class representing a config of an app
-"""
import logging
import sys
from django.conf import settings
@@ -10,12 +7,23 @@
class CmsConfig(AppConfig):
"""
- Class inheriting the django AppConfig
+ This class represents the Django-configuration of the backend.
+
+ See :class:`django.apps.AppConfig` for more information.
+
+ :param name: The name of the app
+ :type name: str
"""
name = 'cms'
def ready(self):
+ """
+ This function gets executed exactly once each time the cms starts. We use it to check wether the secret key was
+ not changed in production mode and show an error message if this is the case.
+
+ See :meth:`django.apps.AppConfig.ready` for more information.
+ """
if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:
logger.error("You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!")
sys.exit(1)
|
{"golden_diff": "diff --git a/src/cms/apps.py b/src/cms/apps.py\n--- a/src/cms/apps.py\n+++ b/src/cms/apps.py\n@@ -1,6 +1,3 @@\n-\"\"\"\n-Django related class representing a config of an app\n-\"\"\"\n import logging\n import sys\n from django.conf import settings\n@@ -10,12 +7,23 @@\n \n class CmsConfig(AppConfig):\n \"\"\"\n- Class inheriting the django AppConfig\n+ This class represents the Django-configuration of the backend.\n+\n+ See :class:`django.apps.AppConfig` for more information.\n+\n+ :param name: The name of the app\n+ :type name: str\n \"\"\"\n \n name = 'cms'\n \n def ready(self):\n+ \"\"\"\n+ This function gets executed exactly once each time the cms starts. We use it to check wether the secret key was\n+ not changed in production mode and show an error message if this is the case.\n+\n+ See :meth:`django.apps.AppConfig.ready` for more information.\n+ \"\"\"\n if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:\n logger.error(\"You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!\")\n sys.exit(1)\n", "issue": "Improve documentation of cms.apps (CmsConfig)\nExplain what cms.apps is doing, what CmsConfig is for and add docstrings of the following format:\r\n```\r\n\"\"\"\r\n[Summary]\r\n\r\n:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]\r\n:type [ParamName]: [ParamType](, optional)\r\n...\r\n:raises [ErrorType]: [ErrorDescription]\r\n...\r\n:return: [ReturnDescription]\r\n:rtype: [ReturnType]\r\n\"\"\"\r\n```\n", "before_files": [{"content": "\"\"\"\nDjango related class representing a config of an app\n\"\"\"\nimport logging\nimport sys\nfrom django.conf import settings\nfrom django.apps import AppConfig\n\nlogger = logging.getLogger(__name__)\n\nclass CmsConfig(AppConfig):\n \"\"\"\n Class inheriting the django AppConfig\n \"\"\"\n\n name = 'cms'\n\n def ready(self):\n if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:\n logger.error(\"You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!\")\n sys.exit(1)\n", "path": "src/cms/apps.py"}]}
| 817 | 312 |
gh_patches_debug_19895
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-57
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Schema getter should return public, but not mathesar_types
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
If a user wants to create a table the `public` schema, they can't currently, because the logic in the `db.schemas.get_all_schemas` function ignores it. This means when they try, an error is thrown. This is especially a problem when they've imported a DB, since most tables are in the `public` schema in most installations of PostgreSQL in the wild.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The public schema should be available for holding mathesar tables.
**To Reproduce**
Please try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example.
Start the webapp using the README. Try to upload a CSV to the `public` schema. See the error.
**Have a nice day!**
</issue>
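For illustration only, the snippet below contrasts the reported behaviour (filtering out `public`) with a variant that keeps `public` visible while still hiding internal schemas; the schema names are hard-coded stand-ins for whatever `sqlalchemy.inspect(engine).get_schema_names()` would return on a real database:

```python
# Stand-in for sqlalchemy.inspect(engine).get_schema_names().
ALL_SCHEMAS = ["public", "information_schema", "mathesar_types", "my_data"]


def get_all_schemas_current(schema_names):
    """Mirrors the reported behaviour: 'public' is filtered out entirely."""
    return [s for s in schema_names if s not in ("public", "information_schema")]


def get_mathesar_schemas(schema_names, types_schema="mathesar_types"):
    """Keeps 'public' available while hiding internal schemas (illustrative only)."""
    return [s for s in schema_names if s not in (types_schema, "information_schema")]


print(get_all_schemas_current(ALL_SCHEMAS))  # ['mathesar_types', 'my_data'] -- no 'public'
print(get_mathesar_schemas(ALL_SCHEMAS))     # ['public', 'my_data']
```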
<code>
[start of db/schemas.py]
1 from sqlalchemy.schema import CreateSchema
2 from sqlalchemy import inspect
3
4
5 def get_all_schemas(engine):
6 inspector = inspect(engine)
7 return [
8 schema
9 for schema in inspector.get_schema_names()
10 if schema not in ["public", "information_schema"]
11 ]
12
13
14 def schema_exists(schema, engine):
15 return schema in get_all_schemas(engine)
16
17
18 def create_schema(schema, engine):
19 """
20 This method creates a Postgres schema.
21 """
22 if not schema_exists(schema, engine):
23 with engine.begin() as connection:
24 connection.execute(CreateSchema(schema))
25
[end of db/schemas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/schemas.py b/db/schemas.py
--- a/db/schemas.py
+++ b/db/schemas.py
@@ -1,24 +1,28 @@
from sqlalchemy.schema import CreateSchema
from sqlalchemy import inspect
+from db import types
-def get_all_schemas(engine):
- inspector = inspect(engine)
+TYPES_SCHEMA = types.base.SCHEMA
+
+
+def get_mathesar_schemas(engine):
return [
schema
- for schema in inspector.get_schema_names()
- if schema not in ["public", "information_schema"]
+ for schema in get_all_schemas(engine)
+ if schema not in [TYPES_SCHEMA, "information_schema"]
]
-def schema_exists(schema, engine):
- return schema in get_all_schemas(engine)
+def get_all_schemas(engine):
+ inspector = inspect(engine)
+ return inspector.get_schema_names()
def create_schema(schema, engine):
"""
This method creates a Postgres schema.
"""
- if not schema_exists(schema, engine):
+ if schema not in get_all_schemas(engine):
with engine.begin() as connection:
connection.execute(CreateSchema(schema))
|
{"golden_diff": "diff --git a/db/schemas.py b/db/schemas.py\n--- a/db/schemas.py\n+++ b/db/schemas.py\n@@ -1,24 +1,28 @@\n from sqlalchemy.schema import CreateSchema\n from sqlalchemy import inspect\n \n+from db import types\n \n-def get_all_schemas(engine):\n- inspector = inspect(engine)\n+TYPES_SCHEMA = types.base.SCHEMA\n+\n+\n+def get_mathesar_schemas(engine):\n return [\n schema\n- for schema in inspector.get_schema_names()\n- if schema not in [\"public\", \"information_schema\"]\n+ for schema in get_all_schemas(engine)\n+ if schema not in [TYPES_SCHEMA, \"information_schema\"]\n ]\n \n \n-def schema_exists(schema, engine):\n- return schema in get_all_schemas(engine)\n+def get_all_schemas(engine):\n+ inspector = inspect(engine)\n+ return inspector.get_schema_names()\n \n \n def create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n- if not schema_exists(schema, engine):\n+ if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n", "issue": "Schema getter should return public, but not mathesar_types\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nIf a user wants to create a table the `public` schema, they can't currently, because the logic in the `db.schemas.get_all_schemas` function ignores it. This means when they try, an error is thrown. This is especially a problem when they've imported a DB, since most tables are in the `public` schema in most installations of PostgreSQL in the wild.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe public schema should be available for holding mathesar tables.\r\n\r\n**To Reproduce**\r\nPlease try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example.\r\n\r\nStart the webapp using the README. Try to upload a CSV to the `public` schema. See the error.\r\n\r\n**Have a nice day!**\r\n\n", "before_files": [{"content": "from sqlalchemy.schema import CreateSchema\nfrom sqlalchemy import inspect\n\n\ndef get_all_schemas(engine):\n inspector = inspect(engine)\n return [\n schema\n for schema in inspector.get_schema_names()\n if schema not in [\"public\", \"information_schema\"]\n ]\n\n\ndef schema_exists(schema, engine):\n return schema in get_all_schemas(engine)\n\n\ndef create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n if not schema_exists(schema, engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n", "path": "db/schemas.py"}]}
| 901 | 248 |
gh_patches_debug_30876
|
rasdani/github-patches
|
git_diff
|
google__jax-12203
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pure_callback passes jax.DeviceArray to the callback on CPU when not jitted
### Description
As title.
This only happens when the callback is executed outside of jit boundaries.
This is different from the documented behaviour.
```python
➜ python
Python 3.10.6 (main, Aug 23 2022, 11:35:18) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax
>>> def test(x):
... print(type(x))
... return x
...
>>> def f(x):
... return jax.pure_callback(test, x, x)
...
>>> x= jax.numpy.ones(3)
>>> f(x)
<class 'jaxlib.xla_extension.DeviceArray'>
DeviceArray([1., 1., 1.], dtype=float32)
>>> jax.jit(f)(x)
<class 'numpy.ndarray'>
DeviceArray([1., 1., 1.], dtype=float32)
```
### What jax/jaxlib version are you using?
jax 0.3.17
### Which accelerator(s) are you using?
CPU
### Additional System Info
MacOs
</issue>
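A defensive pattern for this kind of discrepancy is to normalise whatever array type the runtime hands the callback before relying on NumPy-only behaviour. The sketch below assumes the versions mentioned in the report and is meant as a workaround illustration, not as the library's documented contract:

```python
import numpy as np
import jax


def times_two(x):
    # Normalise whatever array type the runtime hands us (NumPy array or
    # DeviceArray) before using NumPy-only APIs.
    x = np.asarray(x)
    return x * 2


def f(x):
    # The result has the same shape/dtype as the input in this sketch.
    return jax.pure_callback(times_two, jax.ShapeDtypeStruct(x.shape, x.dtype), x)


x = jax.numpy.ones(3)
print(f(x))           # works outside jit
print(jax.jit(f)(x))  # works under jit as well
```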
<code>
[start of jax/_src/callback.py]
1 # Copyright 2022 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Module for JAX callbacks."""
15 from __future__ import annotations
16
17 from typing import Any, Callable, Sequence
18
19 from jax import core
20 from jax import tree_util
21 from jax._src import dtypes
22 from jax._src import lib as jaxlib
23 from jax._src import util
24 from jax.interpreters import ad
25 from jax.interpreters import batching
26 from jax.interpreters import mlir
27 import numpy as np
28
29 # `pure_callback_p` is the main primitive for staging out Python pure callbacks.
30 pure_callback_p = core.Primitive("pure_callback")
31 pure_callback_p.multiple_results = True
32
33 map, unsafe_map = util.safe_map, map
34
35
36 @pure_callback_p.def_impl
37 def pure_callback_impl(*args, result_avals, callback: Callable[..., Any],
38 vectorized: bool):
39 del vectorized, result_avals
40 return callback(*args)
41
42
43 @pure_callback_p.def_abstract_eval
44 def pure_callback_abstract_eval(*avals, callback: Callable[..., Any],
45 result_avals, vectorized: bool):
46 del avals, callback, vectorized
47 return result_avals
48
49
50 def pure_callback_jvp_rule(*args, **kwargs):
51 del args, kwargs
52 raise ValueError(
53 "Pure callbacks do not support JVP. "
54 "Please use `jax.custom_jvp` to use callbacks while taking gradients.")
55
56
57 ad.primitive_jvps[pure_callback_p] = pure_callback_jvp_rule
58
59
60 def pure_callback_transpose_rule(*args, **kwargs):
61 del args, kwargs
62 raise ValueError(
63 "Pure callbacks do not support transpose. "
64 "Please use `jax.custom_vjp` to use callbacks while taking gradients.")
65
66 ad.primitive_transposes[pure_callback_p] = pure_callback_transpose_rule
67
68
69 def pure_callback_batching_rule(args, dims, *, callback, vectorized: bool,
70 result_avals: Sequence[core.ShapedArray]):
71 axis_size = next(a.shape[0] for a, d in zip(args, dims)
72 if d is not batching.not_mapped)
73 new_args = [arg if dim is batching.not_mapped else
74 batching.moveaxis(arg, dim, 0) for arg, dim in zip(args, dims)]
75 if vectorized:
76 result_avals = tuple(
77 core.unmapped_aval(axis_size, core.no_axis_name, 0, aval) # type: ignore
78 for aval in result_avals)
79 outvals = pure_callback_p.bind(
80 *new_args, callback=callback, vectorized=vectorized,
81 result_avals=result_avals)
82 else:
83 is_batched = [d is not batching.not_mapped for d in dims]
84 unbatched_args, batched_args = util.partition_list(is_batched, new_args)
85 def _batch_fun(*batched_args):
86 merged_args = util.merge_lists(is_batched, unbatched_args, batched_args)
87 return pure_callback_p.bind(
88 *merged_args, callback=callback, result_avals=result_avals,
89 vectorized=vectorized)
90 from jax._src.lax.control_flow import map as lax_map
91 outvals = lax_map(_batch_fun, *batched_args)
92 return tuple(outvals), (0,) * len(outvals)
93
94
95 batching.primitive_batchers[pure_callback_p] = pure_callback_batching_rule
96
97
98 def pure_callback_lowering(ctx, *args, callback, **params):
99
100 if ctx.module_context.platform == "TPU" and jaxlib.version < (0, 3, 15):
101 raise NotImplementedError("Pure callbacks on TPU not supported. "
102 "Please upgrade to a jaxlib >= 0.3.15.")
103
104 def _callback(*flat_args):
105 return tuple(pure_callback_p.impl(*flat_args, callback=callback, **params))
106
107 result, _, keepalive = mlir.emit_python_callback(
108 ctx, _callback, None, list(args), ctx.avals_in, ctx.avals_out, False,
109 sharding=None)
110 ctx.module_context.add_keepalive(keepalive)
111 return result
112
113 mlir.register_lowering(pure_callback_p, pure_callback_lowering)
114
115 def _check_shape_dtype(shape_dtype):
116 dt = np.dtype(shape_dtype.dtype)
117 if dtypes.canonicalize_dtype(dt) != dt:
118 raise ValueError(
119 "Cannot return 64-bit values when `jax_enable_x64` is disabled")
120
121 def pure_callback(callback: Callable[..., Any], result_shape_dtypes: Any,
122 *args: Any, vectorized: bool = False, **kwargs: Any):
123 def _flat_callback(*flat_args):
124 args, kwargs = tree_util.tree_unflatten(in_tree, flat_args)
125 return tree_util.tree_leaves(callback(*args, **kwargs))
126
127 flat_args, in_tree = tree_util.tree_flatten((args, kwargs))
128 tree_util.tree_map(_check_shape_dtype, result_shape_dtypes)
129 result_avals = tree_util.tree_map(
130 lambda x: core.ShapedArray(x.shape, x.dtype), result_shape_dtypes)
131 flat_result_avals, out_tree = tree_util.tree_flatten(result_avals)
132 out_flat = pure_callback_p.bind(
133 *flat_args, callback=_flat_callback,
134 result_avals=tuple(flat_result_avals), vectorized=vectorized)
135 return tree_util.tree_unflatten(out_tree, out_flat)
136
[end of jax/_src/callback.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jax/_src/callback.py b/jax/_src/callback.py
--- a/jax/_src/callback.py
+++ b/jax/_src/callback.py
@@ -14,6 +14,8 @@
"""Module for JAX callbacks."""
from __future__ import annotations
+import functools
+
from typing import Any, Callable, Sequence
from jax import core
@@ -21,6 +23,7 @@
from jax._src import dtypes
from jax._src import lib as jaxlib
from jax._src import util
+from jax._src import dispatch
from jax.interpreters import ad
from jax.interpreters import batching
from jax.interpreters import mlir
@@ -33,11 +36,12 @@
map, unsafe_map = util.safe_map, map
-@pure_callback_p.def_impl
def pure_callback_impl(*args, result_avals, callback: Callable[..., Any],
vectorized: bool):
del vectorized, result_avals
return callback(*args)
+pure_callback_p.def_impl(functools.partial(dispatch.apply_primitive,
+ pure_callback_p))
@pure_callback_p.def_abstract_eval
@@ -102,7 +106,7 @@
"Please upgrade to a jaxlib >= 0.3.15.")
def _callback(*flat_args):
- return tuple(pure_callback_p.impl(*flat_args, callback=callback, **params))
+ return tuple(pure_callback_impl(*flat_args, callback=callback, **params))
result, _, keepalive = mlir.emit_python_callback(
ctx, _callback, None, list(args), ctx.avals_in, ctx.avals_out, False,
|
{"golden_diff": "diff --git a/jax/_src/callback.py b/jax/_src/callback.py\n--- a/jax/_src/callback.py\n+++ b/jax/_src/callback.py\n@@ -14,6 +14,8 @@\n \"\"\"Module for JAX callbacks.\"\"\"\n from __future__ import annotations\n \n+import functools\n+\n from typing import Any, Callable, Sequence\n \n from jax import core\n@@ -21,6 +23,7 @@\n from jax._src import dtypes\n from jax._src import lib as jaxlib\n from jax._src import util\n+from jax._src import dispatch\n from jax.interpreters import ad\n from jax.interpreters import batching\n from jax.interpreters import mlir\n@@ -33,11 +36,12 @@\n map, unsafe_map = util.safe_map, map\n \n \n-@pure_callback_p.def_impl\n def pure_callback_impl(*args, result_avals, callback: Callable[..., Any],\n vectorized: bool):\n del vectorized, result_avals\n return callback(*args)\n+pure_callback_p.def_impl(functools.partial(dispatch.apply_primitive,\n+ pure_callback_p))\n \n \n @pure_callback_p.def_abstract_eval\n@@ -102,7 +106,7 @@\n \"Please upgrade to a jaxlib >= 0.3.15.\")\n \n def _callback(*flat_args):\n- return tuple(pure_callback_p.impl(*flat_args, callback=callback, **params))\n+ return tuple(pure_callback_impl(*flat_args, callback=callback, **params))\n \n result, _, keepalive = mlir.emit_python_callback(\n ctx, _callback, None, list(args), ctx.avals_in, ctx.avals_out, False,\n", "issue": "pure_callback passes jax.DeviceArray to the callback on CPU when not jitted\n### Description\r\n\r\nAs title. \r\nThis only happens when the callback is executed outside of jit boundaries.\r\n\r\nThis is different from the documented behaviour.\r\n\r\n```python\r\n\u279c python\r\nPython 3.10.6 (main, Aug 23 2022, 11:35:18) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import jax\r\n>>> def test(x):\r\n... print(type(x))\r\n... return x\r\n...\r\n>>> def f(x):\r\n... 
return jax.pure_callback(test, x, x)\r\n...\r\n>>> x= jax.numpy.ones(3)\r\n>>> f(x)\r\n<class 'jaxlib.xla_extension.DeviceArray'>\r\nDeviceArray([1., 1., 1.], dtype=float32)\r\n>>> jax.jit(f)(x)\r\n<class 'numpy.ndarray'>\r\nDeviceArray([1., 1., 1.], dtype=float32)\r\n```\r\n\r\n### What jax/jaxlib version are you using?\r\n\r\njax 0.3.17\r\n\r\n### Which accelerator(s) are you using?\r\n\r\nCPU\r\n\r\n### Additional System Info\r\n\r\nMacOs\n", "before_files": [{"content": "# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Module for JAX callbacks.\"\"\"\nfrom __future__ import annotations\n\nfrom typing import Any, Callable, Sequence\n\nfrom jax import core\nfrom jax import tree_util\nfrom jax._src import dtypes\nfrom jax._src import lib as jaxlib\nfrom jax._src import util\nfrom jax.interpreters import ad\nfrom jax.interpreters import batching\nfrom jax.interpreters import mlir\nimport numpy as np\n\n# `pure_callback_p` is the main primitive for staging out Python pure callbacks.\npure_callback_p = core.Primitive(\"pure_callback\")\npure_callback_p.multiple_results = True\n\nmap, unsafe_map = util.safe_map, map\n\n\n@pure_callback_p.def_impl\ndef pure_callback_impl(*args, result_avals, callback: Callable[..., Any],\n vectorized: bool):\n del vectorized, result_avals\n return callback(*args)\n\n\n@pure_callback_p.def_abstract_eval\ndef pure_callback_abstract_eval(*avals, callback: Callable[..., Any],\n result_avals, vectorized: bool):\n del avals, callback, vectorized\n return result_avals\n\n\ndef pure_callback_jvp_rule(*args, **kwargs):\n del args, kwargs\n raise ValueError(\n \"Pure callbacks do not support JVP. \"\n \"Please use `jax.custom_jvp` to use callbacks while taking gradients.\")\n\n\nad.primitive_jvps[pure_callback_p] = pure_callback_jvp_rule\n\n\ndef pure_callback_transpose_rule(*args, **kwargs):\n del args, kwargs\n raise ValueError(\n \"Pure callbacks do not support transpose. 
\"\n \"Please use `jax.custom_vjp` to use callbacks while taking gradients.\")\n\nad.primitive_transposes[pure_callback_p] = pure_callback_transpose_rule\n\n\ndef pure_callback_batching_rule(args, dims, *, callback, vectorized: bool,\n result_avals: Sequence[core.ShapedArray]):\n axis_size = next(a.shape[0] for a, d in zip(args, dims)\n if d is not batching.not_mapped)\n new_args = [arg if dim is batching.not_mapped else\n batching.moveaxis(arg, dim, 0) for arg, dim in zip(args, dims)]\n if vectorized:\n result_avals = tuple(\n core.unmapped_aval(axis_size, core.no_axis_name, 0, aval) # type: ignore\n for aval in result_avals)\n outvals = pure_callback_p.bind(\n *new_args, callback=callback, vectorized=vectorized,\n result_avals=result_avals)\n else:\n is_batched = [d is not batching.not_mapped for d in dims]\n unbatched_args, batched_args = util.partition_list(is_batched, new_args)\n def _batch_fun(*batched_args):\n merged_args = util.merge_lists(is_batched, unbatched_args, batched_args)\n return pure_callback_p.bind(\n *merged_args, callback=callback, result_avals=result_avals,\n vectorized=vectorized)\n from jax._src.lax.control_flow import map as lax_map\n outvals = lax_map(_batch_fun, *batched_args)\n return tuple(outvals), (0,) * len(outvals)\n\n\nbatching.primitive_batchers[pure_callback_p] = pure_callback_batching_rule\n\n\ndef pure_callback_lowering(ctx, *args, callback, **params):\n\n if ctx.module_context.platform == \"TPU\" and jaxlib.version < (0, 3, 15):\n raise NotImplementedError(\"Pure callbacks on TPU not supported. \"\n \"Please upgrade to a jaxlib >= 0.3.15.\")\n\n def _callback(*flat_args):\n return tuple(pure_callback_p.impl(*flat_args, callback=callback, **params))\n\n result, _, keepalive = mlir.emit_python_callback(\n ctx, _callback, None, list(args), ctx.avals_in, ctx.avals_out, False,\n sharding=None)\n ctx.module_context.add_keepalive(keepalive)\n return result\n\nmlir.register_lowering(pure_callback_p, pure_callback_lowering)\n\ndef _check_shape_dtype(shape_dtype):\n dt = np.dtype(shape_dtype.dtype)\n if dtypes.canonicalize_dtype(dt) != dt:\n raise ValueError(\n \"Cannot return 64-bit values when `jax_enable_x64` is disabled\")\n\ndef pure_callback(callback: Callable[..., Any], result_shape_dtypes: Any,\n *args: Any, vectorized: bool = False, **kwargs: Any):\n def _flat_callback(*flat_args):\n args, kwargs = tree_util.tree_unflatten(in_tree, flat_args)\n return tree_util.tree_leaves(callback(*args, **kwargs))\n\n flat_args, in_tree = tree_util.tree_flatten((args, kwargs))\n tree_util.tree_map(_check_shape_dtype, result_shape_dtypes)\n result_avals = tree_util.tree_map(\n lambda x: core.ShapedArray(x.shape, x.dtype), result_shape_dtypes)\n flat_result_avals, out_tree = tree_util.tree_flatten(result_avals)\n out_flat = pure_callback_p.bind(\n *flat_args, callback=_flat_callback,\n result_avals=tuple(flat_result_avals), vectorized=vectorized)\n return tree_util.tree_unflatten(out_tree, out_flat)\n", "path": "jax/_src/callback.py"}]}
| 2,418 | 378 |
gh_patches_debug_48322
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-1564
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Legislative matter search
When filling in some data in the legislative matter search form and clicking search, the following happens:
in Chrome the button gets disabled and the mouse pointer turns into a "forbidden" sign;
in Firefox the same happens, but results are shown
</issue>
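One way to keep double-submit protection without swallowing the click — sketched here with django-crispy-forms, under the assumption that disabling the button inside its own `onclick` is what cancels the submit in some browsers — is to submit the form explicitly and only then disable the button:

```python
from crispy_forms.bootstrap import FormActions
from crispy_forms.layout import Submit


def form_actions(save_label="Salvar"):
    # Submit the form programmatically first, then disable the button so a
    # second click cannot resubmit; disabling alone can cancel the submission.
    return FormActions(
        Submit(
            "salvar",
            save_label,
            css_class="pull-right",
            onclick="this.form.submit();this.disabled=true;",
        )
    )
```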
<code>
[start of sapl/crispy_layout_mixin.py]
1 from math import ceil
2
3 import rtyaml
4 from crispy_forms.bootstrap import FormActions
5 from crispy_forms.helper import FormHelper
6 from crispy_forms.layout import HTML, Div, Fieldset, Layout, Submit
7 from django import template
8 from django.core.urlresolvers import reverse
9 from django.utils import formats
10 from django.utils.translation import ugettext as _
11
12
13 def heads_and_tails(list_of_lists):
14 for alist in list_of_lists:
15 yield alist[0], alist[1:]
16
17
18 def to_column(name_span):
19 fieldname, span = name_span
20 return Div(fieldname, css_class='col-md-%d' % span)
21
22
23 def to_row(names_spans):
24 return Div(*map(to_column, names_spans), css_class='row-fluid')
25
26
27 def to_fieldsets(fields):
28 for field in fields:
29 if isinstance(field, list):
30 legend, row_specs = field[0], field[1:]
31 rows = [to_row(name_span_list) for name_span_list in row_specs]
32 yield Fieldset(legend, *rows)
33 else:
34 yield field
35
36
37 def form_actions(more=[], save_label=_('Salvar')):
38 return FormActions(
39 Submit('salvar', save_label, css_class='pull-right',
40 # para impedir resubmissão do form
41 onclick='this.disabled=true;'),
42 *more)
43
44
45 class SaplFormLayout(Layout):
46
47 def __init__(self, *fields, cancel_label=_('Cancelar'),
48 save_label=_('Salvar'), actions=None):
49
50 buttons = actions
51 if not buttons:
52 buttons = form_actions(save_label=save_label, more=[
53 HTML('<a href="{{ view.cancel_url }}"'
54 ' class="btn btn-inverse">%s</a>' % cancel_label)
55 if cancel_label else None])
56
57 _fields = list(to_fieldsets(fields))
58 if buttons:
59 _fields += [to_row([(buttons, 12)])]
60 super(SaplFormLayout, self).__init__(*_fields)
61
62
63 def get_field_display(obj, fieldname):
64 field = ''
65 try:
66 field = obj._meta.get_field(fieldname)
67 except Exception as e:
68 """ nos casos que o fieldname não é um field_model,
69 ele pode ser um aggregate, annotate, um property, um manager,
70 ou mesmo uma método no model.
71 """
72 value = getattr(obj, fieldname)
73 try:
74 verbose_name = value.model._meta.verbose_name
75 except AttributeError:
76 verbose_name = ''
77
78 else:
79 verbose_name = str(field.verbose_name)\
80 if hasattr(field, 'verbose_name') else ''
81
82 if hasattr(field, 'choices') and field.choices:
83 value = getattr(obj, 'get_%s_display' % fieldname)()
84 else:
85 value = getattr(obj, fieldname)
86
87 str_type_from_value = str(type(value))
88 str_type_from_field = str(type(field))
89
90 if value is None:
91 display = ''
92 elif 'date' in str_type_from_value:
93 display = formats.date_format(value, "SHORT_DATE_FORMAT")
94 elif 'bool' in str_type_from_value:
95 display = _('Sim') if value else _('Não')
96 elif 'ImageFieldFile' in str(type(value)):
97 if value:
98 display = '<img src="{}" />'.format(value.url)
99 else:
100 display = ''
101 elif 'FieldFile' in str_type_from_value:
102 if value:
103 display = '<a href="{}">{}</a>'.format(
104 value.url,
105 value.name.split('/')[-1:][0])
106 else:
107 display = ''
108 elif 'ManyRelatedManager' in str_type_from_value\
109 or 'RelatedManager' in str_type_from_value\
110 or 'GenericRelatedObjectManager' in str_type_from_value:
111 display = '<ul>'
112 for v in value.all():
113 display += '<li>%s</li>' % str(v)
114 display += '</ul>'
115 if not verbose_name:
116 if hasattr(field, 'related_model'):
117 verbose_name = str(
118 field.related_model._meta.verbose_name_plural)
119 elif hasattr(field, 'model'):
120 verbose_name = str(field.model._meta.verbose_name_plural)
121 elif 'GenericForeignKey' in str_type_from_field:
122 display = '<a href="{}">{}</a>'.format(
123 reverse(
124 '%s:%s_detail' % (
125 value._meta.app_config.name, obj.content_type.model),
126 args=(value.id,)),
127 value)
128 else:
129 display = str(value)
130 return verbose_name, display
131
132
133 class CrispyLayoutFormMixin:
134
135 @property
136 def layout_key(self):
137 if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key'):
138 return super(CrispyLayoutFormMixin, self).layout_key
139 else:
140 return self.model.__name__
141
142 @property
143 def layout_key_set(self):
144 if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key_set'):
145 return super(CrispyLayoutFormMixin, self).layout_key_set
146 else:
147 obj = self.crud if hasattr(self, 'crud') else self
148 return getattr(obj.model,
149 obj.model_set).field.model.__name__
150
151 def get_layout(self):
152 yaml_layout = '%s/layouts.yaml' % self.model._meta.app_config.label
153 return read_layout_from_yaml(yaml_layout, self.layout_key)
154
155 def get_layout_set(self):
156 obj = self.crud if hasattr(self, 'crud') else self
157 yaml_layout = '%s/layouts.yaml' % getattr(
158 obj.model, obj.model_set).field.model._meta.app_config.label
159 return read_layout_from_yaml(yaml_layout, self.layout_key_set)
160
161 @property
162 def fields(self):
163 if hasattr(self, 'form_class') and self.form_class:
164 return None
165 else:
166 '''Returns all fields in the layout'''
167 return [fieldname for legend_rows in self.get_layout()
168 for row in legend_rows[1:]
169 for fieldname, span in row]
170
171 def get_form(self, form_class=None):
172 try:
173 form = super(CrispyLayoutFormMixin, self).get_form(form_class)
174 except AttributeError:
175 # simply return None if there is no get_form on super
176 pass
177 else:
178 if self.layout_key:
179 form.helper = FormHelper()
180 form.helper.layout = SaplFormLayout(*self.get_layout())
181 return form
182
183 @property
184 def list_field_names(self):
185 '''The list of field names to display on table
186
187 This base implementation returns the field names
188 in the first fieldset of the layout.
189 '''
190 obj = self.crud if hasattr(self, 'crud') else self
191 if hasattr(obj, 'list_field_names') and obj.list_field_names:
192 return obj.list_field_names
193 rows = self.get_layout()[0][1:]
194 return [fieldname for row in rows for fieldname, __ in row]
195
196 @property
197 def list_field_names_set(self):
198 '''The list of field names to display on table
199
200 This base implementation returns the field names
201 in the first fieldset of the layout.
202 '''
203 rows = self.get_layout_set()[0][1:]
204 return [fieldname for row in rows for fieldname, __ in row]
205
206 def get_column(self, fieldname, span):
207 obj = self.get_object()
208 verbose_name, text = get_field_display(obj, fieldname)
209 return {
210 'id': fieldname,
211 'span': span,
212 'verbose_name': verbose_name,
213 'text': text,
214 }
215
216 @property
217 def layout_display(self):
218
219 return [
220 {'legend': legend,
221 'rows': [[self.get_column(fieldname, span)
222 for fieldname, span in row]
223 for row in rows]
224 } for legend, rows in heads_and_tails(self.get_layout())]
225
226
227 def read_yaml_from_file(yaml_layout):
228 # TODO cache this at application level
229 t = template.loader.get_template(yaml_layout)
230 # aqui é importante converter para str pois, dependendo do ambiente,
231 # o rtyaml pode usar yaml.CSafeLoader, que exige str ou stream
232 rendered = str(t.render())
233 return rtyaml.load(rendered)
234
235
236 def read_layout_from_yaml(yaml_layout, key):
237 # TODO cache this at application level
238 yaml = read_yaml_from_file(yaml_layout)
239 base = yaml[key]
240
241 def line_to_namespans(line):
242 split = [cell.split(':') for cell in line.split()]
243 namespans = [[s[0], int(s[1]) if len(s) > 1 else 0] for s in split]
244 remaining = 12 - sum(s for n, s in namespans)
245 nondefined = [ns for ns in namespans if not ns[1]]
246 while nondefined:
247 span = ceil(remaining / len(nondefined))
248 namespan = nondefined.pop(0)
249 namespan[1] = span
250 remaining = remaining - span
251 return list(map(tuple, namespans))
252
253 return [[legend] + [line_to_namespans(l) for l in lines]
254 for legend, lines in base.items()]
255
[end of sapl/crispy_layout_mixin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sapl/crispy_layout_mixin.py b/sapl/crispy_layout_mixin.py
--- a/sapl/crispy_layout_mixin.py
+++ b/sapl/crispy_layout_mixin.py
@@ -38,7 +38,7 @@
return FormActions(
Submit('salvar', save_label, css_class='pull-right',
# para impedir resubmissão do form
- onclick='this.disabled=true;'),
+ onclick='this.form.submit();this.disabled=true;'),
*more)
|
{"golden_diff": "diff --git a/sapl/crispy_layout_mixin.py b/sapl/crispy_layout_mixin.py\n--- a/sapl/crispy_layout_mixin.py\n+++ b/sapl/crispy_layout_mixin.py\n@@ -38,7 +38,7 @@\n return FormActions(\n Submit('salvar', save_label, css_class='pull-right',\n # para impedir resubmiss\u00e3o do form\n- onclick='this.disabled=true;'),\n+ onclick='this.form.submit();this.disabled=true;'),\n *more)\n", "issue": "Pesquisa mat\u00e9rias legislativas\nAo preencher alguns dados no form de pesquisa de mat\u00e9ria legislativa e clicar em pesquisar, ocorre o seguinte:\r\nNo chrome o bot\u00e3o desabilita e a seta do mouse vira um sinal de proibido\r\nno firefox ocorre o mesmo, porem mostra resultados\n", "before_files": [{"content": "from math import ceil\n\nimport rtyaml\nfrom crispy_forms.bootstrap import FormActions\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import HTML, Div, Fieldset, Layout, Submit\nfrom django import template\nfrom django.core.urlresolvers import reverse\nfrom django.utils import formats\nfrom django.utils.translation import ugettext as _\n\n\ndef heads_and_tails(list_of_lists):\n for alist in list_of_lists:\n yield alist[0], alist[1:]\n\n\ndef to_column(name_span):\n fieldname, span = name_span\n return Div(fieldname, css_class='col-md-%d' % span)\n\n\ndef to_row(names_spans):\n return Div(*map(to_column, names_spans), css_class='row-fluid')\n\n\ndef to_fieldsets(fields):\n for field in fields:\n if isinstance(field, list):\n legend, row_specs = field[0], field[1:]\n rows = [to_row(name_span_list) for name_span_list in row_specs]\n yield Fieldset(legend, *rows)\n else:\n yield field\n\n\ndef form_actions(more=[], save_label=_('Salvar')):\n return FormActions(\n Submit('salvar', save_label, css_class='pull-right',\n # para impedir resubmiss\u00e3o do form\n onclick='this.disabled=true;'),\n *more)\n\n\nclass SaplFormLayout(Layout):\n\n def __init__(self, *fields, cancel_label=_('Cancelar'),\n save_label=_('Salvar'), actions=None):\n\n buttons = actions\n if not buttons:\n buttons = form_actions(save_label=save_label, more=[\n HTML('<a href=\"{{ view.cancel_url }}\"'\n ' class=\"btn btn-inverse\">%s</a>' % cancel_label)\n if cancel_label else None])\n\n _fields = list(to_fieldsets(fields))\n if buttons:\n _fields += [to_row([(buttons, 12)])]\n super(SaplFormLayout, self).__init__(*_fields)\n\n\ndef get_field_display(obj, fieldname):\n field = ''\n try:\n field = obj._meta.get_field(fieldname)\n except Exception as e:\n \"\"\" nos casos que o fieldname n\u00e3o \u00e9 um field_model,\n ele pode ser um aggregate, annotate, um property, um manager,\n ou mesmo uma m\u00e9todo no model.\n \"\"\"\n value = getattr(obj, fieldname)\n try:\n verbose_name = value.model._meta.verbose_name\n except AttributeError:\n verbose_name = ''\n\n else:\n verbose_name = str(field.verbose_name)\\\n if hasattr(field, 'verbose_name') else ''\n\n if hasattr(field, 'choices') and field.choices:\n value = getattr(obj, 'get_%s_display' % fieldname)()\n else:\n value = getattr(obj, fieldname)\n\n str_type_from_value = str(type(value))\n str_type_from_field = str(type(field))\n\n if value is None:\n display = ''\n elif 'date' in str_type_from_value:\n display = formats.date_format(value, \"SHORT_DATE_FORMAT\")\n elif 'bool' in str_type_from_value:\n display = _('Sim') if value else _('N\u00e3o')\n elif 'ImageFieldFile' in str(type(value)):\n if value:\n display = '<img src=\"{}\" />'.format(value.url)\n else:\n display = ''\n elif 'FieldFile' in str_type_from_value:\n if value:\n 
display = '<a href=\"{}\">{}</a>'.format(\n value.url,\n value.name.split('/')[-1:][0])\n else:\n display = ''\n elif 'ManyRelatedManager' in str_type_from_value\\\n or 'RelatedManager' in str_type_from_value\\\n or 'GenericRelatedObjectManager' in str_type_from_value:\n display = '<ul>'\n for v in value.all():\n display += '<li>%s</li>' % str(v)\n display += '</ul>'\n if not verbose_name:\n if hasattr(field, 'related_model'):\n verbose_name = str(\n field.related_model._meta.verbose_name_plural)\n elif hasattr(field, 'model'):\n verbose_name = str(field.model._meta.verbose_name_plural)\n elif 'GenericForeignKey' in str_type_from_field:\n display = '<a href=\"{}\">{}</a>'.format(\n reverse(\n '%s:%s_detail' % (\n value._meta.app_config.name, obj.content_type.model),\n args=(value.id,)),\n value)\n else:\n display = str(value)\n return verbose_name, display\n\n\nclass CrispyLayoutFormMixin:\n\n @property\n def layout_key(self):\n if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key'):\n return super(CrispyLayoutFormMixin, self).layout_key\n else:\n return self.model.__name__\n\n @property\n def layout_key_set(self):\n if hasattr(super(CrispyLayoutFormMixin, self), 'layout_key_set'):\n return super(CrispyLayoutFormMixin, self).layout_key_set\n else:\n obj = self.crud if hasattr(self, 'crud') else self\n return getattr(obj.model,\n obj.model_set).field.model.__name__\n\n def get_layout(self):\n yaml_layout = '%s/layouts.yaml' % self.model._meta.app_config.label\n return read_layout_from_yaml(yaml_layout, self.layout_key)\n\n def get_layout_set(self):\n obj = self.crud if hasattr(self, 'crud') else self\n yaml_layout = '%s/layouts.yaml' % getattr(\n obj.model, obj.model_set).field.model._meta.app_config.label\n return read_layout_from_yaml(yaml_layout, self.layout_key_set)\n\n @property\n def fields(self):\n if hasattr(self, 'form_class') and self.form_class:\n return None\n else:\n '''Returns all fields in the layout'''\n return [fieldname for legend_rows in self.get_layout()\n for row in legend_rows[1:]\n for fieldname, span in row]\n\n def get_form(self, form_class=None):\n try:\n form = super(CrispyLayoutFormMixin, self).get_form(form_class)\n except AttributeError:\n # simply return None if there is no get_form on super\n pass\n else:\n if self.layout_key:\n form.helper = FormHelper()\n form.helper.layout = SaplFormLayout(*self.get_layout())\n return form\n\n @property\n def list_field_names(self):\n '''The list of field names to display on table\n\n This base implementation returns the field names\n in the first fieldset of the layout.\n '''\n obj = self.crud if hasattr(self, 'crud') else self\n if hasattr(obj, 'list_field_names') and obj.list_field_names:\n return obj.list_field_names\n rows = self.get_layout()[0][1:]\n return [fieldname for row in rows for fieldname, __ in row]\n\n @property\n def list_field_names_set(self):\n '''The list of field names to display on table\n\n This base implementation returns the field names\n in the first fieldset of the layout.\n '''\n rows = self.get_layout_set()[0][1:]\n return [fieldname for row in rows for fieldname, __ in row]\n\n def get_column(self, fieldname, span):\n obj = self.get_object()\n verbose_name, text = get_field_display(obj, fieldname)\n return {\n 'id': fieldname,\n 'span': span,\n 'verbose_name': verbose_name,\n 'text': text,\n }\n\n @property\n def layout_display(self):\n\n return [\n {'legend': legend,\n 'rows': [[self.get_column(fieldname, span)\n for fieldname, span in row]\n for row in rows]\n } for legend, rows in 
heads_and_tails(self.get_layout())]\n\n\ndef read_yaml_from_file(yaml_layout):\n # TODO cache this at application level\n t = template.loader.get_template(yaml_layout)\n # aqui \u00e9 importante converter para str pois, dependendo do ambiente,\n # o rtyaml pode usar yaml.CSafeLoader, que exige str ou stream\n rendered = str(t.render())\n return rtyaml.load(rendered)\n\n\ndef read_layout_from_yaml(yaml_layout, key):\n # TODO cache this at application level\n yaml = read_yaml_from_file(yaml_layout)\n base = yaml[key]\n\n def line_to_namespans(line):\n split = [cell.split(':') for cell in line.split()]\n namespans = [[s[0], int(s[1]) if len(s) > 1 else 0] for s in split]\n remaining = 12 - sum(s for n, s in namespans)\n nondefined = [ns for ns in namespans if not ns[1]]\n while nondefined:\n span = ceil(remaining / len(nondefined))\n namespan = nondefined.pop(0)\n namespan[1] = span\n remaining = remaining - span\n return list(map(tuple, namespans))\n\n return [[legend] + [line_to_namespans(l) for l in lines]\n for legend, lines in base.items()]\n", "path": "sapl/crispy_layout_mixin.py"}]}
| 3,255 | 117 |
gh_patches_debug_37488
|
rasdani/github-patches
|
git_diff
|
OCA__server-tools-478
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
base_name_search_improved - Portal Access Management
@dreispt 
This occurs when clicking on a customer --> More --> Portal Access Management.
> Traceback (most recent call last):
> File "/opt/odoov1/OCB/openerp/http.py", line 539, in _handle_exception
> return super(JsonRequest, self)._handle_exception(exception)
> File "/opt/odoov1/OCB/openerp/http.py", line 576, in dispatch
> result = self._call_function(*_self.params)
> File "/opt/odoov1/OCB/openerp/http.py", line 312, in _call_function
> return checked_call(self.db, *args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/service/model.py", line 118, in wrapper
> return f(dbname, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/http.py", line 309, in checked_call
> return self.endpoint(_a, *_kw)
> File "/opt/odoov1/OCB/openerp/http.py", line 805, in __call__
> return self.method(_args, *_kw)
> File "/opt/odoov1/OCB/openerp/http.py", line 405, in response_wrap
> response = f(_args, *_kw)
> File "/opt/odoov1/OCB/addons/web/controllers/main.py", line 944, in call_kw
> return self._call_kw(model, method, args, kwargs)
> File "/opt/odoov1/OCB/addons/web/controllers/main.py", line 936, in _call_kw
> return getattr(request.registry.get(model), method)(request.cr, request.uid, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/api.py", line 268, in wrapper
> return old_api(self, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/api.py", line 372, in old_api
> result = method(recs, _args, *_kwargs)
> File "/opt/odoov1/extra-addons/base_name_search_improved/models/ir_model.py", line 61, in name_search
> res = _extend_name_results(self, domain, res, limit)
> File "/opt/odoov1/extra-addons/base_name_search_improved/models/ir_model.py", line 27, in _extend_name_results
> recs = self.search(domain, limit=limit - result_count)
> File "/opt/odoov1/OCB/openerp/api.py", line 266, in wrapper
> return new_api(self, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/api.py", line 508, in new_api
> result = method(self._model, cr, uid, _args, *_old_kwargs)
> File "/opt/odoov1/OCB/openerp/addons/base/res/res_users.py", line 121, in search
> return super(res_groups, self).search(cr, uid, args, offset, limit, order, context, count)
> File "/opt/odoov1/OCB/openerp/api.py", line 268, in wrapper
> return old_api(self, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/models.py", line 1646, in search
> return self._search(cr, user, args, offset=offset, limit=limit, order=order, context=context, count=count)
> File "/opt/odoov1/OCB/openerp/api.py", line 268, in wrapper
> return old_api(self, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/models.py", line 4673, in _search
> query = self._where_calc(cr, user, args, context=context)
> File "/opt/odoov1/OCB/openerp/api.py", line 268, in wrapper
> return old_api(self, _args, *_kwargs)
> File "/opt/odoov1/OCB/openerp/models.py", line 4484, in _where_calc
> e = expression.expression(cr, user, domain, self, context)
> File "/opt/odoov1/OCB/openerp/osv/expression.py", line 662, in __init__
> self.parse(cr, uid, context=context)
> File "/opt/odoov1/OCB/openerp/osv/expression.py", line 921, in parse
> fct_domain = column.search(cr, uid, model, left, [leaf.leaf], context=context)
> File "/opt/odoov1/OCB/openerp/osv/fields.py", line 1423, in search
> return self._fnct_search(obj, cr, uid, obj, name, args, context=context)
> File "/opt/odoov1/OCB/openerp/addons/base/res/res_users.py", line 80, in _search_group
> group_name = values.pop().strip()
> IndexError: pop from empty list
</issue>
<code>
[start of base_name_search_improved/models/ir_model.py]
1 # -*- coding: utf-8 -*-
2 # © 2016 Daniel Reis
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
4
5 from openerp import models, fields, api
6 from openerp import SUPERUSER_ID
7 from openerp import tools
8
9
10 # Extended name search is only used on some operators
11 ALLOWED_OPS = set(['ilike', 'like'])
12
13
14 @tools.ormcache(skiparg=0)
15 def _get_rec_names(self):
16 model = self.env['ir.model'].search(
17 [('model', '=', str(self._model))])
18 rec_name = [self._rec_name] or []
19 other_names = model.name_search_ids.mapped('name')
20 return rec_name + other_names
21
22
23 def _extend_name_results(self, domain, results, limit):
24 result_count = len(results)
25 if result_count < limit:
26 domain += [('id', 'not in', [x[0] for x in results])]
27 recs = self.search(domain, limit=limit - result_count)
28 results.extend(recs.name_get())
29 return results
30
31
32 class ModelExtended(models.Model):
33 _inherit = 'ir.model'
34
35 name_search_ids = fields.Many2many(
36 'ir.model.fields',
37 string='Name Search Fields')
38
39 def _register_hook(self, cr, ids=None):
40
41 def make_name_search():
42
43 @api.model
44 def name_search(self, name='', args=None,
45 operator='ilike', limit=100):
46 # Perform standard name search
47 res = name_search.origin(
48 self, name=name, args=args, operator=operator, limit=limit)
49 enabled = self.env.context.get('name_search_extended', True)
50 # Perform extended name search
51 if enabled and operator in ALLOWED_OPS:
52 # Support a list of fields to search on
53 all_names = _get_rec_names(self)
54 # Try regular search on each additional search field
55 for rec_name in all_names[1:]:
56 domain = [(rec_name, operator, name)]
57 res = _extend_name_results(self, domain, res, limit)
58 # Try ordered word search on each of the search fields
59 for rec_name in all_names:
60 domain = [(rec_name, operator, name.replace(' ', '%'))]
61 res = _extend_name_results(self, domain, res, limit)
62 # Try unordered word search on each of the search fields
63 for rec_name in all_names:
64 domain = [(rec_name, operator, x)
65 for x in name.split() if x]
66 res = _extend_name_results(self, domain, res, limit)
67 return res
68 return name_search
69
70 if ids is None:
71 ids = self.search(cr, SUPERUSER_ID, [])
72 for model in self.browse(cr, SUPERUSER_ID, ids):
73 Model = self.pool.get(model.model)
74 if Model:
75 Model._patch_method('name_search', make_name_search())
76 return super(ModelExtended, self)._register_hook(cr)
77
[end of base_name_search_improved/models/ir_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/base_name_search_improved/models/ir_model.py b/base_name_search_improved/models/ir_model.py
--- a/base_name_search_improved/models/ir_model.py
+++ b/base_name_search_improved/models/ir_model.py
@@ -13,6 +13,7 @@
@tools.ormcache(skiparg=0)
def _get_rec_names(self):
+ "List of fields to search into"
model = self.env['ir.model'].search(
[('model', '=', str(self._model))])
rec_name = [self._rec_name] or []
@@ -48,22 +49,28 @@
self, name=name, args=args, operator=operator, limit=limit)
enabled = self.env.context.get('name_search_extended', True)
# Perform extended name search
- if enabled and operator in ALLOWED_OPS:
+ # Note: Empty name causes error on
+ # Customer->More->Portal Access Management
+ if name and enabled and operator in ALLOWED_OPS:
# Support a list of fields to search on
all_names = _get_rec_names(self)
+ base_domain = args or []
# Try regular search on each additional search field
for rec_name in all_names[1:]:
domain = [(rec_name, operator, name)]
- res = _extend_name_results(self, domain, res, limit)
+ res = _extend_name_results(
+ self, base_domain + domain, res, limit)
# Try ordered word search on each of the search fields
for rec_name in all_names:
domain = [(rec_name, operator, name.replace(' ', '%'))]
- res = _extend_name_results(self, domain, res, limit)
+ res = _extend_name_results(
+ self, base_domain + domain, res, limit)
# Try unordered word search on each of the search fields
for rec_name in all_names:
domain = [(rec_name, operator, x)
for x in name.split() if x]
- res = _extend_name_results(self, domain, res, limit)
+ res = _extend_name_results(
+ self, base_domain + domain, res, limit)
return res
return name_search
|
{"golden_diff": "diff --git a/base_name_search_improved/models/ir_model.py b/base_name_search_improved/models/ir_model.py\n--- a/base_name_search_improved/models/ir_model.py\n+++ b/base_name_search_improved/models/ir_model.py\n@@ -13,6 +13,7 @@\n \n @tools.ormcache(skiparg=0)\n def _get_rec_names(self):\n+ \"List of fields to search into\"\n model = self.env['ir.model'].search(\n [('model', '=', str(self._model))])\n rec_name = [self._rec_name] or []\n@@ -48,22 +49,28 @@\n self, name=name, args=args, operator=operator, limit=limit)\n enabled = self.env.context.get('name_search_extended', True)\n # Perform extended name search\n- if enabled and operator in ALLOWED_OPS:\n+ # Note: Empty name causes error on\n+ # Customer->More->Portal Access Management\n+ if name and enabled and operator in ALLOWED_OPS:\n # Support a list of fields to search on\n all_names = _get_rec_names(self)\n+ base_domain = args or []\n # Try regular search on each additional search field\n for rec_name in all_names[1:]:\n domain = [(rec_name, operator, name)]\n- res = _extend_name_results(self, domain, res, limit)\n+ res = _extend_name_results(\n+ self, base_domain + domain, res, limit)\n # Try ordered word search on each of the search fields\n for rec_name in all_names:\n domain = [(rec_name, operator, name.replace(' ', '%'))]\n- res = _extend_name_results(self, domain, res, limit)\n+ res = _extend_name_results(\n+ self, base_domain + domain, res, limit)\n # Try unordered word search on each of the search fields\n for rec_name in all_names:\n domain = [(rec_name, operator, x)\n for x in name.split() if x]\n- res = _extend_name_results(self, domain, res, limit)\n+ res = _extend_name_results(\n+ self, base_domain + domain, res, limit)\n return res\n return name_search\n", "issue": "base_name_search_improved - Portal Access Managment\n@dreispt \nThis ocurrs in when click on customer --> More --> Portal Access Managment.\n\n> Traceback (most recent call last):\n> File \"/opt/odoov1/OCB/openerp/http.py\", line 539, in _handle_exception\n> return super(JsonRequest, self)._handle_exception(exception)\n> File \"/opt/odoov1/OCB/openerp/http.py\", line 576, in dispatch\n> result = self._call_function(*_self.params)\n> File \"/opt/odoov1/OCB/openerp/http.py\", line 312, in _call_function\n> return checked_call(self.db, *args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/service/model.py\", line 118, in wrapper\n> return f(dbname, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/http.py\", line 309, in checked_call\n> return self.endpoint(_a, *_kw)\n> File \"/opt/odoov1/OCB/openerp/http.py\", line 805, in __call__\n> return self.method(_args, *_kw)\n> File \"/opt/odoov1/OCB/openerp/http.py\", line 405, in response_wrap\n> response = f(_args, *_kw)\n> File \"/opt/odoov1/OCB/addons/web/controllers/main.py\", line 944, in call_kw\n> return self._call_kw(model, method, args, kwargs)\n> File \"/opt/odoov1/OCB/addons/web/controllers/main.py\", line 936, in _call_kw\n> return getattr(request.registry.get(model), method)(request.cr, request.uid, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 268, in wrapper\n> return old_api(self, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 372, in old_api\n> result = method(recs, _args, *_kwargs)\n> File \"/opt/odoov1/extra-addons/base_name_search_improved/models/ir_model.py\", line 61, in name_search\n> res = _extend_name_results(self, domain, res, limit)\n> File \"/opt/odoov1/extra-addons/base_name_search_improved/models/ir_model.py\", line 27, in 
_extend_name_results\n> recs = self.search(domain, limit=limit - result_count)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 266, in wrapper\n> return new_api(self, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 508, in new_api\n> result = method(self._model, cr, uid, _args, *_old_kwargs)\n> File \"/opt/odoov1/OCB/openerp/addons/base/res/res_users.py\", line 121, in search\n> return super(res_groups, self).search(cr, uid, args, offset, limit, order, context, count)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 268, in wrapper\n> return old_api(self, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/models.py\", line 1646, in search\n> return self._search(cr, user, args, offset=offset, limit=limit, order=order, context=context, count=count)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 268, in wrapper\n> return old_api(self, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/models.py\", line 4673, in _search\n> query = self._where_calc(cr, user, args, context=context)\n> File \"/opt/odoov1/OCB/openerp/api.py\", line 268, in wrapper\n> return old_api(self, _args, *_kwargs)\n> File \"/opt/odoov1/OCB/openerp/models.py\", line 4484, in _where_calc\n> e = expression.expression(cr, user, domain, self, context)\n> File \"/opt/odoov1/OCB/openerp/osv/expression.py\", line 662, in __init__\n> self.parse(cr, uid, context=context)\n> File \"/opt/odoov1/OCB/openerp/osv/expression.py\", line 921, in parse\n> fct_domain = column.search(cr, uid, model, left, [leaf.leaf], context=context)\n> File \"/opt/odoov1/OCB/openerp/osv/fields.py\", line 1423, in search\n> return self._fnct_search(obj, cr, uid, obj, name, args, context=context)\n> File \"/opt/odoov1/OCB/openerp/addons/base/res/res_users.py\", line 80, in _search_group\n> group_name = values.pop().strip()\n> IndexError: pop from empty list\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# \u00a9 2016 Daniel Reis\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).\n\nfrom openerp import models, fields, api\nfrom openerp import SUPERUSER_ID\nfrom openerp import tools\n\n\n# Extended name search is only used on some operators\nALLOWED_OPS = set(['ilike', 'like'])\n\n\[email protected](skiparg=0)\ndef _get_rec_names(self):\n model = self.env['ir.model'].search(\n [('model', '=', str(self._model))])\n rec_name = [self._rec_name] or []\n other_names = model.name_search_ids.mapped('name')\n return rec_name + other_names\n\n\ndef _extend_name_results(self, domain, results, limit):\n result_count = len(results)\n if result_count < limit:\n domain += [('id', 'not in', [x[0] for x in results])]\n recs = self.search(domain, limit=limit - result_count)\n results.extend(recs.name_get())\n return results\n\n\nclass ModelExtended(models.Model):\n _inherit = 'ir.model'\n\n name_search_ids = fields.Many2many(\n 'ir.model.fields',\n string='Name Search Fields')\n\n def _register_hook(self, cr, ids=None):\n\n def make_name_search():\n\n @api.model\n def name_search(self, name='', args=None,\n operator='ilike', limit=100):\n # Perform standard name search\n res = name_search.origin(\n self, name=name, args=args, operator=operator, limit=limit)\n enabled = self.env.context.get('name_search_extended', True)\n # Perform extended name search\n if enabled and operator in ALLOWED_OPS:\n # Support a list of fields to search on\n all_names = _get_rec_names(self)\n # Try regular search on each additional search field\n for rec_name in all_names[1:]:\n domain = [(rec_name, operator, name)]\n res = _extend_name_results(self, 
domain, res, limit)\n # Try ordered word search on each of the search fields\n for rec_name in all_names:\n domain = [(rec_name, operator, name.replace(' ', '%'))]\n res = _extend_name_results(self, domain, res, limit)\n # Try unordered word search on each of the search fields\n for rec_name in all_names:\n domain = [(rec_name, operator, x)\n for x in name.split() if x]\n res = _extend_name_results(self, domain, res, limit)\n return res\n return name_search\n\n if ids is None:\n ids = self.search(cr, SUPERUSER_ID, [])\n for model in self.browse(cr, SUPERUSER_ID, ids):\n Model = self.pool.get(model.model)\n if Model:\n Model._patch_method('name_search', make_name_search())\n return super(ModelExtended, self)._register_hook(cr)\n", "path": "base_name_search_improved/models/ir_model.py"}]}
| 2,558 | 489 |
gh_patches_debug_25834
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-2899
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delete bodhi.server.views.admin
Bodhi has a strange view that tells admin users what their username and principals are, but does not allow non-admin users to use it:
https://github.com/fedora-infra/bodhi/blob/3.0.0/bodhi/server/views/admin.py
When I visit https://bodhi.fedoraproject.org/admin/ I see:
```
{"principals": ["system.Everyone", "system.Authenticated", "bowlofeggs", "group:packager", "group:infra-sig", "group:bodhiadmin"], "user": "bowlofeggs"}
```
I don't know what the purpose of this view was, but I'm pretty sure we can delete it.
</issue>
<code>
[start of bodhi/server/views/admin.py]
1 # Copyright © 2014-2017 Red Hat, Inc. and others
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """Define the admin view."""
19
20 from cornice import Service
21
22 from bodhi.server import log
23 from bodhi.server import security
24
25
26 admin_service = Service(name='admin', path='/admin/',
27 description='Administrator view',
28 factory=security.AdminACLFactory)
29
30
31 @admin_service.get(permission='admin')
32 def admin(request):
33 """
34 Return a dictionary with keys "user" and "principals".
35
36 "user" indexes the current user's name, and "principals" indexes the user's effective
37 principals.
38
39 Args:
40 request (pyramid.request): The current request.
41 Returns:
42 dict: A dictionary as described above.
43 """
44 user = request.user
45 log.info('%s logged into admin panel' % user.name)
46 principals = request.effective_principals
47 return {'user': user.name, 'principals': principals}
48
[end of bodhi/server/views/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bodhi/server/views/admin.py b/bodhi/server/views/admin.py
deleted file mode 100644
--- a/bodhi/server/views/admin.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright © 2014-2017 Red Hat, Inc. and others
-#
-# This file is part of Bodhi.
-#
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2
-# of the License, or (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-"""Define the admin view."""
-
-from cornice import Service
-
-from bodhi.server import log
-from bodhi.server import security
-
-
-admin_service = Service(name='admin', path='/admin/',
- description='Administrator view',
- factory=security.AdminACLFactory)
-
-
-@admin_service.get(permission='admin')
-def admin(request):
- """
- Return a dictionary with keys "user" and "principals".
-
- "user" indexes the current user's name, and "principals" indexes the user's effective
- principals.
-
- Args:
- request (pyramid.request): The current request.
- Returns:
- dict: A dictionary as described above.
- """
- user = request.user
- log.info('%s logged into admin panel' % user.name)
- principals = request.effective_principals
- return {'user': user.name, 'principals': principals}
|
{"golden_diff": "diff --git a/bodhi/server/views/admin.py b/bodhi/server/views/admin.py\ndeleted file mode 100644\n--- a/bodhi/server/views/admin.py\n+++ /dev/null\n@@ -1,47 +0,0 @@\n-# Copyright \u00a9 2014-2017 Red Hat, Inc. and others\n-#\n-# This file is part of Bodhi.\n-#\n-# This program is free software; you can redistribute it and/or\n-# modify it under the terms of the GNU General Public License\n-# as published by the Free Software Foundation; either version 2\n-# of the License, or (at your option) any later version.\n-#\n-# This program is distributed in the hope that it will be useful,\n-# but WITHOUT ANY WARRANTY; without even the implied warranty of\n-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n-# GNU General Public License for more details.\n-#\n-# You should have received a copy of the GNU General Public License\n-# along with this program; if not, write to the Free Software\n-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n-\"\"\"Define the admin view.\"\"\"\n-\n-from cornice import Service\n-\n-from bodhi.server import log\n-from bodhi.server import security\n-\n-\n-admin_service = Service(name='admin', path='/admin/',\n- description='Administrator view',\n- factory=security.AdminACLFactory)\n-\n-\n-@admin_service.get(permission='admin')\n-def admin(request):\n- \"\"\"\n- Return a dictionary with keys \"user\" and \"principals\".\n-\n- \"user\" indexes the current user's name, and \"principals\" indexes the user's effective\n- principals.\n-\n- Args:\n- request (pyramid.request): The current request.\n- Returns:\n- dict: A dictionary as described above.\n- \"\"\"\n- user = request.user\n- log.info('%s logged into admin panel' % user.name)\n- principals = request.effective_principals\n- return {'user': user.name, 'principals': principals}\n", "issue": "Delete bodhi.server.views.admin\nBodhi has a strange view that tells admin users what their username and principals are, but does not allow non-admin users to use it:\r\n\r\nhttps://github.com/fedora-infra/bodhi/blob/3.0.0/bodhi/server/views/admin.py\r\n\r\nWhen I visit https://bodhi.fedoraproject.org/admin/ I see:\r\n\r\n```\r\n{\"principals\": [\"system.Everyone\", \"system.Authenticated\", \"bowlofeggs\", \"group:packager\", \"group:infra-sig\", \"group:bodhiadmin\"], \"user\": \"bowlofeggs\"}\r\n```\r\n\r\nI don't know what the purpose of this view was, but I'm pretty sure we can delete it.\n", "before_files": [{"content": "# Copyright \u00a9 2014-2017 Red Hat, Inc. and others\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"Define the admin view.\"\"\"\n\nfrom cornice import Service\n\nfrom bodhi.server import log\nfrom bodhi.server import security\n\n\nadmin_service = Service(name='admin', path='/admin/',\n description='Administrator view',\n factory=security.AdminACLFactory)\n\n\n@admin_service.get(permission='admin')\ndef admin(request):\n \"\"\"\n Return a dictionary with keys \"user\" and \"principals\".\n\n \"user\" indexes the current user's name, and \"principals\" indexes the user's effective\n principals.\n\n Args:\n request (pyramid.request): The current request.\n Returns:\n dict: A dictionary as described above.\n \"\"\"\n user = request.user\n log.info('%s logged into admin panel' % user.name)\n principals = request.effective_principals\n return {'user': user.name, 'principals': principals}\n", "path": "bodhi/server/views/admin.py"}]}
| 1,173 | 479 |
gh_patches_debug_12896
|
rasdani/github-patches
|
git_diff
|
spotify__luigi-2572
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HdfsTarget is not working in luigi > 2.6.1
This code runs correctly under luigi <= 2.6.1:
```python
from luigi.contrib import hdfs
import luigi.format
# just to be sure that realy the right version is used...
import pkg_resources
print "luigi ==", pkg_resources.get_distribution("luigi").version
print "snakebite ==", pkg_resources.get_distribution("snakebite").version
destination_file = '/tmp/test/file.gz'
target = hdfs.HdfsTarget(path=destination_file, format=luigi.format.Gzip)
if target.exists():
target.remove(skip_trash=False)
fsobj = target.open('w')
fsobj.write('lol3\n')
fsobj.close()
```
with luigi 2.6.2 or 2.7.0 it breaks:
```python
luigi == 2.7.0
snakebite == 2.11.0
Traceback (most recent call last):
File "/opt/tests/hdfs_target.py", line 18, in <module>
fsobj.close()
File "/opt/python-2.7.10/lib/python2.7/site-packages/luigi/format.py", line 224, in close
self._output_pipe.close()
File "/opt/python-2.7.10/lib/python2.7/site-packages/luigi/contrib/hdfs/format.py", line 51, in close
remove(self.path)
File "/opt/python-2.7.10/lib/python2.7/site-packages/luigi/contrib/hdfs/clients.py", line 62, in result
return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)
File "/opt/python-2.7.10/lib/python2.7/site-packages/luigi/contrib/hdfs/snakebite_client.py", line 140, in remove
return list(self.get_bite().delete(self.list_path(path), recurse=recursive))
File "/opt/python-2.7.10/lib/python2.7/site-packages/snakebite/client.py", line 1540, in wrapped
yield results.next()
File "/opt/python-2.7.10/lib/python2.7/site-packages/snakebite/client.py", line 508, in delete
for item in self._find_items(paths, processor, include_toplevel=True):
File "/opt/python-2.7.10/lib/python2.7/site-packages/snakebite/client.py", line 1216, in _find_items
raise FileNotFoundException("`%s': No such file or directory" % path)
snakebite.errors.FileNotFoundException: `/tmp/test/file.gz': No such file or directory
```
client.cfg:
```
[hdfs]
client = snakebite
snakebite_autoconfig = True
tmp_dir: /x/tmp
```
The file written to "tmp_dir" (/x/tmp/username/test/file.gz-luigitemp-951771388) is correct in form and content.
</issue>
<code>
[start of luigi/contrib/hdfs/format.py]
1 import logging
2 import os
3
4 import luigi.format
5 from luigi.contrib.hdfs.config import load_hadoop_cmd
6 from luigi.contrib.hdfs import config as hdfs_config
7 from luigi.contrib.hdfs.clients import remove, rename, mkdir, listdir, exists
8 from luigi.contrib.hdfs.error import HDFSCliError
9
10 logger = logging.getLogger('luigi-interface')
11
12
13 class HdfsAtomicWriteError(IOError):
14 pass
15
16
17 class HdfsReadPipe(luigi.format.InputPipeProcessWrapper):
18
19 def __init__(self, path):
20 super(HdfsReadPipe, self).__init__(load_hadoop_cmd() + ['fs', '-cat', path])
21
22
23 class HdfsAtomicWritePipe(luigi.format.OutputPipeProcessWrapper):
24 """
25 File like object for writing to HDFS
26
27 The referenced file is first written to a temporary location and then
28 renamed to final location on close(). If close() isn't called
29 the temporary file will be cleaned up when this object is
30 garbage collected
31
32 TODO: if this is buggy, change it so it first writes to a
33 local temporary file and then uploads it on completion
34 """
35
36 def __init__(self, path):
37 self.path = path
38 self.tmppath = hdfs_config.tmppath(self.path)
39 parent_dir = os.path.dirname(self.tmppath)
40 mkdir(parent_dir, parents=True, raise_if_exists=False)
41 super(HdfsAtomicWritePipe, self).__init__(load_hadoop_cmd() + ['fs', '-put', '-', self.tmppath])
42
43 def abort(self):
44 logger.info("Aborting %s('%s'). Removing temporary file '%s'",
45 self.__class__.__name__, self.path, self.tmppath)
46 super(HdfsAtomicWritePipe, self).abort()
47 remove(self.tmppath, skip_trash=True)
48
49 def close(self):
50 super(HdfsAtomicWritePipe, self).close()
51 try:
52 remove(self.path)
53 except HDFSCliError:
54 pass
55 if not all(result['result'] for result in rename(self.tmppath, self.path) or []):
56 raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))
57
58
59 class HdfsAtomicWriteDirPipe(luigi.format.OutputPipeProcessWrapper):
60 """
61 Writes a data<data_extension> file to a directory at <path>.
62 """
63
64 def __init__(self, path, data_extension=""):
65 self.path = path
66 self.tmppath = hdfs_config.tmppath(self.path)
67 self.datapath = self.tmppath + ("/data%s" % data_extension)
68 super(HdfsAtomicWriteDirPipe, self).__init__(load_hadoop_cmd() + ['fs', '-put', '-', self.datapath])
69
70 def abort(self):
71 logger.info("Aborting %s('%s'). Removing temporary dir '%s'",
72 self.__class__.__name__, self.path, self.tmppath)
73 super(HdfsAtomicWriteDirPipe, self).abort()
74 remove(self.tmppath, skip_trash=True)
75
76 def close(self):
77 super(HdfsAtomicWriteDirPipe, self).close()
78 try:
79 if exists(self.path):
80 remove(self.path)
81 except Exception as ex:
82 if isinstance(ex, HDFSCliError) or ex.args[0].contains("FileNotFoundException"):
83 pass
84 else:
85 raise ex
86
87 # it's unlikely to fail in this way but better safe than sorry
88 if not all(result['result'] for result in rename(self.tmppath, self.path) or []):
89 raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))
90
91 if os.path.basename(self.tmppath) in map(os.path.basename, listdir(self.path)):
92 remove(self.path)
93 raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))
94
95
96 class PlainFormat(luigi.format.Format):
97
98 input = 'bytes'
99 output = 'hdfs'
100
101 def hdfs_writer(self, path):
102 return self.pipe_writer(path)
103
104 def hdfs_reader(self, path):
105 return self.pipe_reader(path)
106
107 def pipe_reader(self, path):
108 return HdfsReadPipe(path)
109
110 def pipe_writer(self, output_pipe):
111 return HdfsAtomicWritePipe(output_pipe)
112
113
114 class PlainDirFormat(luigi.format.Format):
115
116 input = 'bytes'
117 output = 'hdfs'
118
119 def hdfs_writer(self, path):
120 return self.pipe_writer(path)
121
122 def hdfs_reader(self, path):
123 return self.pipe_reader(path)
124
125 def pipe_reader(self, path):
126 # exclude underscore-prefixedfiles/folders (created by MapReduce)
127 return HdfsReadPipe("%s/[^_]*" % path)
128
129 def pipe_writer(self, path):
130 return HdfsAtomicWriteDirPipe(path)
131
132
133 Plain = PlainFormat()
134 PlainDir = PlainDirFormat()
135
136
137 class CompatibleHdfsFormat(luigi.format.Format):
138
139 output = 'hdfs'
140
141 def __init__(self, writer, reader, input=None):
142 if input is not None:
143 self.input = input
144
145 self.reader = reader
146 self.writer = writer
147
148 def pipe_writer(self, output):
149 return self.writer(output)
150
151 def pipe_reader(self, input):
152 return self.reader(input)
153
154 def hdfs_writer(self, output):
155 return self.writer(output)
156
157 def hdfs_reader(self, input):
158 return self.reader(input)
159
160 # __getstate__/__setstate__ needed for pickling, because self.reader and
161 # self.writer may be unpickleable instance methods of another format class.
162 # This was mainly to support pickling of standard HdfsTarget instances.
163
164 def __getstate__(self):
165 d = self.__dict__.copy()
166 for attr in ('reader', 'writer'):
167 method = getattr(self, attr)
168 try:
169 # if instance method, pickle instance and method name
170 d[attr] = method.__self__, method.__func__.__name__
171 except AttributeError:
172 pass # not an instance method
173 return d
174
175 def __setstate__(self, d):
176 self.__dict__ = d
177 for attr in ('reader', 'writer'):
178 try:
179 method_self, method_name = d[attr]
180 except ValueError:
181 continue
182 method = getattr(method_self, method_name)
183 setattr(self, attr, method)
184
[end of luigi/contrib/hdfs/format.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/luigi/contrib/hdfs/format.py b/luigi/contrib/hdfs/format.py
--- a/luigi/contrib/hdfs/format.py
+++ b/luigi/contrib/hdfs/format.py
@@ -49,9 +49,13 @@
def close(self):
super(HdfsAtomicWritePipe, self).close()
try:
- remove(self.path)
- except HDFSCliError:
- pass
+ if exists(self.path):
+ remove(self.path)
+ except Exception as ex:
+ if isinstance(ex, HDFSCliError) or ex.args[0].contains("FileNotFoundException"):
+ pass
+ else:
+ raise ex
if not all(result['result'] for result in rename(self.tmppath, self.path) or []):
raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))
|
{"golden_diff": "diff --git a/luigi/contrib/hdfs/format.py b/luigi/contrib/hdfs/format.py\n--- a/luigi/contrib/hdfs/format.py\n+++ b/luigi/contrib/hdfs/format.py\n@@ -49,9 +49,13 @@\n def close(self):\n super(HdfsAtomicWritePipe, self).close()\n try:\n- remove(self.path)\n- except HDFSCliError:\n- pass\n+ if exists(self.path):\n+ remove(self.path)\n+ except Exception as ex:\n+ if isinstance(ex, HDFSCliError) or ex.args[0].contains(\"FileNotFoundException\"):\n+ pass\n+ else:\n+ raise ex\n if not all(result['result'] for result in rename(self.tmppath, self.path) or []):\n raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))\n", "issue": "HdfsTarget is not workging in luigi > 2.6.1\nThis code runs correct under luigi <= 2.6.1:\r\n\r\n```python\r\nfrom luigi.contrib import hdfs\r\nimport luigi.format\r\n\r\n# just to be sure that realy the right version is used...\r\nimport pkg_resources\r\nprint \"luigi ==\", pkg_resources.get_distribution(\"luigi\").version\r\nprint \"snakebite ==\", pkg_resources.get_distribution(\"snakebite\").version\r\n\r\ndestination_file = '/tmp/test/file.gz'\r\n\r\ntarget = hdfs.HdfsTarget(path=destination_file, format=luigi.format.Gzip)\r\n\r\nif target.exists():\r\n target.remove(skip_trash=False)\r\n\r\nfsobj = target.open('w')\r\nfsobj.write('lol3\\n')\r\nfsobj.close()\r\n```\r\n\r\nwith luigi 2.6.2 or 2.7.0 it breaks:\r\n```python\r\nluigi == 2.7.0\r\nsnakebite == 2.11.0\r\nTraceback (most recent call last):\r\n File \"/opt/tests/hdfs_target.py\", line 18, in <module>\r\n fsobj.close()\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/luigi/format.py\", line 224, in close\r\n self._output_pipe.close()\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/luigi/contrib/hdfs/format.py\", line 51, in close\r\n remove(self.path)\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/luigi/contrib/hdfs/clients.py\", line 62, in result\r\n return getattr(get_autoconfig_client(), method_name)(*args, **kwargs)\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/luigi/contrib/hdfs/snakebite_client.py\", line 140, in remove\r\n return list(self.get_bite().delete(self.list_path(path), recurse=recursive))\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/snakebite/client.py\", line 1540, in wrapped\r\n yield results.next()\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/snakebite/client.py\", line 508, in delete\r\n for item in self._find_items(paths, processor, include_toplevel=True):\r\n File \"/opt/python-2.7.10/lib/python2.7/site-packages/snakebite/client.py\", line 1216, in _find_items\r\n raise FileNotFoundException(\"`%s': No such file or directory\" % path)\r\nsnakebite.errors.FileNotFoundException: `/tmp/test/file.gz': No such file or directory\r\n```\r\n\r\nclient.cfg:\r\n```\r\n[hdfs]\r\nclient = snakebite\r\nsnakebite_autoconfig = True\r\ntmp_dir: /x/tmp\r\n```\r\n\r\nThe file is correct in form and content written in \"tmp_dir\" (/x/tmp/username/test/file.gz-luigitemp-951771388).\n", "before_files": [{"content": "import logging\nimport os\n\nimport luigi.format\nfrom luigi.contrib.hdfs.config import load_hadoop_cmd\nfrom luigi.contrib.hdfs import config as hdfs_config\nfrom luigi.contrib.hdfs.clients import remove, rename, mkdir, listdir, exists\nfrom luigi.contrib.hdfs.error import HDFSCliError\n\nlogger = logging.getLogger('luigi-interface')\n\n\nclass HdfsAtomicWriteError(IOError):\n pass\n\n\nclass HdfsReadPipe(luigi.format.InputPipeProcessWrapper):\n\n def __init__(self, path):\n 
super(HdfsReadPipe, self).__init__(load_hadoop_cmd() + ['fs', '-cat', path])\n\n\nclass HdfsAtomicWritePipe(luigi.format.OutputPipeProcessWrapper):\n \"\"\"\n File like object for writing to HDFS\n\n The referenced file is first written to a temporary location and then\n renamed to final location on close(). If close() isn't called\n the temporary file will be cleaned up when this object is\n garbage collected\n\n TODO: if this is buggy, change it so it first writes to a\n local temporary file and then uploads it on completion\n \"\"\"\n\n def __init__(self, path):\n self.path = path\n self.tmppath = hdfs_config.tmppath(self.path)\n parent_dir = os.path.dirname(self.tmppath)\n mkdir(parent_dir, parents=True, raise_if_exists=False)\n super(HdfsAtomicWritePipe, self).__init__(load_hadoop_cmd() + ['fs', '-put', '-', self.tmppath])\n\n def abort(self):\n logger.info(\"Aborting %s('%s'). Removing temporary file '%s'\",\n self.__class__.__name__, self.path, self.tmppath)\n super(HdfsAtomicWritePipe, self).abort()\n remove(self.tmppath, skip_trash=True)\n\n def close(self):\n super(HdfsAtomicWritePipe, self).close()\n try:\n remove(self.path)\n except HDFSCliError:\n pass\n if not all(result['result'] for result in rename(self.tmppath, self.path) or []):\n raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))\n\n\nclass HdfsAtomicWriteDirPipe(luigi.format.OutputPipeProcessWrapper):\n \"\"\"\n Writes a data<data_extension> file to a directory at <path>.\n \"\"\"\n\n def __init__(self, path, data_extension=\"\"):\n self.path = path\n self.tmppath = hdfs_config.tmppath(self.path)\n self.datapath = self.tmppath + (\"/data%s\" % data_extension)\n super(HdfsAtomicWriteDirPipe, self).__init__(load_hadoop_cmd() + ['fs', '-put', '-', self.datapath])\n\n def abort(self):\n logger.info(\"Aborting %s('%s'). 
Removing temporary dir '%s'\",\n self.__class__.__name__, self.path, self.tmppath)\n super(HdfsAtomicWriteDirPipe, self).abort()\n remove(self.tmppath, skip_trash=True)\n\n def close(self):\n super(HdfsAtomicWriteDirPipe, self).close()\n try:\n if exists(self.path):\n remove(self.path)\n except Exception as ex:\n if isinstance(ex, HDFSCliError) or ex.args[0].contains(\"FileNotFoundException\"):\n pass\n else:\n raise ex\n\n # it's unlikely to fail in this way but better safe than sorry\n if not all(result['result'] for result in rename(self.tmppath, self.path) or []):\n raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))\n\n if os.path.basename(self.tmppath) in map(os.path.basename, listdir(self.path)):\n remove(self.path)\n raise HdfsAtomicWriteError('Atomic write to {} failed'.format(self.path))\n\n\nclass PlainFormat(luigi.format.Format):\n\n input = 'bytes'\n output = 'hdfs'\n\n def hdfs_writer(self, path):\n return self.pipe_writer(path)\n\n def hdfs_reader(self, path):\n return self.pipe_reader(path)\n\n def pipe_reader(self, path):\n return HdfsReadPipe(path)\n\n def pipe_writer(self, output_pipe):\n return HdfsAtomicWritePipe(output_pipe)\n\n\nclass PlainDirFormat(luigi.format.Format):\n\n input = 'bytes'\n output = 'hdfs'\n\n def hdfs_writer(self, path):\n return self.pipe_writer(path)\n\n def hdfs_reader(self, path):\n return self.pipe_reader(path)\n\n def pipe_reader(self, path):\n # exclude underscore-prefixedfiles/folders (created by MapReduce)\n return HdfsReadPipe(\"%s/[^_]*\" % path)\n\n def pipe_writer(self, path):\n return HdfsAtomicWriteDirPipe(path)\n\n\nPlain = PlainFormat()\nPlainDir = PlainDirFormat()\n\n\nclass CompatibleHdfsFormat(luigi.format.Format):\n\n output = 'hdfs'\n\n def __init__(self, writer, reader, input=None):\n if input is not None:\n self.input = input\n\n self.reader = reader\n self.writer = writer\n\n def pipe_writer(self, output):\n return self.writer(output)\n\n def pipe_reader(self, input):\n return self.reader(input)\n\n def hdfs_writer(self, output):\n return self.writer(output)\n\n def hdfs_reader(self, input):\n return self.reader(input)\n\n # __getstate__/__setstate__ needed for pickling, because self.reader and\n # self.writer may be unpickleable instance methods of another format class.\n # This was mainly to support pickling of standard HdfsTarget instances.\n\n def __getstate__(self):\n d = self.__dict__.copy()\n for attr in ('reader', 'writer'):\n method = getattr(self, attr)\n try:\n # if instance method, pickle instance and method name\n d[attr] = method.__self__, method.__func__.__name__\n except AttributeError:\n pass # not an instance method\n return d\n\n def __setstate__(self, d):\n self.__dict__ = d\n for attr in ('reader', 'writer'):\n try:\n method_self, method_name = d[attr]\n except ValueError:\n continue\n method = getattr(method_self, method_name)\n setattr(self, attr, method)\n", "path": "luigi/contrib/hdfs/format.py"}]}
| 3,039 | 194 |
gh_patches_debug_67390
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-4675
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Proxy Provider not working
Hello
Please help me. I updated the authentik server to 23.1.2; it worked perfectly until now, but now the Proxy Provider is not working because of the following error.
This is in the server log:
{"error":"Post \"https://auth.xxx.com/application/o/token/\": dial tcp 192.168.10.240:443: connect: connection refused","event":"failed to redeem code","level":"warning","logger":"authentik.outpost.proxyv2.application","name":"Kuma","timestamp":"2023-01-24T13:01:34Z"}
The IP in the log is the IP of the nginx reverse proxy manager. The proxy works properly and I don't see any errors. Does anyone have any ideas?
</issue>
<code>
[start of authentik/core/tasks.py]
1 """authentik core tasks"""
2 from datetime import datetime, timedelta
3
4 from django.contrib.sessions.backends.cache import KEY_PREFIX
5 from django.core.cache import cache
6 from django.utils.timezone import now
7 from structlog.stdlib import get_logger
8
9 from authentik.core.models import (
10 USER_ATTRIBUTE_EXPIRES,
11 USER_ATTRIBUTE_GENERATED,
12 AuthenticatedSession,
13 ExpiringModel,
14 User,
15 )
16 from authentik.events.monitored_tasks import (
17 MonitoredTask,
18 TaskResult,
19 TaskResultStatus,
20 prefill_task,
21 )
22 from authentik.root.celery import CELERY_APP
23
24 LOGGER = get_logger()
25
26
27 @CELERY_APP.task(bind=True, base=MonitoredTask)
28 @prefill_task
29 def clean_expired_models(self: MonitoredTask):
30 """Remove expired objects"""
31 messages = []
32 for cls in ExpiringModel.__subclasses__():
33 cls: ExpiringModel
34 objects = (
35 cls.objects.all().exclude(expiring=False).exclude(expiring=True, expires__gt=now())
36 )
37 amount = objects.count()
38 for obj in objects:
39 obj.expire_action()
40 LOGGER.debug("Expired models", model=cls, amount=amount)
41 messages.append(f"Expired {amount} {cls._meta.verbose_name_plural}")
42 # Special case
43 amount = 0
44 for session in AuthenticatedSession.objects.all():
45 cache_key = f"{KEY_PREFIX}{session.session_key}"
46 try:
47 value = cache.get(cache_key)
48 # pylint: disable=broad-except
49 except Exception as exc:
50 LOGGER.debug("Failed to get session from cache", exc=exc)
51 if not value:
52 session.delete()
53 amount += 1
54 LOGGER.debug("Expired sessions", model=AuthenticatedSession, amount=amount)
55 messages.append(f"Expired {amount} {AuthenticatedSession._meta.verbose_name_plural}")
56 self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))
57
58
59 @CELERY_APP.task(bind=True, base=MonitoredTask)
60 @prefill_task
61 def clean_temporary_users(self: MonitoredTask):
62 """Remove temporary users created by SAML Sources"""
63 _now = datetime.now()
64 messages = []
65 deleted_users = 0
66 for user in User.objects.filter(**{f"attributes__{USER_ATTRIBUTE_GENERATED}": True}):
67 if not user.attributes.get(USER_ATTRIBUTE_EXPIRES):
68 continue
69 delta: timedelta = _now - datetime.fromtimestamp(
70 user.attributes.get(USER_ATTRIBUTE_EXPIRES)
71 )
72 if delta.total_seconds() > 0:
73 LOGGER.debug("User is expired and will be deleted.", user=user, delta=delta)
74 user.delete()
75 deleted_users += 1
76 messages.append(f"Successfully deleted {deleted_users} users.")
77 self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))
78
[end of authentik/core/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/core/tasks.py b/authentik/core/tasks.py
--- a/authentik/core/tasks.py
+++ b/authentik/core/tasks.py
@@ -43,6 +43,7 @@
amount = 0
for session in AuthenticatedSession.objects.all():
cache_key = f"{KEY_PREFIX}{session.session_key}"
+ value = None
try:
value = cache.get(cache_key)
# pylint: disable=broad-except
|
{"golden_diff": "diff --git a/authentik/core/tasks.py b/authentik/core/tasks.py\n--- a/authentik/core/tasks.py\n+++ b/authentik/core/tasks.py\n@@ -43,6 +43,7 @@\n amount = 0\n for session in AuthenticatedSession.objects.all():\n cache_key = f\"{KEY_PREFIX}{session.session_key}\"\n+ value = None\n try:\n value = cache.get(cache_key)\n # pylint: disable=broad-except\n", "issue": "Proxy Provider not working \nHello\r\n\r\nPlease help me, I updated the authentik server to 23.1.2, it worked perfectly until now, now the Proxy Provider is not working because of the following error\r\n\r\nthis is in the server log\r\n\r\n{\"error\":\"Post \\\"https://auth.xxx.com/application/o/token/\\\": dial tcp 192.168.10.240:443: connect: connection refused\",\"event\":\"failed to redeem code\",\"level\":\"warning\",\"logger\":\"authentik.outpost.proxyv2.application\",\"name\":\"Kuma\",\"timestamp\":\"2023-01-24T13:01:34Z\"}\r\n\r\nThe IP in the log is the IP of the nginx reverse proxy manager. The proxy works properly, I don't see any errors. Anyone have any ideas?\r\n\n", "before_files": [{"content": "\"\"\"authentik core tasks\"\"\"\nfrom datetime import datetime, timedelta\n\nfrom django.contrib.sessions.backends.cache import KEY_PREFIX\nfrom django.core.cache import cache\nfrom django.utils.timezone import now\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import (\n USER_ATTRIBUTE_EXPIRES,\n USER_ATTRIBUTE_GENERATED,\n AuthenticatedSession,\n ExpiringModel,\n User,\n)\nfrom authentik.events.monitored_tasks import (\n MonitoredTask,\n TaskResult,\n TaskResultStatus,\n prefill_task,\n)\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger()\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\n@prefill_task\ndef clean_expired_models(self: MonitoredTask):\n \"\"\"Remove expired objects\"\"\"\n messages = []\n for cls in ExpiringModel.__subclasses__():\n cls: ExpiringModel\n objects = (\n cls.objects.all().exclude(expiring=False).exclude(expiring=True, expires__gt=now())\n )\n amount = objects.count()\n for obj in objects:\n obj.expire_action()\n LOGGER.debug(\"Expired models\", model=cls, amount=amount)\n messages.append(f\"Expired {amount} {cls._meta.verbose_name_plural}\")\n # Special case\n amount = 0\n for session in AuthenticatedSession.objects.all():\n cache_key = f\"{KEY_PREFIX}{session.session_key}\"\n try:\n value = cache.get(cache_key)\n # pylint: disable=broad-except\n except Exception as exc:\n LOGGER.debug(\"Failed to get session from cache\", exc=exc)\n if not value:\n session.delete()\n amount += 1\n LOGGER.debug(\"Expired sessions\", model=AuthenticatedSession, amount=amount)\n messages.append(f\"Expired {amount} {AuthenticatedSession._meta.verbose_name_plural}\")\n self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\n@prefill_task\ndef clean_temporary_users(self: MonitoredTask):\n \"\"\"Remove temporary users created by SAML Sources\"\"\"\n _now = datetime.now()\n messages = []\n deleted_users = 0\n for user in User.objects.filter(**{f\"attributes__{USER_ATTRIBUTE_GENERATED}\": True}):\n if not user.attributes.get(USER_ATTRIBUTE_EXPIRES):\n continue\n delta: timedelta = _now - datetime.fromtimestamp(\n user.attributes.get(USER_ATTRIBUTE_EXPIRES)\n )\n if delta.total_seconds() > 0:\n LOGGER.debug(\"User is expired and will be deleted.\", user=user, delta=delta)\n user.delete()\n deleted_users += 1\n messages.append(f\"Successfully deleted {deleted_users} users.\")\n 
self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))\n", "path": "authentik/core/tasks.py"}]}
| 1,461 | 104 |
gh_patches_debug_3477
|
rasdani/github-patches
|
git_diff
|
opentensor__bittensor-969
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bit 590 backward fix
- Keep count of the remote loss on the server, saving the model based on the average of the 20 most recent losses.
- When doing the tokenization remap, make sure the input size and output size are the same.
- Ensure encode_forward_causallmnext is deterministic by setting a seed.
- When both local_train and remote_train are on: do local_train only when the server is free.
- Validator defaults to doing backward training
Minor fixes
1) Removes the parser generation on the config, which messes with --help when using a parser
2) Turns off console-rich local logging (which sucks)
</issue>
<code>
[start of bittensor/__init__.py]
1 # The MIT License (MIT)
2 # Copyright © 2021 Yuma Rao
3
4 # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
5 # documentation files (the “Software”), to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
7 # and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
8
9 # The above copyright notice and this permission notice shall be included in all copies or substantial portions of
10 # the Software.
11
12 # THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
13 # THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
14 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
15 # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
16 # DEALINGS IN THE SOFTWARE.
17
18 from rich.console import Console
19 from rich.traceback import install
20 from prometheus_client import Info
21
22 import nest_asyncio
23 nest_asyncio.apply()
24
25 # Bittensor code and protocol version.
26 __version__ = '3.4.1'
27 version_split = __version__.split(".")
28 __version_as_int__ = (100 * int(version_split[0])) + (10 * int(version_split[1])) + (1 * int(version_split[2]))
29
30
31 # Turn off rich console locals trace.
32 from rich.traceback import install
33 install(show_locals=False)
34
35 # Rich console.
36 __console__ = Console()
37 __use_console__ = True
38
39 # Remove overdue locals in debug training.
40 install(show_locals=False)
41
42 def turn_console_off():
43 from io import StringIO
44 __use_console__ = False
45 __console__ = Console(file=StringIO(), stderr=False)
46
47
48
49 # Vocabulary dimension.
50 #__vocab_size__ = len( tokenizer ) + len( tokenizer.additional_special_tokens) + 100 # Plus 100 for eventual token size increase.
51 __vocab_size__ = 50258
52
53 # Tensor dimension.
54 # NOTE (const): if/when this increases peers must be responsible for trimming or expanding output to this size.
55 __network_dim__ = 1024 # All network responses have shape = [ __batch_size__, __sequence_dim__, __network_dim__ ]
56
57 # Substrate chain block time (seconds).
58 __blocktime__ = 12
59
60 # Pip address for versioning
61 __pipaddress__ = 'https://pypi.org/pypi/bittensor/json'
62
63 # Substrate ss58_format
64 __ss58_format__ = 42
65
66 # Wallet ss58 address length
67 __ss58_address_length__ = 48
68
69 __networks__ = [ 'local', 'bellagene', 'nobunaga', 'nakamoto']
70
71 __datasets__ = ['ArXiv', 'BookCorpus2', 'Books3', 'DMMathematics', 'EnronEmails', 'EuroParl', 'Gutenberg_PG', 'HackerNews', 'NIHExPorter', 'OpenSubtitles', 'PhilPapers', 'UbuntuIRC', 'YoutubeSubtitles']
72
73 __nakamoto_entrypoint__ = "AtreusLB-2c6154f73e6429a9.elb.us-east-2.amazonaws.com:9944"
74
75
76 __nobunaga_entrypoint__ = "staging.nobunaga.opentensor.ai:9944"
77
78 # Needs to use wss://
79 __bellagene_entrypoint__ = "wss://parachain.opentensor.ai:443"
80
81
82 __local_entrypoint__ = "127.0.0.1:9944"
83
84
85 # Avoid collisions with other processes
86 from .utils.test_utils import get_random_unused_port
87 mock_subtensor_port = get_random_unused_port()
88 __mock_entrypoint__ = f"localhost:{mock_subtensor_port}"
89
90
91 # --- Prometheus ---
92 __prometheus_version__ = "0.1.0"
93 prometheus_version__split = __prometheus_version__.split(".")
94 __prometheus_version__as_int__ = (100 * int(prometheus_version__split[0])) + (10 * int(prometheus_version__split[1])) + (1 * int(prometheus_version__split[2]))
95 try:
96 bt_promo_info = Info("bittensor_info", "Information about the installed bittensor package.")
97 bt_promo_info.info (
98 {
99 '__version__': str(__version__),
100 '__version_as_int__': str(__version_as_int__),
101 '__vocab_size__': str(__vocab_size__),
102 '__network_dim__': str(__network_dim__),
103 '__blocktime__': str(__blocktime__),
104 '__prometheus_version__': str(__prometheus_version__),
105 '__prometheus_version__as_int__': str(__prometheus_version__as_int__),
106 }
107 )
108 except ValueError:
109 # This can silently fail if we import bittensor twice in the same process.
110 # We simply pass over this error.
111 pass
112
113 # ---- Config ----
114 from bittensor._config import config as config
115
116 # ---- LOGGING ----
117 from bittensor._logging import logging as logging
118
119 # ---- Protos ----
120 import bittensor._proto.bittensor_pb2 as proto
121 import bittensor._proto.bittensor_pb2_grpc as grpc
122
123 # ---- Neurons ----
124 import bittensor._neuron as neurons
125
126 # ---- Utils ----
127 from bittensor.utils import unbiased_topk as unbiased_topk
128
129 # ---- Factories -----
130 from bittensor.utils.balance import Balance as Balance
131 from bittensor._cli import cli as cli
132 from bittensor._axon import axon as axon
133 from bittensor._wallet import wallet as wallet
134 from bittensor._keyfile import keyfile as keyfile
135 from bittensor._receptor import receptor as receptor
136 from bittensor._endpoint import endpoint as endpoint
137 from bittensor._dendrite import dendrite as dendrite
138 from bittensor._metagraph import metagraph as metagraph
139 from bittensor._prometheus import prometheus as prometheus
140 from bittensor._subtensor import subtensor as subtensor
141 from bittensor._tokenizer import tokenizer as tokenizer
142 from bittensor._serializer import serializer as serializer
143 from bittensor._synapse import synapse as synapse
144 from bittensor._dataset import dataset as dataset
145 from bittensor._receptor import receptor_pool as receptor_pool
146 from bittensor._wandb import wandb as wandb
147 from bittensor._threadpool import prioritythreadpool as prioritythreadpool
148
149 # ---- Classes -----
150 from bittensor._cli.cli_impl import CLI as CLI
151 from bittensor._axon.axon_impl import Axon as Axon
152 from bittensor._config.config_impl import Config as Config
153 from bittensor._wallet.wallet_impl import Wallet as Wallet
154 from bittensor._keyfile.keyfile_impl import Keyfile as Keyfile
155 from bittensor._receptor.receptor_impl import Receptor as Receptor
156 from bittensor._endpoint.endpoint_impl import Endpoint as Endpoint
157 from bittensor._dendrite.dendrite_impl import Dendrite as Dendrite
158 from bittensor._metagraph.metagraph_impl import Metagraph as Metagraph
159 from bittensor._subtensor.subtensor_impl import Subtensor as Subtensor
160 from bittensor._serializer.serializer_impl import Serializer as Serializer
161 from bittensor._dataset.dataset_impl import Dataset as Dataset
162 from bittensor._receptor.receptor_pool_impl import ReceptorPool as ReceptorPool
163 from bittensor._threadpool.priority_thread_pool_impl import PriorityThreadPoolExecutor as PriorityThreadPoolExecutor
164 from bittensor._ipfs.ipfs_impl import Ipfs as Ipfs
165 from bittensor._synapse.synapse_impl import Synapse as Synapse
166 from bittensor._synapse.text_causallm_impl import TextCausalLM as TextCausalLM
167 from bittensor._synapse.text_causallmnext_impl import TextCausalLMNext as TextCausalLMNext
168 from bittensor._synapse.text_lasthiddenstate_impl import TextLastHiddenState as TextLastHiddenState
169 from bittensor._synapse.text_seq2seq_impl import TextSeq2Seq as TextSeq2Seq
170
171 # DEFAULTS
172 defaults = Config()
173 subtensor.add_defaults( defaults )
174 dendrite.add_defaults( defaults )
175 axon.add_defaults( defaults )
176 prometheus.add_defaults( defaults )
177 wallet.add_defaults( defaults )
178 dataset.add_defaults( defaults )
179 wandb.add_defaults( defaults )
180 logging.add_defaults( defaults )
181
182 from substrateinterface import Keypair as Keypair
183
[end of bittensor/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bittensor/__init__.py b/bittensor/__init__.py
--- a/bittensor/__init__.py
+++ b/bittensor/__init__.py
@@ -23,7 +23,7 @@
nest_asyncio.apply()
# Bittensor code and protocol version.
-__version__ = '3.4.1'
+__version__ = '3.4.2'
version_split = __version__.split(".")
__version_as_int__ = (100 * int(version_split[0])) + (10 * int(version_split[1])) + (1 * int(version_split[2]))
|
{"golden_diff": "diff --git a/bittensor/__init__.py b/bittensor/__init__.py\n--- a/bittensor/__init__.py\n+++ b/bittensor/__init__.py\n@@ -23,7 +23,7 @@\n nest_asyncio.apply()\n \n # Bittensor code and protocol version.\n-__version__ = '3.4.1'\n+__version__ = '3.4.2'\n version_split = __version__.split(\".\")\n __version_as_int__ = (100 * int(version_split[0])) + (10 * int(version_split[1])) + (1 * int(version_split[2]))\n", "issue": "Bit 590 backward fix\n- Keep count of remote loss on server, saving model based on most 20 recent average loss.\r\n- When doing tokenization remap, make sure the input size and output size is the same.\r\n- Ensure encode_forward_causallmnext deterministic by set seed.\r\n- When both local_train and remote_train are on: do local_train only when the server is free.\r\n- Validator default to do backward train\nMinor fixes\n1) Removes the parser generation on the config with messes with --help when using a parser\r\n2) Turns off console-rich local logging (which sucks)\n", "before_files": [{"content": "# The MIT License (MIT)\n# Copyright \u00a9 2021 Yuma Rao\n\n# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated \n# documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation \n# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, \n# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\n# The above copyright notice and this permission notice shall be included in all copies or substantial portions of \n# the Software.\n\n# THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO\n# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL \n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION \n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER \n# DEALINGS IN THE SOFTWARE.\n\nfrom rich.console import Console\nfrom rich.traceback import install\nfrom prometheus_client import Info\n\nimport nest_asyncio\nnest_asyncio.apply()\n\n# Bittensor code and protocol version.\n__version__ = '3.4.1'\nversion_split = __version__.split(\".\")\n__version_as_int__ = (100 * int(version_split[0])) + (10 * int(version_split[1])) + (1 * int(version_split[2]))\n\n\n# Turn off rich console locals trace.\nfrom rich.traceback import install\ninstall(show_locals=False)\n\n# Rich console.\n__console__ = Console()\n__use_console__ = True\n\n# Remove overdue locals in debug training.\ninstall(show_locals=False)\n\ndef turn_console_off():\n from io import StringIO\n __use_console__ = False\n __console__ = Console(file=StringIO(), stderr=False)\n\n\n\n# Vocabulary dimension.\n#__vocab_size__ = len( tokenizer ) + len( tokenizer.additional_special_tokens) + 100 # Plus 100 for eventual token size increase.\n__vocab_size__ = 50258\n\n# Tensor dimension.\n# NOTE (const): if/when this increases peers must be responsible for trimming or expanding output to this size.\n__network_dim__ = 1024 # All network responses have shape = [ __batch_size__, __sequence_dim__, __network_dim__ ]\n\n# Substrate chain block time (seconds).\n__blocktime__ = 12\n\n# Pip address for versioning\n__pipaddress__ = 'https://pypi.org/pypi/bittensor/json'\n\n# Substrate ss58_format\n__ss58_format__ = 42\n\n# Wallet ss58 address length\n__ss58_address_length__ = 48\n\n__networks__ = [ 'local', 'bellagene', 'nobunaga', 'nakamoto']\n\n__datasets__ = ['ArXiv', 'BookCorpus2', 'Books3', 'DMMathematics', 'EnronEmails', 'EuroParl', 'Gutenberg_PG', 'HackerNews', 'NIHExPorter', 'OpenSubtitles', 'PhilPapers', 'UbuntuIRC', 'YoutubeSubtitles']\n\n__nakamoto_entrypoint__ = \"AtreusLB-2c6154f73e6429a9.elb.us-east-2.amazonaws.com:9944\"\n\n\n__nobunaga_entrypoint__ = \"staging.nobunaga.opentensor.ai:9944\"\n\n# Needs to use wss://\n__bellagene_entrypoint__ = \"wss://parachain.opentensor.ai:443\"\n\n\n__local_entrypoint__ = \"127.0.0.1:9944\"\n\n\n# Avoid collisions with other processes\nfrom .utils.test_utils import get_random_unused_port\nmock_subtensor_port = get_random_unused_port()\n__mock_entrypoint__ = f\"localhost:{mock_subtensor_port}\"\n\n\n# --- Prometheus ---\n__prometheus_version__ = \"0.1.0\"\nprometheus_version__split = __prometheus_version__.split(\".\")\n__prometheus_version__as_int__ = (100 * int(prometheus_version__split[0])) + (10 * int(prometheus_version__split[1])) + (1 * int(prometheus_version__split[2]))\ntry:\n bt_promo_info = Info(\"bittensor_info\", \"Information about the installed bittensor package.\")\n bt_promo_info.info ( \n {\n '__version__': str(__version__),\n '__version_as_int__': str(__version_as_int__),\n '__vocab_size__': str(__vocab_size__),\n '__network_dim__': str(__network_dim__),\n '__blocktime__': str(__blocktime__),\n '__prometheus_version__': str(__prometheus_version__),\n '__prometheus_version__as_int__': str(__prometheus_version__as_int__),\n } \n )\nexcept ValueError: \n # This can silently fail if we import bittensor twice in the same process.\n # We simply pass over this error. 
\n pass\n\n# ---- Config ----\nfrom bittensor._config import config as config\n\n# ---- LOGGING ----\nfrom bittensor._logging import logging as logging\n\n# ---- Protos ----\nimport bittensor._proto.bittensor_pb2 as proto\nimport bittensor._proto.bittensor_pb2_grpc as grpc\n\n# ---- Neurons ----\nimport bittensor._neuron as neurons\n\n# ---- Utils ----\nfrom bittensor.utils import unbiased_topk as unbiased_topk\n\n# ---- Factories -----\nfrom bittensor.utils.balance import Balance as Balance\nfrom bittensor._cli import cli as cli\nfrom bittensor._axon import axon as axon\nfrom bittensor._wallet import wallet as wallet\nfrom bittensor._keyfile import keyfile as keyfile\nfrom bittensor._receptor import receptor as receptor\nfrom bittensor._endpoint import endpoint as endpoint\nfrom bittensor._dendrite import dendrite as dendrite\nfrom bittensor._metagraph import metagraph as metagraph\nfrom bittensor._prometheus import prometheus as prometheus\nfrom bittensor._subtensor import subtensor as subtensor\nfrom bittensor._tokenizer import tokenizer as tokenizer\nfrom bittensor._serializer import serializer as serializer\nfrom bittensor._synapse import synapse as synapse \nfrom bittensor._dataset import dataset as dataset\nfrom bittensor._receptor import receptor_pool as receptor_pool\nfrom bittensor._wandb import wandb as wandb\nfrom bittensor._threadpool import prioritythreadpool as prioritythreadpool\n\n# ---- Classes -----\nfrom bittensor._cli.cli_impl import CLI as CLI\nfrom bittensor._axon.axon_impl import Axon as Axon\nfrom bittensor._config.config_impl import Config as Config\nfrom bittensor._wallet.wallet_impl import Wallet as Wallet\nfrom bittensor._keyfile.keyfile_impl import Keyfile as Keyfile\nfrom bittensor._receptor.receptor_impl import Receptor as Receptor\nfrom bittensor._endpoint.endpoint_impl import Endpoint as Endpoint\nfrom bittensor._dendrite.dendrite_impl import Dendrite as Dendrite\nfrom bittensor._metagraph.metagraph_impl import Metagraph as Metagraph\nfrom bittensor._subtensor.subtensor_impl import Subtensor as Subtensor\nfrom bittensor._serializer.serializer_impl import Serializer as Serializer\nfrom bittensor._dataset.dataset_impl import Dataset as Dataset\nfrom bittensor._receptor.receptor_pool_impl import ReceptorPool as ReceptorPool\nfrom bittensor._threadpool.priority_thread_pool_impl import PriorityThreadPoolExecutor as PriorityThreadPoolExecutor\nfrom bittensor._ipfs.ipfs_impl import Ipfs as Ipfs\nfrom bittensor._synapse.synapse_impl import Synapse as Synapse\nfrom bittensor._synapse.text_causallm_impl import TextCausalLM as TextCausalLM\nfrom bittensor._synapse.text_causallmnext_impl import TextCausalLMNext as TextCausalLMNext\nfrom bittensor._synapse.text_lasthiddenstate_impl import TextLastHiddenState as TextLastHiddenState\nfrom bittensor._synapse.text_seq2seq_impl import TextSeq2Seq as TextSeq2Seq\n\n# DEFAULTS\ndefaults = Config()\nsubtensor.add_defaults( defaults )\ndendrite.add_defaults( defaults )\naxon.add_defaults( defaults )\nprometheus.add_defaults( defaults )\nwallet.add_defaults( defaults )\ndataset.add_defaults( defaults )\nwandb.add_defaults( defaults )\nlogging.add_defaults( defaults )\n\nfrom substrateinterface import Keypair as Keypair\n", "path": "bittensor/__init__.py"}]}
| 3,029 | 138 |
gh_patches_debug_27082
|
rasdani/github-patches
|
git_diff
|
apache__airflow-25524
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API server /plugin crashes
### Apache Airflow version
2.3.3 (latest released)
### What happened
The `/plugins` endpoint returned a 500 http status code.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
{
"detail": "\"{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}\" is not of type 'object'\n\nFailed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:\n {'nullable': True, 'type': 'object'}\n\nOn instance['plugins'][0]['appbuilder_views'][0]:\n (\"{'name': 'Test View', 'category': 'Test Plugin', 'view': \"\n \"'test.appbuilder_views.TestAppBuilderBaseView'}\")",
"status": 500,
"title": "Response body does not conform to specification",
"type": "http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/Unknown"
}
```
The error message in the webserver is as follows
```
[2022-08-03 17:07:57,705] {validation.py:244} ERROR - http://localhost:8080/api/v1/plugins?limit=1 validation error: "{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}" is not of type 'object'
Failed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:
{'nullable': True, 'type': 'object'}
On instance['plugins'][0]['appbuilder_views'][0]:
("{'name': 'Test View', 'category': 'Test Plugin', 'view': "
"'test.appbuilder_views.TestAppBuilderBaseView'}")
172.18.0.1 - admin [03/Aug/2022:17:10:17 +0000] "GET /api/v1/plugins?limit=1 HTTP/1.1" 500 733 "-" "curl/7.79.1"
```
### What you think should happen instead
The response should contain all the plugins integrated with Airflow.
### How to reproduce
Create a simple plugin in the plugin directory.
`appbuilder_views.py`
```
from flask_appbuilder import expose, BaseView as AppBuilderBaseView
# Creating a flask appbuilder BaseView
class TestAppBuilderBaseView(AppBuilderBaseView):
@expose("/")
def test(self):
return self.render_template("test_plugin/test.html", content="Hello galaxy!")
```
`plugin.py`
```
from airflow.plugins_manager import AirflowPlugin
from test.appbuilder_views import TestAppBuilderBaseView
class TestPlugin(AirflowPlugin):
name = "test"
appbuilder_views = [
{
"name": "Test View",
"category": "Test Plugin",
"view": TestAppBuilderBaseView()
}
]
```
Call the `/plugin` endpoint.
```
curl -X GET http://localhost:8080/api/v1/plugins\?limit\=1 \
-H 'Cache-Control: no-cache' \
--user "admin:admin"
```
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of airflow/api_connexion/endpoints/plugin_endpoint.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 from airflow.api_connexion import security
18 from airflow.api_connexion.parameters import check_limit, format_parameters
19 from airflow.api_connexion.schemas.plugin_schema import PluginCollection, plugin_collection_schema
20 from airflow.api_connexion.types import APIResponse
21 from airflow.plugins_manager import get_plugin_info
22 from airflow.security import permissions
23
24
25 @security.requires_access([(permissions.ACTION_CAN_READ, permissions.RESOURCE_PLUGIN)])
26 @format_parameters({"limit": check_limit})
27 def get_plugins(*, limit: int, offset: int = 0) -> APIResponse:
28 """Get plugins endpoint"""
29 plugins_info = get_plugin_info()
30 total_entries = len(plugins_info)
31 plugins_info = plugins_info[offset:]
32 plugins_info = plugins_info[:limit]
33 return plugin_collection_schema.dump(PluginCollection(plugins=plugins_info, total_entries=total_entries))
34
[end of airflow/api_connexion/endpoints/plugin_endpoint.py]
[start of airflow/api_connexion/schemas/plugin_schema.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17
18 from typing import List, NamedTuple
19
20 from marshmallow import Schema, fields
21
22
23 class PluginSchema(Schema):
24 """Plugin schema"""
25
26 number = fields.Int()
27 name = fields.String()
28 hooks = fields.List(fields.String())
29 executors = fields.List(fields.String())
30 macros = fields.List(fields.String())
31 flask_blueprints = fields.List(fields.String())
32 appbuilder_views = fields.List(fields.String())
33 appbuilder_menu_items = fields.List(fields.Dict())
34 global_operator_extra_links = fields.List(fields.String())
35 operator_extra_links = fields.List(fields.String())
36 source = fields.String()
37
38
39 class PluginCollection(NamedTuple):
40 """Plugin List"""
41
42 plugins: List
43 total_entries: int
44
45
46 class PluginCollectionSchema(Schema):
47 """Plugin Collection List"""
48
49 plugins = fields.List(fields.Nested(PluginSchema))
50 total_entries = fields.Int()
51
52
53 plugin_schema = PluginSchema()
54 plugin_collection_schema = PluginCollectionSchema()
55
[end of airflow/api_connexion/schemas/plugin_schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/airflow/api_connexion/endpoints/plugin_endpoint.py b/airflow/api_connexion/endpoints/plugin_endpoint.py
--- a/airflow/api_connexion/endpoints/plugin_endpoint.py
+++ b/airflow/api_connexion/endpoints/plugin_endpoint.py
@@ -27,7 +27,5 @@
def get_plugins(*, limit: int, offset: int = 0) -> APIResponse:
"""Get plugins endpoint"""
plugins_info = get_plugin_info()
- total_entries = len(plugins_info)
- plugins_info = plugins_info[offset:]
- plugins_info = plugins_info[:limit]
- return plugin_collection_schema.dump(PluginCollection(plugins=plugins_info, total_entries=total_entries))
+ collection = PluginCollection(plugins=plugins_info[offset:][:limit], total_entries=len(plugins_info))
+ return plugin_collection_schema.dump(collection)
diff --git a/airflow/api_connexion/schemas/plugin_schema.py b/airflow/api_connexion/schemas/plugin_schema.py
--- a/airflow/api_connexion/schemas/plugin_schema.py
+++ b/airflow/api_connexion/schemas/plugin_schema.py
@@ -23,16 +23,15 @@
class PluginSchema(Schema):
"""Plugin schema"""
- number = fields.Int()
name = fields.String()
hooks = fields.List(fields.String())
executors = fields.List(fields.String())
- macros = fields.List(fields.String())
- flask_blueprints = fields.List(fields.String())
- appbuilder_views = fields.List(fields.String())
+ macros = fields.List(fields.Dict())
+ flask_blueprints = fields.List(fields.Dict())
+ appbuilder_views = fields.List(fields.Dict())
appbuilder_menu_items = fields.List(fields.Dict())
- global_operator_extra_links = fields.List(fields.String())
- operator_extra_links = fields.List(fields.String())
+ global_operator_extra_links = fields.List(fields.Dict())
+ operator_extra_links = fields.List(fields.Dict())
source = fields.String()
|
{"golden_diff": "diff --git a/airflow/api_connexion/endpoints/plugin_endpoint.py b/airflow/api_connexion/endpoints/plugin_endpoint.py\n--- a/airflow/api_connexion/endpoints/plugin_endpoint.py\n+++ b/airflow/api_connexion/endpoints/plugin_endpoint.py\n@@ -27,7 +27,5 @@\n def get_plugins(*, limit: int, offset: int = 0) -> APIResponse:\n \"\"\"Get plugins endpoint\"\"\"\n plugins_info = get_plugin_info()\n- total_entries = len(plugins_info)\n- plugins_info = plugins_info[offset:]\n- plugins_info = plugins_info[:limit]\n- return plugin_collection_schema.dump(PluginCollection(plugins=plugins_info, total_entries=total_entries))\n+ collection = PluginCollection(plugins=plugins_info[offset:][:limit], total_entries=len(plugins_info))\n+ return plugin_collection_schema.dump(collection)\ndiff --git a/airflow/api_connexion/schemas/plugin_schema.py b/airflow/api_connexion/schemas/plugin_schema.py\n--- a/airflow/api_connexion/schemas/plugin_schema.py\n+++ b/airflow/api_connexion/schemas/plugin_schema.py\n@@ -23,16 +23,15 @@\n class PluginSchema(Schema):\n \"\"\"Plugin schema\"\"\"\n \n- number = fields.Int()\n name = fields.String()\n hooks = fields.List(fields.String())\n executors = fields.List(fields.String())\n- macros = fields.List(fields.String())\n- flask_blueprints = fields.List(fields.String())\n- appbuilder_views = fields.List(fields.String())\n+ macros = fields.List(fields.Dict())\n+ flask_blueprints = fields.List(fields.Dict())\n+ appbuilder_views = fields.List(fields.Dict())\n appbuilder_menu_items = fields.List(fields.Dict())\n- global_operator_extra_links = fields.List(fields.String())\n- operator_extra_links = fields.List(fields.String())\n+ global_operator_extra_links = fields.List(fields.Dict())\n+ operator_extra_links = fields.List(fields.Dict())\n source = fields.String()\n", "issue": "API server /plugin crashes\n### Apache Airflow version\r\n\r\n2.3.3 (latest released)\r\n\r\n### What happened\r\n\r\nThe `/plugins` endpoint returned a 500 http status code.\r\n\r\n```\r\ncurl -X GET http://localhost:8080/api/v1/plugins\\?limit\\=1 \\\r\n -H 'Cache-Control: no-cache' \\\r\n --user \"admin:admin\"\r\n{\r\n \"detail\": \"\\\"{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}\\\" is not of type 'object'\\n\\nFailed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:\\n {'nullable': True, 'type': 'object'}\\n\\nOn instance['plugins'][0]['appbuilder_views'][0]:\\n (\\\"{'name': 'Test View', 'category': 'Test Plugin', 'view': \\\"\\n \\\"'test.appbuilder_views.TestAppBuilderBaseView'}\\\")\",\r\n \"status\": 500,\r\n \"title\": \"Response body does not conform to specification\",\r\n \"type\": \"http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/Unknown\"\r\n}\r\n```\r\n\r\nThe error message in the webserver is as followed\r\n\r\n```\r\n[2022-08-03 17:07:57,705] {validation.py:244} ERROR - http://localhost:8080/api/v1/plugins?limit=1 validation error: \"{'name': 'Test View', 'category': 'Test Plugin', 'view': 'test.appbuilder_views.TestAppBuilderBaseView'}\" is not of type 'object'\r\n\r\nFailed validating 'type' in schema['allOf'][0]['properties']['plugins']['items']['properties']['appbuilder_views']['items']:\r\n {'nullable': True, 'type': 'object'}\r\n\r\nOn instance['plugins'][0]['appbuilder_views'][0]:\r\n (\"{'name': 'Test View', 'category': 'Test Plugin', 'view': \"\r\n 
\"'test.appbuilder_views.TestAppBuilderBaseView'}\")\r\n172.18.0.1 - admin [03/Aug/2022:17:10:17 +0000] \"GET /api/v1/plugins?limit=1 HTTP/1.1\" 500 733 \"-\" \"curl/7.79.1\"\r\n```\r\n\r\n### What you think should happen instead\r\n\r\nThe response should contain all the plugins integrated with Airflow.\r\n\r\n### How to reproduce\r\n\r\nCreate a simple plugin in the plugin directory.\r\n\r\n`appbuilder_views.py`\r\n\r\n```\r\nfrom flask_appbuilder import expose, BaseView as AppBuilderBaseView\r\n\r\n\r\n# Creating a flask appbuilder BaseView\r\nclass TestAppBuilderBaseView(AppBuilderBaseView):\r\n @expose(\"/\")\r\n def test(self):\r\n return self.render_template(\"test_plugin/test.html\", content=\"Hello galaxy!\")\r\n\r\n```\r\n\r\n`plugin.py`\r\n\r\n```\r\nfrom airflow.plugins_manager import AirflowPlugin\r\nfrom test.appbuilder_views import TestAppBuilderBaseView\r\n\r\n\r\nclass TestPlugin(AirflowPlugin):\r\n name = \"test\"\r\n\r\n appbuilder_views = [\r\n {\r\n \"name\": \"Test View\",\r\n \"category\": \"Test Plugin\",\r\n \"view\": TestAppBuilderBaseView()\r\n }\r\n ]\r\n\r\n```\r\n\r\nCall the `/plugin` endpoint.\r\n\r\n```\r\ncurl -X GET http://localhost:8080/api/v1/plugins\\?limit\\=1 \\\r\n -H 'Cache-Control: no-cache' \\\r\n --user \"admin:admin\"\r\n```\r\n\r\n### Operating System\r\n\r\nN/A\r\n\r\n### Versions of Apache Airflow Providers\r\n\r\n_No response_\r\n\r\n### Deployment\r\n\r\nAstronomer\r\n\r\n### Deployment details\r\n\r\n_No response_\r\n\r\n### Anything else\r\n\r\n_No response_\r\n\r\n### Are you willing to submit PR?\r\n\r\n- [X] Yes I am willing to submit a PR!\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom airflow.api_connexion import security\nfrom airflow.api_connexion.parameters import check_limit, format_parameters\nfrom airflow.api_connexion.schemas.plugin_schema import PluginCollection, plugin_collection_schema\nfrom airflow.api_connexion.types import APIResponse\nfrom airflow.plugins_manager import get_plugin_info\nfrom airflow.security import permissions\n\n\[email protected]_access([(permissions.ACTION_CAN_READ, permissions.RESOURCE_PLUGIN)])\n@format_parameters({\"limit\": check_limit})\ndef get_plugins(*, limit: int, offset: int = 0) -> APIResponse:\n \"\"\"Get plugins endpoint\"\"\"\n plugins_info = get_plugin_info()\n total_entries = len(plugins_info)\n plugins_info = plugins_info[offset:]\n plugins_info = plugins_info[:limit]\n return plugin_collection_schema.dump(PluginCollection(plugins=plugins_info, total_entries=total_entries))\n", "path": "airflow/api_connexion/endpoints/plugin_endpoint.py"}, {"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom typing import List, NamedTuple\n\nfrom marshmallow import Schema, fields\n\n\nclass PluginSchema(Schema):\n \"\"\"Plugin schema\"\"\"\n\n number = fields.Int()\n name = fields.String()\n hooks = fields.List(fields.String())\n executors = fields.List(fields.String())\n macros = fields.List(fields.String())\n flask_blueprints = fields.List(fields.String())\n appbuilder_views = fields.List(fields.String())\n appbuilder_menu_items = fields.List(fields.Dict())\n global_operator_extra_links = fields.List(fields.String())\n operator_extra_links = fields.List(fields.String())\n source = fields.String()\n\n\nclass PluginCollection(NamedTuple):\n \"\"\"Plugin List\"\"\"\n\n plugins: List\n total_entries: int\n\n\nclass PluginCollectionSchema(Schema):\n \"\"\"Plugin Collection List\"\"\"\n\n plugins = fields.List(fields.Nested(PluginSchema))\n total_entries = fields.Int()\n\n\nplugin_schema = PluginSchema()\nplugin_collection_schema = PluginCollectionSchema()\n", "path": "airflow/api_connexion/schemas/plugin_schema.py"}]}
| 2,355 | 422 |
gh_patches_debug_37428
|
rasdani/github-patches
|
git_diff
|
spack__spack-15179
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove 'spack bootstrap' from the commands
As a Spack maintainer I want to remove the `spack bootstrap` command (outdated since #14062) so that I could reduce the amount of boilerplate code in the project.
### Rationale
The `spack bootstrap` command was used to "Bootstrap packages needed for spack to run smoothly", but in reality it has always just installed `environment-modules~X`. Since #14062, shell integration doesn't require `environment-modules` anymore, making the command outdated. I would therefore remove that command from the code base.
### Description
Just remove the command and any test / package associated only with it.
### Additional information
Opening the issue to check what is the consensus towards this.
</issue>
<code>
[start of lib/spack/spack/cmd/bootstrap.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import llnl.util.cpu
7 import llnl.util.tty as tty
8
9 import spack.repo
10 import spack.spec
11 import spack.cmd.common.arguments as arguments
12
13 description = "Bootstrap packages needed for spack to run smoothly"
14 section = "admin"
15 level = "long"
16
17
18 def setup_parser(subparser):
19 arguments.add_common_arguments(subparser, ['jobs'])
20 subparser.add_argument(
21 '--keep-prefix', action='store_true', dest='keep_prefix',
22 help="don't remove the install prefix if installation fails")
23 subparser.add_argument(
24 '--keep-stage', action='store_true', dest='keep_stage',
25 help="don't remove the build stage if installation succeeds")
26 arguments.add_common_arguments(subparser, ['no_checksum'])
27 subparser.add_argument(
28 '-v', '--verbose', action='store_true', dest='verbose',
29 help="display verbose build output while installing")
30
31 cache_group = subparser.add_mutually_exclusive_group()
32 cache_group.add_argument(
33 '--use-cache', action='store_true', dest='use_cache', default=True,
34 help="check for pre-built Spack packages in mirrors (default)")
35 cache_group.add_argument(
36 '--no-cache', action='store_false', dest='use_cache', default=True,
37 help="do not check for pre-built Spack packages in mirrors")
38 cache_group.add_argument(
39 '--cache-only', action='store_true', dest='cache_only', default=False,
40 help="only install package from binary mirrors")
41
42 cd_group = subparser.add_mutually_exclusive_group()
43 arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
44
45
46 def bootstrap(parser, args, **kwargs):
47 kwargs.update({
48 'keep_prefix': args.keep_prefix,
49 'keep_stage': args.keep_stage,
50 'install_deps': 'dependencies',
51 'verbose': args.verbose,
52 'dirty': args.dirty,
53 'use_cache': args.use_cache,
54 'cache_only': args.cache_only
55 })
56
57 # Define requirement dictionary defining general specs which need
58 # to be satisfied, and the specs to install when the general spec
59 # isn't satisfied.
60 requirement_dict = {
61 # Install environment-modules with generic optimizations
62 'environment-modules': 'environment-modules~X target={0}'.format(
63 llnl.util.cpu.host().family
64 )
65 }
66
67 for requirement in requirement_dict:
68 installed_specs = spack.store.db.query(requirement)
69 if(len(installed_specs) > 0):
70 tty.msg("Requirement %s is satisfied with installed "
71 "package %s" % (requirement, installed_specs[0]))
72 else:
73 # Install requirement
74 spec_to_install = spack.spec.Spec(requirement_dict[requirement])
75 spec_to_install.concretize()
76 tty.msg("Installing %s to satisfy requirement for %s" %
77 (spec_to_install, requirement))
78 kwargs['explicit'] = True
79 package = spack.repo.get(spec_to_install)
80 package.do_install(**kwargs)
81
[end of lib/spack/spack/cmd/bootstrap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/spack/spack/cmd/bootstrap.py b/lib/spack/spack/cmd/bootstrap.py
deleted file mode 100644
--- a/lib/spack/spack/cmd/bootstrap.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-import llnl.util.cpu
-import llnl.util.tty as tty
-
-import spack.repo
-import spack.spec
-import spack.cmd.common.arguments as arguments
-
-description = "Bootstrap packages needed for spack to run smoothly"
-section = "admin"
-level = "long"
-
-
-def setup_parser(subparser):
- arguments.add_common_arguments(subparser, ['jobs'])
- subparser.add_argument(
- '--keep-prefix', action='store_true', dest='keep_prefix',
- help="don't remove the install prefix if installation fails")
- subparser.add_argument(
- '--keep-stage', action='store_true', dest='keep_stage',
- help="don't remove the build stage if installation succeeds")
- arguments.add_common_arguments(subparser, ['no_checksum'])
- subparser.add_argument(
- '-v', '--verbose', action='store_true', dest='verbose',
- help="display verbose build output while installing")
-
- cache_group = subparser.add_mutually_exclusive_group()
- cache_group.add_argument(
- '--use-cache', action='store_true', dest='use_cache', default=True,
- help="check for pre-built Spack packages in mirrors (default)")
- cache_group.add_argument(
- '--no-cache', action='store_false', dest='use_cache', default=True,
- help="do not check for pre-built Spack packages in mirrors")
- cache_group.add_argument(
- '--cache-only', action='store_true', dest='cache_only', default=False,
- help="only install package from binary mirrors")
-
- cd_group = subparser.add_mutually_exclusive_group()
- arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
-
-
-def bootstrap(parser, args, **kwargs):
- kwargs.update({
- 'keep_prefix': args.keep_prefix,
- 'keep_stage': args.keep_stage,
- 'install_deps': 'dependencies',
- 'verbose': args.verbose,
- 'dirty': args.dirty,
- 'use_cache': args.use_cache,
- 'cache_only': args.cache_only
- })
-
- # Define requirement dictionary defining general specs which need
- # to be satisfied, and the specs to install when the general spec
- # isn't satisfied.
- requirement_dict = {
- # Install environment-modules with generic optimizations
- 'environment-modules': 'environment-modules~X target={0}'.format(
- llnl.util.cpu.host().family
- )
- }
-
- for requirement in requirement_dict:
- installed_specs = spack.store.db.query(requirement)
- if(len(installed_specs) > 0):
- tty.msg("Requirement %s is satisfied with installed "
- "package %s" % (requirement, installed_specs[0]))
- else:
- # Install requirement
- spec_to_install = spack.spec.Spec(requirement_dict[requirement])
- spec_to_install.concretize()
- tty.msg("Installing %s to satisfy requirement for %s" %
- (spec_to_install, requirement))
- kwargs['explicit'] = True
- package = spack.repo.get(spec_to_install)
- package.do_install(**kwargs)
|
{"golden_diff": "diff --git a/lib/spack/spack/cmd/bootstrap.py b/lib/spack/spack/cmd/bootstrap.py\ndeleted file mode 100644\n--- a/lib/spack/spack/cmd/bootstrap.py\n+++ /dev/null\n@@ -1,80 +0,0 @@\n-# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n-# Spack Project Developers. See the top-level COPYRIGHT file for details.\n-#\n-# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n-\n-import llnl.util.cpu\n-import llnl.util.tty as tty\n-\n-import spack.repo\n-import spack.spec\n-import spack.cmd.common.arguments as arguments\n-\n-description = \"Bootstrap packages needed for spack to run smoothly\"\n-section = \"admin\"\n-level = \"long\"\n-\n-\n-def setup_parser(subparser):\n- arguments.add_common_arguments(subparser, ['jobs'])\n- subparser.add_argument(\n- '--keep-prefix', action='store_true', dest='keep_prefix',\n- help=\"don't remove the install prefix if installation fails\")\n- subparser.add_argument(\n- '--keep-stage', action='store_true', dest='keep_stage',\n- help=\"don't remove the build stage if installation succeeds\")\n- arguments.add_common_arguments(subparser, ['no_checksum'])\n- subparser.add_argument(\n- '-v', '--verbose', action='store_true', dest='verbose',\n- help=\"display verbose build output while installing\")\n-\n- cache_group = subparser.add_mutually_exclusive_group()\n- cache_group.add_argument(\n- '--use-cache', action='store_true', dest='use_cache', default=True,\n- help=\"check for pre-built Spack packages in mirrors (default)\")\n- cache_group.add_argument(\n- '--no-cache', action='store_false', dest='use_cache', default=True,\n- help=\"do not check for pre-built Spack packages in mirrors\")\n- cache_group.add_argument(\n- '--cache-only', action='store_true', dest='cache_only', default=False,\n- help=\"only install package from binary mirrors\")\n-\n- cd_group = subparser.add_mutually_exclusive_group()\n- arguments.add_common_arguments(cd_group, ['clean', 'dirty'])\n-\n-\n-def bootstrap(parser, args, **kwargs):\n- kwargs.update({\n- 'keep_prefix': args.keep_prefix,\n- 'keep_stage': args.keep_stage,\n- 'install_deps': 'dependencies',\n- 'verbose': args.verbose,\n- 'dirty': args.dirty,\n- 'use_cache': args.use_cache,\n- 'cache_only': args.cache_only\n- })\n-\n- # Define requirement dictionary defining general specs which need\n- # to be satisfied, and the specs to install when the general spec\n- # isn't satisfied.\n- requirement_dict = {\n- # Install environment-modules with generic optimizations\n- 'environment-modules': 'environment-modules~X target={0}'.format(\n- llnl.util.cpu.host().family\n- )\n- }\n-\n- for requirement in requirement_dict:\n- installed_specs = spack.store.db.query(requirement)\n- if(len(installed_specs) > 0):\n- tty.msg(\"Requirement %s is satisfied with installed \"\n- \"package %s\" % (requirement, installed_specs[0]))\n- else:\n- # Install requirement\n- spec_to_install = spack.spec.Spec(requirement_dict[requirement])\n- spec_to_install.concretize()\n- tty.msg(\"Installing %s to satisfy requirement for %s\" %\n- (spec_to_install, requirement))\n- kwargs['explicit'] = True\n- package = spack.repo.get(spec_to_install)\n- package.do_install(**kwargs)\n", "issue": "Remove 'spack bootstrap' from the commands\nAs a Spack maintainer I want to remove the `spack bootstrap` command (outdated since #14062) so that I could reduce the amount of boilerplate code in the project.\r\n\r\n### Rationale\r\n\r\nThe `spack bootstrap` command was used to \"Bootstrap packages needed for spack to run smoothly\" and in reality it has 
always just installed `environment-modules~X`. Since #14062 shell integration doesn't require `environment-modules` anymore making the command outdated. I would therefore remove that command from the code base.\r\n\r\n### Description\r\n\r\nJust remove the command and any test / package associated only with it.\r\n\r\n\r\n### Additional information\r\n\r\nOpening the issue to check what is the consensus towards this.\r\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport llnl.util.cpu\nimport llnl.util.tty as tty\n\nimport spack.repo\nimport spack.spec\nimport spack.cmd.common.arguments as arguments\n\ndescription = \"Bootstrap packages needed for spack to run smoothly\"\nsection = \"admin\"\nlevel = \"long\"\n\n\ndef setup_parser(subparser):\n arguments.add_common_arguments(subparser, ['jobs'])\n subparser.add_argument(\n '--keep-prefix', action='store_true', dest='keep_prefix',\n help=\"don't remove the install prefix if installation fails\")\n subparser.add_argument(\n '--keep-stage', action='store_true', dest='keep_stage',\n help=\"don't remove the build stage if installation succeeds\")\n arguments.add_common_arguments(subparser, ['no_checksum'])\n subparser.add_argument(\n '-v', '--verbose', action='store_true', dest='verbose',\n help=\"display verbose build output while installing\")\n\n cache_group = subparser.add_mutually_exclusive_group()\n cache_group.add_argument(\n '--use-cache', action='store_true', dest='use_cache', default=True,\n help=\"check for pre-built Spack packages in mirrors (default)\")\n cache_group.add_argument(\n '--no-cache', action='store_false', dest='use_cache', default=True,\n help=\"do not check for pre-built Spack packages in mirrors\")\n cache_group.add_argument(\n '--cache-only', action='store_true', dest='cache_only', default=False,\n help=\"only install package from binary mirrors\")\n\n cd_group = subparser.add_mutually_exclusive_group()\n arguments.add_common_arguments(cd_group, ['clean', 'dirty'])\n\n\ndef bootstrap(parser, args, **kwargs):\n kwargs.update({\n 'keep_prefix': args.keep_prefix,\n 'keep_stage': args.keep_stage,\n 'install_deps': 'dependencies',\n 'verbose': args.verbose,\n 'dirty': args.dirty,\n 'use_cache': args.use_cache,\n 'cache_only': args.cache_only\n })\n\n # Define requirement dictionary defining general specs which need\n # to be satisfied, and the specs to install when the general spec\n # isn't satisfied.\n requirement_dict = {\n # Install environment-modules with generic optimizations\n 'environment-modules': 'environment-modules~X target={0}'.format(\n llnl.util.cpu.host().family\n )\n }\n\n for requirement in requirement_dict:\n installed_specs = spack.store.db.query(requirement)\n if(len(installed_specs) > 0):\n tty.msg(\"Requirement %s is satisfied with installed \"\n \"package %s\" % (requirement, installed_specs[0]))\n else:\n # Install requirement\n spec_to_install = spack.spec.Spec(requirement_dict[requirement])\n spec_to_install.concretize()\n tty.msg(\"Installing %s to satisfy requirement for %s\" %\n (spec_to_install, requirement))\n kwargs['explicit'] = True\n package = spack.repo.get(spec_to_install)\n package.do_install(**kwargs)\n", "path": "lib/spack/spack/cmd/bootstrap.py"}]}
| 1,549 | 811 |
gh_patches_debug_29186
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6368
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`meltano lock` -> variant 'original' is not known to Meltano
I'm getting the following error when I'm trying to run a `meltano lock --all` on [the squared project](https://github.com/meltano/squared/blob/4d0151cf714d9158617d6d76757d21a29c3a38f7/data/extract/extractors.meltano.yml#L63) (note, I had first manually updated it to remove the executable and namespace which were defined in my project). When I commented out the `inherit_from` children it works, the plugin is locked; uncommenting the children again makes it fail again, so I'm pretty sure it's related to the inheriting.
```
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py", line 521, in get_parent
parent, source = self.find_parent(plugin)
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py", line 510, in find_parent
raise error
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py", line 505, in find_parent
return (self._get_parent_from_hub(plugin), DefinitionSource.HUB)
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py", line 441, in _get_parent_from_hub
return self.hub_service.get_base_plugin(plugin, variant_name=plugin.variant)
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/plugin_discovery_service.py", line 136, in get_base_plugin
plugin = project_plugin.custom_definition or self.find_definition(
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/hub/client.py", line 189, in find_definition
raise HubPluginVariantNotFound(
meltano.core.hub.client.HubPluginVariantNotFound: Extractor 'tap-github' variant 'original' is not known to Meltano. Variants: ['singer-io (default)', 'meltanolabs']
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/cli/__init__.py", line 74, in _run_cli
raise CliError(str(err)) from err
meltano.cli.utils.CliError: Extractor 'tap-github' variant 'original' is not known to Meltano. Variants: ['singer-io (default)', 'meltanolabs']
Extractor 'tap-github' variant 'original' is not known to Meltano. Variants: ['singer-io (default)', 'meltanolabs']
```
</issue>
<code>
[start of src/meltano/cli/lock.py]
1 """Lock command."""
2
3 from __future__ import annotations
4
5 from typing import TYPE_CHECKING
6
7 import click
8 import structlog
9
10 from meltano.core.plugin import PluginType
11 from meltano.core.plugin_lock_service import (
12 LockfileAlreadyExistsError,
13 PluginLockService,
14 )
15 from meltano.core.project_plugins_service import DefinitionSource, ProjectPluginsService
16 from meltano.core.tracking import CliEvent, PluginsTrackingContext
17
18 from . import CliError, cli
19 from .params import pass_project
20 from .utils import InstrumentedCmd
21
22 if TYPE_CHECKING:
23 from meltano.core.project import Project
24
25
26 __all__ = ["lock"]
27 logger = structlog.get_logger(__name__)
28
29
30 @cli.command(cls=InstrumentedCmd, short_help="Lock plugin definitions.")
31 @click.option(
32 "--all",
33 "all_plugins",
34 is_flag=True,
35 help="Lock all the plugins of the project.",
36 )
37 @click.option(
38 "--plugin-type",
39 type=click.Choice(PluginType.cli_arguments()),
40 help="Lock only the plugins of the given type.",
41 )
42 @click.argument("plugin_name", nargs=-1, required=False)
43 @click.option("--update", "-u", is_flag=True, help="Update the lock file.")
44 @click.pass_context
45 @pass_project()
46 def lock(
47 project: Project,
48 ctx: click.Context,
49 all_plugins: bool,
50 plugin_type: str | None,
51 plugin_name: tuple[str, ...],
52 update: bool,
53 ):
54 """Lock plugin definitions.
55
56 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#lock
57 """
58 tracker = ctx.obj["tracker"]
59
60 lock_service = PluginLockService(project)
61 plugins_service = ProjectPluginsService(project)
62
63 if (all_plugins and plugin_name) or not (all_plugins or plugin_name):
64 tracker.track_command_event(CliEvent.aborted)
65 raise CliError("Exactly one of --all or plugin name must be specified.")
66
67 with plugins_service.use_preferred_source(DefinitionSource.HUB):
68 try:
69 # Make it a list so source preference is not lazily evaluated.
70 plugins = list(plugins_service.plugins())
71
72 except Exception:
73 tracker.track_command_event(CliEvent.aborted)
74 raise
75
76 if plugin_name:
77 plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
78
79 if plugin_type:
80 plugin_type = PluginType.from_cli_argument(plugin_type)
81 plugins = [plugin for plugin in plugins if plugin.type == plugin_type]
82
83 tracked_plugins = []
84
85 for plugin in plugins:
86 descriptor = f"{plugin.type.descriptor} {plugin.name}"
87 if plugin.is_custom():
88 click.secho(f"{descriptor.capitalize()} is a custom plugin", fg="yellow")
89 else:
90 try:
91 lock_service.save(plugin, exists_ok=update)
92 except LockfileAlreadyExistsError as err:
93 relative_path = err.path.relative_to(project.root)
94 click.secho(
95 f"Lockfile exists for {descriptor} at {relative_path}",
96 fg="red",
97 )
98 continue
99
100 tracked_plugins.append((plugin, None))
101 click.secho(f"Locked definition for {descriptor}", fg="green")
102
103 tracker.add_contexts(PluginsTrackingContext(tracked_plugins))
104 tracker.track_command_event(CliEvent.completed)
105
[end of src/meltano/cli/lock.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/cli/lock.py b/src/meltano/cli/lock.py
--- a/src/meltano/cli/lock.py
+++ b/src/meltano/cli/lock.py
@@ -64,14 +64,12 @@
tracker.track_command_event(CliEvent.aborted)
raise CliError("Exactly one of --all or plugin name must be specified.")
- with plugins_service.use_preferred_source(DefinitionSource.HUB):
- try:
- # Make it a list so source preference is not lazily evaluated.
- plugins = list(plugins_service.plugins())
-
- except Exception:
- tracker.track_command_event(CliEvent.aborted)
- raise
+ try:
+ # Make it a list so source preference is not lazily evaluated.
+ plugins = list(plugins_service.plugins())
+ except Exception:
+ tracker.track_command_event(CliEvent.aborted)
+ raise
if plugin_name:
plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
@@ -86,7 +84,14 @@
descriptor = f"{plugin.type.descriptor} {plugin.name}"
if plugin.is_custom():
click.secho(f"{descriptor.capitalize()} is a custom plugin", fg="yellow")
+ elif plugin.inherit_from is not None:
+ click.secho(
+ f"{descriptor.capitalize()} is an inherited plugin", fg="yellow"
+ )
else:
+ plugin.parent = None
+ with plugins_service.use_preferred_source(DefinitionSource.HUB):
+ plugin = plugins_service.ensure_parent(plugin)
try:
lock_service.save(plugin, exists_ok=update)
except LockfileAlreadyExistsError as err:
|
{"golden_diff": "diff --git a/src/meltano/cli/lock.py b/src/meltano/cli/lock.py\n--- a/src/meltano/cli/lock.py\n+++ b/src/meltano/cli/lock.py\n@@ -64,14 +64,12 @@\n tracker.track_command_event(CliEvent.aborted)\n raise CliError(\"Exactly one of --all or plugin name must be specified.\")\n \n- with plugins_service.use_preferred_source(DefinitionSource.HUB):\n- try:\n- # Make it a list so source preference is not lazily evaluated.\n- plugins = list(plugins_service.plugins())\n-\n- except Exception:\n- tracker.track_command_event(CliEvent.aborted)\n- raise\n+ try:\n+ # Make it a list so source preference is not lazily evaluated.\n+ plugins = list(plugins_service.plugins())\n+ except Exception:\n+ tracker.track_command_event(CliEvent.aborted)\n+ raise\n \n if plugin_name:\n plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n@@ -86,7 +84,14 @@\n descriptor = f\"{plugin.type.descriptor} {plugin.name}\"\n if plugin.is_custom():\n click.secho(f\"{descriptor.capitalize()} is a custom plugin\", fg=\"yellow\")\n+ elif plugin.inherit_from is not None:\n+ click.secho(\n+ f\"{descriptor.capitalize()} is an inherited plugin\", fg=\"yellow\"\n+ )\n else:\n+ plugin.parent = None\n+ with plugins_service.use_preferred_source(DefinitionSource.HUB):\n+ plugin = plugins_service.ensure_parent(plugin)\n try:\n lock_service.save(plugin, exists_ok=update)\n except LockfileAlreadyExistsError as err:\n", "issue": "`meltano lock` -> variant 'original' is not known to Meltano\nI'm getting the following error when I'm trying to run a `meltano lock --all` on [the squared project](https://github.com/meltano/squared/blob/4d0151cf714d9158617d6d76757d21a29c3a38f7/data/extract/extractors.meltano.yml#L63) (note, I did first manually updated to remove executable and namespace which were defined in my project). When I commented out the `inherit_from` children it works, the plugin is locked, then uncommenting the children again makes it fail again so I'm pretty sure its related to the inheriting.\r\n\r\n```\r\nFile \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py\", line 521, in get_parent\r\n parent, source = self.find_parent(plugin)\r\n File \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py\", line 510, in find_parent\r\n raise error\r\n File \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py\", line 505, in find_parent\r\n return (self._get_parent_from_hub(plugin), DefinitionSource.HUB)\r\n File \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/project_plugins_service.py\", line 441, in _get_parent_from_hub\r\n return self.hub_service.get_base_plugin(plugin, variant_name=plugin.variant)\r\n File \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/plugin_discovery_service.py\", line 136, in get_base_plugin\r\n plugin = project_plugin.custom_definition or self.find_definition(\r\n File \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/core/hub/client.py\", line 189, in find_definition\r\n raise HubPluginVariantNotFound(\r\nmeltano.core.hub.client.HubPluginVariantNotFound: Extractor 'tap-github' variant 'original' is not known to Meltano. 
Variants: ['singer-io (default)', 'meltanolabs']\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/pnadolny/.virtualenvs/meltano/lib/python3.8/site-packages/meltano/cli/__init__.py\", line 74, in _run_cli\r\n raise CliError(str(err)) from err\r\nmeltano.cli.utils.CliError: Extractor 'tap-github' variant 'original' is not known to Meltano. Variants: ['singer-io (default)', 'meltanolabs']\r\nExtractor 'tap-github' variant 'original' is not known to Meltano. Variants: ['singer-io (default)', 'meltanolabs']\r\n```\n", "before_files": [{"content": "\"\"\"Lock command.\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport click\nimport structlog\n\nfrom meltano.core.plugin import PluginType\nfrom meltano.core.plugin_lock_service import (\n LockfileAlreadyExistsError,\n PluginLockService,\n)\nfrom meltano.core.project_plugins_service import DefinitionSource, ProjectPluginsService\nfrom meltano.core.tracking import CliEvent, PluginsTrackingContext\n\nfrom . import CliError, cli\nfrom .params import pass_project\nfrom .utils import InstrumentedCmd\n\nif TYPE_CHECKING:\n from meltano.core.project import Project\n\n\n__all__ = [\"lock\"]\nlogger = structlog.get_logger(__name__)\n\n\[email protected](cls=InstrumentedCmd, short_help=\"Lock plugin definitions.\")\[email protected](\n \"--all\",\n \"all_plugins\",\n is_flag=True,\n help=\"Lock all the plugins of the project.\",\n)\[email protected](\n \"--plugin-type\",\n type=click.Choice(PluginType.cli_arguments()),\n help=\"Lock only the plugins of the given type.\",\n)\[email protected](\"plugin_name\", nargs=-1, required=False)\[email protected](\"--update\", \"-u\", is_flag=True, help=\"Update the lock file.\")\[email protected]_context\n@pass_project()\ndef lock(\n project: Project,\n ctx: click.Context,\n all_plugins: bool,\n plugin_type: str | None,\n plugin_name: tuple[str, ...],\n update: bool,\n):\n \"\"\"Lock plugin definitions.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#lock\n \"\"\"\n tracker = ctx.obj[\"tracker\"]\n\n lock_service = PluginLockService(project)\n plugins_service = ProjectPluginsService(project)\n\n if (all_plugins and plugin_name) or not (all_plugins or plugin_name):\n tracker.track_command_event(CliEvent.aborted)\n raise CliError(\"Exactly one of --all or plugin name must be specified.\")\n\n with plugins_service.use_preferred_source(DefinitionSource.HUB):\n try:\n # Make it a list so source preference is not lazily evaluated.\n plugins = list(plugins_service.plugins())\n\n except Exception:\n tracker.track_command_event(CliEvent.aborted)\n raise\n\n if plugin_name:\n plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n\n if plugin_type:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n plugins = [plugin for plugin in plugins if plugin.type == plugin_type]\n\n tracked_plugins = []\n\n for plugin in plugins:\n descriptor = f\"{plugin.type.descriptor} {plugin.name}\"\n if plugin.is_custom():\n click.secho(f\"{descriptor.capitalize()} is a custom plugin\", fg=\"yellow\")\n else:\n try:\n lock_service.save(plugin, exists_ok=update)\n except LockfileAlreadyExistsError as err:\n relative_path = err.path.relative_to(project.root)\n click.secho(\n f\"Lockfile exists for {descriptor} at {relative_path}\",\n fg=\"red\",\n )\n continue\n\n tracked_plugins.append((plugin, None))\n click.secho(f\"Locked definition for {descriptor}\", 
fg=\"green\")\n\n tracker.add_contexts(PluginsTrackingContext(tracked_plugins))\n tracker.track_command_event(CliEvent.completed)\n", "path": "src/meltano/cli/lock.py"}]}
| 2,139 | 371 |
gh_patches_debug_716
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1541
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add readthedocs documentation tortoiseorm instrumentation
Part of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Instrument `tortoise-orm`_ to report SQL queries.
17
18 Usage
19 -----
20
21 .. code:: python
22
23 from fastapi import FastAPI
24 from tortoise.contrib.fastapi import register_tortoise
25 from opentelemetry.sdk.resources import SERVICE_NAME, Resource
26 from opentelemetry.sdk.trace import TracerProvider
27 from opentelemetry.instrumentation.tortoiseorm import TortoiseORMInstrumentor
28
29 app = FastAPI()
30 tracer = TracerProvider(resource=Resource({SERVICE_NAME: "FastAPI"}))
31 TortoiseORMInstrumentor().instrument(tracer_provider=tracer)
32
33 register_tortoise(
34 app,
35 db_url="sqlite://sample.db",
36 modules={"models": ["example_app.db_models"]}
37 )
38
39 API
40 ---
41 """
42 from typing import Collection
43
44 import wrapt
45
46 from opentelemetry import trace
47 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
48 from opentelemetry.instrumentation.tortoiseorm.package import _instruments
49 from opentelemetry.instrumentation.tortoiseorm.version import __version__
50 from opentelemetry.instrumentation.utils import unwrap
51 from opentelemetry.semconv.trace import DbSystemValues, SpanAttributes
52 from opentelemetry.trace import SpanKind
53 from opentelemetry.trace.status import Status, StatusCode
54
55 try:
56 import tortoise.backends.asyncpg.client
57
58 TORTOISE_POSTGRES_SUPPORT = True
59 except ModuleNotFoundError:
60 TORTOISE_POSTGRES_SUPPORT = False
61
62 try:
63 import tortoise.backends.mysql.client
64
65 TORTOISE_MYSQL_SUPPORT = True
66 except ModuleNotFoundError:
67 TORTOISE_MYSQL_SUPPORT = False
68
69 try:
70 import tortoise.backends.sqlite.client
71
72 TORTOISE_SQLITE_SUPPORT = True
73 except ModuleNotFoundError:
74 TORTOISE_SQLITE_SUPPORT = False
75
76 import tortoise.contrib.pydantic.base
77
78
79 class TortoiseORMInstrumentor(BaseInstrumentor):
80 """An instrumentor for Tortoise-ORM
81 See `BaseInstrumentor`
82 """
83
84 def instrumentation_dependencies(self) -> Collection[str]:
85 return _instruments
86
87 def _instrument(self, **kwargs):
88 """Instruments Tortoise ORM backend methods.
89 Args:
90 **kwargs: Optional arguments
91 ``tracer_provider``: a TracerProvider, defaults to global
92 ``capture_parameters``: set to True to capture SQL query parameters
93 Returns:
94 None
95 """
96 tracer_provider = kwargs.get("tracer_provider")
97 # pylint: disable=attribute-defined-outside-init
98 self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)
99 self.capture_parameters = kwargs.get("capture_parameters", False)
100 if TORTOISE_SQLITE_SUPPORT:
101 funcs = [
102 "SqliteClient.execute_many",
103 "SqliteClient.execute_query",
104 "SqliteClient.execute_insert",
105 "SqliteClient.execute_query_dict",
106 "SqliteClient.execute_script",
107 ]
108 for func in funcs:
109 wrapt.wrap_function_wrapper(
110 "tortoise.backends.sqlite.client",
111 func,
112 self._do_execute,
113 )
114
115 if TORTOISE_POSTGRES_SUPPORT:
116 funcs = [
117 "AsyncpgDBClient.execute_many",
118 "AsyncpgDBClient.execute_query",
119 "AsyncpgDBClient.execute_insert",
120 "AsyncpgDBClient.execute_query_dict",
121 "AsyncpgDBClient.execute_script",
122 ]
123 for func in funcs:
124 wrapt.wrap_function_wrapper(
125 "tortoise.backends.asyncpg.client",
126 func,
127 self._do_execute,
128 )
129
130 if TORTOISE_MYSQL_SUPPORT:
131 funcs = [
132 "MySQLClient.execute_many",
133 "MySQLClient.execute_query",
134 "MySQLClient.execute_insert",
135 "MySQLClient.execute_query_dict",
136 "MySQLClient.execute_script",
137 ]
138 for func in funcs:
139 wrapt.wrap_function_wrapper(
140 "tortoise.backends.mysql.client",
141 func,
142 self._do_execute,
143 )
144 wrapt.wrap_function_wrapper(
145 "tortoise.contrib.pydantic.base",
146 "PydanticModel.from_queryset",
147 self._from_queryset,
148 )
149 wrapt.wrap_function_wrapper(
150 "tortoise.contrib.pydantic.base",
151 "PydanticModel.from_queryset_single",
152 self._from_queryset,
153 )
154 wrapt.wrap_function_wrapper(
155 "tortoise.contrib.pydantic.base",
156 "PydanticListModel.from_queryset",
157 self._from_queryset,
158 )
159
160 def _uninstrument(self, **kwargs):
161 if TORTOISE_SQLITE_SUPPORT:
162 unwrap(
163 tortoise.backends.sqlite.client.SqliteClient, "execute_query"
164 )
165 unwrap(
166 tortoise.backends.sqlite.client.SqliteClient, "execute_many"
167 )
168 unwrap(
169 tortoise.backends.sqlite.client.SqliteClient, "execute_insert"
170 )
171 unwrap(
172 tortoise.backends.sqlite.client.SqliteClient,
173 "execute_query_dict",
174 )
175 unwrap(
176 tortoise.backends.sqlite.client.SqliteClient, "execute_script"
177 )
178 if TORTOISE_MYSQL_SUPPORT:
179 unwrap(tortoise.backends.mysql.client.MySQLClient, "execute_query")
180 unwrap(tortoise.backends.mysql.client.MySQLClient, "execute_many")
181 unwrap(
182 tortoise.backends.mysql.client.MySQLClient, "execute_insert"
183 )
184 unwrap(
185 tortoise.backends.mysql.client.MySQLClient,
186 "execute_query_dict",
187 )
188 unwrap(
189 tortoise.backends.mysql.client.MySQLClient, "execute_script"
190 )
191 if TORTOISE_POSTGRES_SUPPORT:
192 unwrap(
193 tortoise.backends.asyncpg.client.AsyncpgDBClient,
194 "execute_query",
195 )
196 unwrap(
197 tortoise.backends.asyncpg.client.AsyncpgDBClient,
198 "execute_many",
199 )
200 unwrap(
201 tortoise.backends.asyncpg.client.AsyncpgDBClient,
202 "execute_insert",
203 )
204 unwrap(
205 tortoise.backends.asyncpg.client.AsyncpgDBClient,
206 "execute_query_dict",
207 )
208 unwrap(
209 tortoise.backends.asyncpg.client.AsyncpgDBClient,
210 "execute_script",
211 )
212 unwrap(tortoise.contrib.pydantic.base.PydanticModel, "from_queryset")
213 unwrap(
214 tortoise.contrib.pydantic.base.PydanticModel,
215 "from_queryset_single",
216 )
217 unwrap(
218 tortoise.contrib.pydantic.base.PydanticListModel, "from_queryset"
219 )
220
221 def _hydrate_span_from_args(self, connection, query, parameters) -> dict:
222 """Get network and database attributes from connection."""
223 span_attributes = {}
224 capabilities = getattr(connection, "capabilities", None)
225 if capabilities is not None:
226 if capabilities.dialect == "sqlite":
227 span_attributes[
228 SpanAttributes.DB_SYSTEM
229 ] = DbSystemValues.SQLITE.value
230 elif capabilities.dialect == "postgres":
231 span_attributes[
232 SpanAttributes.DB_SYSTEM
233 ] = DbSystemValues.POSTGRESQL.value
234 elif capabilities.dialect == "mysql":
235 span_attributes[
236 SpanAttributes.DB_SYSTEM
237 ] = DbSystemValues.MYSQL.value
238 dbname = getattr(connection, "filename", None)
239 if dbname:
240 span_attributes[SpanAttributes.DB_NAME] = dbname
241 dbname = getattr(connection, "database", None)
242 if dbname:
243 span_attributes[SpanAttributes.DB_NAME] = dbname
244 if query is not None:
245 span_attributes[SpanAttributes.DB_STATEMENT] = query
246 user = getattr(connection, "user", None)
247 if user:
248 span_attributes[SpanAttributes.DB_USER] = user
249 host = getattr(connection, "host", None)
250 if host:
251 span_attributes[SpanAttributes.NET_PEER_NAME] = host
252 port = getattr(connection, "port", None)
253 if port:
254 span_attributes[SpanAttributes.NET_PEER_PORT] = port
255
256 if self.capture_parameters:
257 if parameters is not None and len(parameters) > 0:
258 span_attributes["db.statement.parameters"] = str(parameters)
259
260 return span_attributes
261
262 async def _do_execute(self, func, instance, args, kwargs):
263
264 exception = None
265 name = args[0].split()[0]
266
267 with self._tracer.start_as_current_span(
268 name, kind=SpanKind.CLIENT
269 ) as span:
270 if span.is_recording():
271 span_attributes = self._hydrate_span_from_args(
272 instance,
273 args[0],
274 args[1:],
275 )
276 for attribute, value in span_attributes.items():
277 span.set_attribute(attribute, value)
278
279 try:
280 result = await func(*args, **kwargs)
281 except Exception as exc: # pylint: disable=W0703
282 exception = exc
283 raise
284 finally:
285 if span.is_recording() and exception is not None:
286 span.set_status(Status(StatusCode.ERROR))
287
288 return result
289
290 async def _from_queryset(self, func, modelcls, args, kwargs):
291
292 exception = None
293 name = f"pydantic.{func.__name__}"
294
295 with self._tracer.start_as_current_span(
296 name, kind=SpanKind.INTERNAL
297 ) as span:
298 if span.is_recording():
299 span_attributes = {}
300
301 model_config = getattr(modelcls, "Config", None)
302 if model_config:
303 model_title = getattr(modelcls.Config, "title")
304 if model_title:
305 span_attributes["pydantic.model"] = model_title
306
307 for attribute, value in span_attributes.items():
308 span.set_attribute(attribute, value)
309
310 try:
311 result = await func(*args, **kwargs)
312 except Exception as exc: # pylint: disable=W0703
313 exception = exc
314 raise
315 finally:
316 if span.is_recording() and exception is not None:
317 span.set_status(Status(StatusCode.ERROR))
318
319 return result
320
[end of instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py b/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py
@@ -13,7 +13,7 @@
# limitations under the License.
"""
-Instrument `tortoise-orm`_ to report SQL queries.
+Instrument tortoise-orm to report SQL queries.
Usage
-----
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py b/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py\n@@ -13,7 +13,7 @@\n # limitations under the License.\n \n \"\"\"\n-Instrument `tortoise-orm`_ to report SQL queries.\n+Instrument tortoise-orm to report SQL queries.\n \n Usage\n -----\n", "issue": "Add readthedocs documentation tortoiseorm instrumentation\nPart of [1491](https://github.com/open-telemetry/opentelemetry-python-contrib/issues/1491)\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nInstrument `tortoise-orm`_ to report SQL queries.\n\nUsage\n-----\n\n.. code:: python\n\n from fastapi import FastAPI\n from tortoise.contrib.fastapi import register_tortoise\n from opentelemetry.sdk.resources import SERVICE_NAME, Resource\n from opentelemetry.sdk.trace import TracerProvider\n from opentelemetry.instrumentation.tortoiseorm import TortoiseORMInstrumentor\n\n app = FastAPI()\n tracer = TracerProvider(resource=Resource({SERVICE_NAME: \"FastAPI\"}))\n TortoiseORMInstrumentor().instrument(tracer_provider=tracer)\n\n register_tortoise(\n app,\n db_url=\"sqlite://sample.db\",\n modules={\"models\": [\"example_app.db_models\"]}\n )\n\nAPI\n---\n\"\"\"\nfrom typing import Collection\n\nimport wrapt\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.tortoiseorm.package import _instruments\nfrom opentelemetry.instrumentation.tortoiseorm.version import __version__\nfrom opentelemetry.instrumentation.utils import unwrap\nfrom opentelemetry.semconv.trace import DbSystemValues, SpanAttributes\nfrom opentelemetry.trace import SpanKind\nfrom opentelemetry.trace.status import Status, StatusCode\n\ntry:\n import tortoise.backends.asyncpg.client\n\n TORTOISE_POSTGRES_SUPPORT = True\nexcept ModuleNotFoundError:\n TORTOISE_POSTGRES_SUPPORT = False\n\ntry:\n import tortoise.backends.mysql.client\n\n TORTOISE_MYSQL_SUPPORT = True\nexcept ModuleNotFoundError:\n TORTOISE_MYSQL_SUPPORT = False\n\ntry:\n import tortoise.backends.sqlite.client\n\n TORTOISE_SQLITE_SUPPORT = True\nexcept ModuleNotFoundError:\n TORTOISE_SQLITE_SUPPORT = False\n\nimport tortoise.contrib.pydantic.base\n\n\nclass TortoiseORMInstrumentor(BaseInstrumentor):\n \"\"\"An instrumentor for Tortoise-ORM\n See `BaseInstrumentor`\n \"\"\"\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Instruments Tortoise ORM backend methods.\n Args:\n **kwargs: Optional arguments\n ``tracer_provider``: a 
TracerProvider, defaults to global\n ``capture_parameters``: set to True to capture SQL query parameters\n Returns:\n None\n \"\"\"\n tracer_provider = kwargs.get(\"tracer_provider\")\n # pylint: disable=attribute-defined-outside-init\n self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)\n self.capture_parameters = kwargs.get(\"capture_parameters\", False)\n if TORTOISE_SQLITE_SUPPORT:\n funcs = [\n \"SqliteClient.execute_many\",\n \"SqliteClient.execute_query\",\n \"SqliteClient.execute_insert\",\n \"SqliteClient.execute_query_dict\",\n \"SqliteClient.execute_script\",\n ]\n for func in funcs:\n wrapt.wrap_function_wrapper(\n \"tortoise.backends.sqlite.client\",\n func,\n self._do_execute,\n )\n\n if TORTOISE_POSTGRES_SUPPORT:\n funcs = [\n \"AsyncpgDBClient.execute_many\",\n \"AsyncpgDBClient.execute_query\",\n \"AsyncpgDBClient.execute_insert\",\n \"AsyncpgDBClient.execute_query_dict\",\n \"AsyncpgDBClient.execute_script\",\n ]\n for func in funcs:\n wrapt.wrap_function_wrapper(\n \"tortoise.backends.asyncpg.client\",\n func,\n self._do_execute,\n )\n\n if TORTOISE_MYSQL_SUPPORT:\n funcs = [\n \"MySQLClient.execute_many\",\n \"MySQLClient.execute_query\",\n \"MySQLClient.execute_insert\",\n \"MySQLClient.execute_query_dict\",\n \"MySQLClient.execute_script\",\n ]\n for func in funcs:\n wrapt.wrap_function_wrapper(\n \"tortoise.backends.mysql.client\",\n func,\n self._do_execute,\n )\n wrapt.wrap_function_wrapper(\n \"tortoise.contrib.pydantic.base\",\n \"PydanticModel.from_queryset\",\n self._from_queryset,\n )\n wrapt.wrap_function_wrapper(\n \"tortoise.contrib.pydantic.base\",\n \"PydanticModel.from_queryset_single\",\n self._from_queryset,\n )\n wrapt.wrap_function_wrapper(\n \"tortoise.contrib.pydantic.base\",\n \"PydanticListModel.from_queryset\",\n self._from_queryset,\n )\n\n def _uninstrument(self, **kwargs):\n if TORTOISE_SQLITE_SUPPORT:\n unwrap(\n tortoise.backends.sqlite.client.SqliteClient, \"execute_query\"\n )\n unwrap(\n tortoise.backends.sqlite.client.SqliteClient, \"execute_many\"\n )\n unwrap(\n tortoise.backends.sqlite.client.SqliteClient, \"execute_insert\"\n )\n unwrap(\n tortoise.backends.sqlite.client.SqliteClient,\n \"execute_query_dict\",\n )\n unwrap(\n tortoise.backends.sqlite.client.SqliteClient, \"execute_script\"\n )\n if TORTOISE_MYSQL_SUPPORT:\n unwrap(tortoise.backends.mysql.client.MySQLClient, \"execute_query\")\n unwrap(tortoise.backends.mysql.client.MySQLClient, \"execute_many\")\n unwrap(\n tortoise.backends.mysql.client.MySQLClient, \"execute_insert\"\n )\n unwrap(\n tortoise.backends.mysql.client.MySQLClient,\n \"execute_query_dict\",\n )\n unwrap(\n tortoise.backends.mysql.client.MySQLClient, \"execute_script\"\n )\n if TORTOISE_POSTGRES_SUPPORT:\n unwrap(\n tortoise.backends.asyncpg.client.AsyncpgDBClient,\n \"execute_query\",\n )\n unwrap(\n tortoise.backends.asyncpg.client.AsyncpgDBClient,\n \"execute_many\",\n )\n unwrap(\n tortoise.backends.asyncpg.client.AsyncpgDBClient,\n \"execute_insert\",\n )\n unwrap(\n tortoise.backends.asyncpg.client.AsyncpgDBClient,\n \"execute_query_dict\",\n )\n unwrap(\n tortoise.backends.asyncpg.client.AsyncpgDBClient,\n \"execute_script\",\n )\n unwrap(tortoise.contrib.pydantic.base.PydanticModel, \"from_queryset\")\n unwrap(\n tortoise.contrib.pydantic.base.PydanticModel,\n \"from_queryset_single\",\n )\n unwrap(\n tortoise.contrib.pydantic.base.PydanticListModel, \"from_queryset\"\n )\n\n def _hydrate_span_from_args(self, connection, query, parameters) -> dict:\n \"\"\"Get network and 
database attributes from connection.\"\"\"\n span_attributes = {}\n capabilities = getattr(connection, \"capabilities\", None)\n if capabilities is not None:\n if capabilities.dialect == \"sqlite\":\n span_attributes[\n SpanAttributes.DB_SYSTEM\n ] = DbSystemValues.SQLITE.value\n elif capabilities.dialect == \"postgres\":\n span_attributes[\n SpanAttributes.DB_SYSTEM\n ] = DbSystemValues.POSTGRESQL.value\n elif capabilities.dialect == \"mysql\":\n span_attributes[\n SpanAttributes.DB_SYSTEM\n ] = DbSystemValues.MYSQL.value\n dbname = getattr(connection, \"filename\", None)\n if dbname:\n span_attributes[SpanAttributes.DB_NAME] = dbname\n dbname = getattr(connection, \"database\", None)\n if dbname:\n span_attributes[SpanAttributes.DB_NAME] = dbname\n if query is not None:\n span_attributes[SpanAttributes.DB_STATEMENT] = query\n user = getattr(connection, \"user\", None)\n if user:\n span_attributes[SpanAttributes.DB_USER] = user\n host = getattr(connection, \"host\", None)\n if host:\n span_attributes[SpanAttributes.NET_PEER_NAME] = host\n port = getattr(connection, \"port\", None)\n if port:\n span_attributes[SpanAttributes.NET_PEER_PORT] = port\n\n if self.capture_parameters:\n if parameters is not None and len(parameters) > 0:\n span_attributes[\"db.statement.parameters\"] = str(parameters)\n\n return span_attributes\n\n async def _do_execute(self, func, instance, args, kwargs):\n\n exception = None\n name = args[0].split()[0]\n\n with self._tracer.start_as_current_span(\n name, kind=SpanKind.CLIENT\n ) as span:\n if span.is_recording():\n span_attributes = self._hydrate_span_from_args(\n instance,\n args[0],\n args[1:],\n )\n for attribute, value in span_attributes.items():\n span.set_attribute(attribute, value)\n\n try:\n result = await func(*args, **kwargs)\n except Exception as exc: # pylint: disable=W0703\n exception = exc\n raise\n finally:\n if span.is_recording() and exception is not None:\n span.set_status(Status(StatusCode.ERROR))\n\n return result\n\n async def _from_queryset(self, func, modelcls, args, kwargs):\n\n exception = None\n name = f\"pydantic.{func.__name__}\"\n\n with self._tracer.start_as_current_span(\n name, kind=SpanKind.INTERNAL\n ) as span:\n if span.is_recording():\n span_attributes = {}\n\n model_config = getattr(modelcls, \"Config\", None)\n if model_config:\n model_title = getattr(modelcls.Config, \"title\")\n if model_title:\n span_attributes[\"pydantic.model\"] = model_title\n\n for attribute, value in span_attributes.items():\n span.set_attribute(attribute, value)\n\n try:\n result = await func(*args, **kwargs)\n except Exception as exc: # pylint: disable=W0703\n exception = exc\n raise\n finally:\n if span.is_recording() and exception is not None:\n span.set_status(Status(StatusCode.ERROR))\n\n return result\n", "path": "instrumentation/opentelemetry-instrumentation-tortoiseorm/src/opentelemetry/instrumentation/tortoiseorm/__init__.py"}]}
| 3,666 | 174 |
gh_patches_debug_38034
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-28558
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rename and reorder custom profile field types
<!-- Issue description -->
In the custom profile field type dropdown, we should:
1. Make the following naming changes:
* Short text -> Text (short)
* Long text -> Text (long)
* Date picker -> Date
* Person picker -> Users
2. Order the fields alphabetically.
Current menu:

<!-- Link to a message in the chat.zulip.org discussion. Message links will still work even if the topic is renamed or resolved. Link back to this issue from the chat.zulip.org thread. -->
[CZO thread](https://chat.zulip.org/#narrow/stream/137-feedback/topic/Improvements.20for.20Custom.20profile.20fields/near/1712583)
</issue>
<code>
[start of zerver/models/custom_profile_fields.py]
1 from typing import Any, Callable, Dict, List, Tuple
2
3 import orjson
4 from django.core.exceptions import ValidationError
5 from django.db import models
6 from django.db.models import CASCADE, QuerySet
7 from django.utils.translation import gettext as _
8 from django.utils.translation import gettext_lazy
9 from django_stubs_ext import StrPromise
10 from typing_extensions import override
11
12 from zerver.lib.types import (
13 ExtendedFieldElement,
14 ExtendedValidator,
15 FieldElement,
16 ProfileDataElementBase,
17 ProfileDataElementValue,
18 RealmUserValidator,
19 UserFieldElement,
20 Validator,
21 )
22 from zerver.lib.validator import (
23 check_date,
24 check_int,
25 check_list,
26 check_long_string,
27 check_short_string,
28 check_url,
29 validate_select_field,
30 )
31 from zerver.models.realms import Realm
32 from zerver.models.users import UserProfile, get_user_profile_by_id_in_realm
33
34
35 def check_valid_user_ids(realm_id: int, val: object, allow_deactivated: bool = False) -> List[int]:
36 user_ids = check_list(check_int)("User IDs", val)
37 realm = Realm.objects.get(id=realm_id)
38 for user_id in user_ids:
39 # TODO: Structurally, we should be doing a bulk fetch query to
40 # get the users here, not doing these in a loop. But because
41 # this is a rarely used feature and likely to never have more
42 # than a handful of users, it's probably mostly OK.
43 try:
44 user_profile = get_user_profile_by_id_in_realm(user_id, realm)
45 except UserProfile.DoesNotExist:
46 raise ValidationError(_("Invalid user ID: {user_id}").format(user_id=user_id))
47
48 if not allow_deactivated and not user_profile.is_active:
49 raise ValidationError(
50 _("User with ID {user_id} is deactivated").format(user_id=user_id)
51 )
52
53 if user_profile.is_bot:
54 raise ValidationError(_("User with ID {user_id} is a bot").format(user_id=user_id))
55
56 return user_ids
57
58
59 class CustomProfileField(models.Model):
60 """Defines a form field for the per-realm custom profile fields feature.
61
62 See CustomProfileFieldValue for an individual user's values for one of
63 these fields.
64 """
65
66 HINT_MAX_LENGTH = 80
67 NAME_MAX_LENGTH = 40
68 MAX_DISPLAY_IN_PROFILE_SUMMARY_FIELDS = 2
69
70 realm = models.ForeignKey(Realm, on_delete=CASCADE)
71 name = models.CharField(max_length=NAME_MAX_LENGTH)
72 hint = models.CharField(max_length=HINT_MAX_LENGTH, default="")
73
74 # Sort order for display of custom profile fields.
75 order = models.IntegerField(default=0)
76
77 # Whether the field should be displayed in smaller summary
78 # sections of a page displaying custom profile fields.
79 display_in_profile_summary = models.BooleanField(default=False)
80
81 SHORT_TEXT = 1
82 LONG_TEXT = 2
83 SELECT = 3
84 DATE = 4
85 URL = 5
86 USER = 6
87 EXTERNAL_ACCOUNT = 7
88 PRONOUNS = 8
89
90 # These are the fields whose validators require more than var_name
91 # and value argument. i.e. SELECT require field_data, USER require
92 # realm as argument.
93 SELECT_FIELD_TYPE_DATA: List[ExtendedFieldElement] = [
94 (SELECT, gettext_lazy("List of options"), validate_select_field, str, "SELECT"),
95 ]
96 USER_FIELD_TYPE_DATA: List[UserFieldElement] = [
97 (USER, gettext_lazy("Person picker"), check_valid_user_ids, orjson.loads, "USER"),
98 ]
99
100 SELECT_FIELD_VALIDATORS: Dict[int, ExtendedValidator] = {
101 item[0]: item[2] for item in SELECT_FIELD_TYPE_DATA
102 }
103 USER_FIELD_VALIDATORS: Dict[int, RealmUserValidator] = {
104 item[0]: item[2] for item in USER_FIELD_TYPE_DATA
105 }
106
107 FIELD_TYPE_DATA: List[FieldElement] = [
108 # Type, display name, validator, converter, keyword
109 (SHORT_TEXT, gettext_lazy("Short text"), check_short_string, str, "SHORT_TEXT"),
110 (LONG_TEXT, gettext_lazy("Long text"), check_long_string, str, "LONG_TEXT"),
111 (DATE, gettext_lazy("Date picker"), check_date, str, "DATE"),
112 (URL, gettext_lazy("Link"), check_url, str, "URL"),
113 (
114 EXTERNAL_ACCOUNT,
115 gettext_lazy("External account"),
116 check_short_string,
117 str,
118 "EXTERNAL_ACCOUNT",
119 ),
120 (PRONOUNS, gettext_lazy("Pronouns"), check_short_string, str, "PRONOUNS"),
121 ]
122
123 ALL_FIELD_TYPES = [*FIELD_TYPE_DATA, *SELECT_FIELD_TYPE_DATA, *USER_FIELD_TYPE_DATA]
124
125 FIELD_VALIDATORS: Dict[int, Validator[ProfileDataElementValue]] = {
126 item[0]: item[2] for item in FIELD_TYPE_DATA
127 }
128 FIELD_CONVERTERS: Dict[int, Callable[[Any], Any]] = {
129 item[0]: item[3] for item in ALL_FIELD_TYPES
130 }
131 FIELD_TYPE_CHOICES: List[Tuple[int, StrPromise]] = [
132 (item[0], item[1]) for item in ALL_FIELD_TYPES
133 ]
134
135 field_type = models.PositiveSmallIntegerField(
136 choices=FIELD_TYPE_CHOICES,
137 default=SHORT_TEXT,
138 )
139
140 # A JSON blob of any additional data needed to define the field beyond
141 # type/name/hint.
142 #
143 # The format depends on the type. Field types SHORT_TEXT, LONG_TEXT,
144 # DATE, URL, and USER leave this empty. Fields of type SELECT store the
145 # choices' descriptions.
146 #
147 # Note: There is no performance overhead of using TextField in PostgreSQL.
148 # See https://www.postgresql.org/docs/9.0/static/datatype-character.html
149 field_data = models.TextField(default="")
150
151 class Meta:
152 unique_together = ("realm", "name")
153
154 @override
155 def __str__(self) -> str:
156 return f"{self.realm!r} {self.name} {self.field_type} {self.order}"
157
158 def as_dict(self) -> ProfileDataElementBase:
159 data_as_dict: ProfileDataElementBase = {
160 "id": self.id,
161 "name": self.name,
162 "type": self.field_type,
163 "hint": self.hint,
164 "field_data": self.field_data,
165 "order": self.order,
166 }
167 if self.display_in_profile_summary:
168 data_as_dict["display_in_profile_summary"] = True
169
170 return data_as_dict
171
172 def is_renderable(self) -> bool:
173 if self.field_type in [CustomProfileField.SHORT_TEXT, CustomProfileField.LONG_TEXT]:
174 return True
175 return False
176
177
178 def custom_profile_fields_for_realm(realm_id: int) -> QuerySet[CustomProfileField]:
179 return CustomProfileField.objects.filter(realm=realm_id).order_by("order")
180
181
182 class CustomProfileFieldValue(models.Model):
183 user_profile = models.ForeignKey(UserProfile, on_delete=CASCADE)
184 field = models.ForeignKey(CustomProfileField, on_delete=CASCADE)
185 value = models.TextField()
186 rendered_value = models.TextField(null=True, default=None)
187
188 class Meta:
189 unique_together = ("user_profile", "field")
190
191 @override
192 def __str__(self) -> str:
193 return f"{self.user_profile!r} {self.field!r} {self.value}"
194
[end of zerver/models/custom_profile_fields.py]
[start of zerver/migrations/0417_alter_customprofilefield_field_type.py]
1 # Generated by Django 4.0.7 on 2022-10-04 13:59
2
3 from django.db import migrations, models
4
5
6 class Migration(migrations.Migration):
7 dependencies = [
8 ("zerver", "0416_set_default_emoji_style"),
9 ]
10
11 operations = [
12 migrations.AlterField(
13 model_name="customprofilefield",
14 name="field_type",
15 field=models.PositiveSmallIntegerField(
16 choices=[
17 (1, "Short text"),
18 (2, "Long text"),
19 (4, "Date picker"),
20 (5, "Link"),
21 (7, "External account"),
22 (8, "Pronouns"),
23 (3, "List of options"),
24 (6, "Person picker"),
25 ],
26 default=1,
27 ),
28 ),
29 ]
30
[end of zerver/migrations/0417_alter_customprofilefield_field_type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zerver/migrations/0417_alter_customprofilefield_field_type.py b/zerver/migrations/0417_alter_customprofilefield_field_type.py
--- a/zerver/migrations/0417_alter_customprofilefield_field_type.py
+++ b/zerver/migrations/0417_alter_customprofilefield_field_type.py
@@ -14,14 +14,14 @@
name="field_type",
field=models.PositiveSmallIntegerField(
choices=[
- (1, "Short text"),
- (2, "Long text"),
- (4, "Date picker"),
- (5, "Link"),
+ (4, "Date"),
(7, "External account"),
- (8, "Pronouns"),
+ (5, "Link"),
(3, "List of options"),
- (6, "Person picker"),
+ (8, "Pronouns"),
+ (2, "Text (long)"),
+ (1, "Text (short)"),
+ (6, "Users"),
],
default=1,
),
diff --git a/zerver/models/custom_profile_fields.py b/zerver/models/custom_profile_fields.py
--- a/zerver/models/custom_profile_fields.py
+++ b/zerver/models/custom_profile_fields.py
@@ -94,7 +94,7 @@
(SELECT, gettext_lazy("List of options"), validate_select_field, str, "SELECT"),
]
USER_FIELD_TYPE_DATA: List[UserFieldElement] = [
- (USER, gettext_lazy("Person picker"), check_valid_user_ids, orjson.loads, "USER"),
+ (USER, gettext_lazy("Users"), check_valid_user_ids, orjson.loads, "USER"),
]
SELECT_FIELD_VALIDATORS: Dict[int, ExtendedValidator] = {
@@ -106,9 +106,9 @@
FIELD_TYPE_DATA: List[FieldElement] = [
# Type, display name, validator, converter, keyword
- (SHORT_TEXT, gettext_lazy("Short text"), check_short_string, str, "SHORT_TEXT"),
- (LONG_TEXT, gettext_lazy("Long text"), check_long_string, str, "LONG_TEXT"),
- (DATE, gettext_lazy("Date picker"), check_date, str, "DATE"),
+ (SHORT_TEXT, gettext_lazy("Text (short)"), check_short_string, str, "SHORT_TEXT"),
+ (LONG_TEXT, gettext_lazy("Text (long)"), check_long_string, str, "LONG_TEXT"),
+ (DATE, gettext_lazy("Date"), check_date, str, "DATE"),
(URL, gettext_lazy("Link"), check_url, str, "URL"),
(
EXTERNAL_ACCOUNT,
@@ -120,7 +120,9 @@
(PRONOUNS, gettext_lazy("Pronouns"), check_short_string, str, "PRONOUNS"),
]
- ALL_FIELD_TYPES = [*FIELD_TYPE_DATA, *SELECT_FIELD_TYPE_DATA, *USER_FIELD_TYPE_DATA]
+ ALL_FIELD_TYPES = sorted(
+ [*FIELD_TYPE_DATA, *SELECT_FIELD_TYPE_DATA, *USER_FIELD_TYPE_DATA], key=lambda x: x[1]
+ )
FIELD_VALIDATORS: Dict[int, Validator[ProfileDataElementValue]] = {
item[0]: item[2] for item in FIELD_TYPE_DATA
|
{"golden_diff": "diff --git a/zerver/migrations/0417_alter_customprofilefield_field_type.py b/zerver/migrations/0417_alter_customprofilefield_field_type.py\n--- a/zerver/migrations/0417_alter_customprofilefield_field_type.py\n+++ b/zerver/migrations/0417_alter_customprofilefield_field_type.py\n@@ -14,14 +14,14 @@\n name=\"field_type\",\n field=models.PositiveSmallIntegerField(\n choices=[\n- (1, \"Short text\"),\n- (2, \"Long text\"),\n- (4, \"Date picker\"),\n- (5, \"Link\"),\n+ (4, \"Date\"),\n (7, \"External account\"),\n- (8, \"Pronouns\"),\n+ (5, \"Link\"),\n (3, \"List of options\"),\n- (6, \"Person picker\"),\n+ (8, \"Pronouns\"),\n+ (2, \"Text (long)\"),\n+ (1, \"Text (short)\"),\n+ (6, \"Users\"),\n ],\n default=1,\n ),\ndiff --git a/zerver/models/custom_profile_fields.py b/zerver/models/custom_profile_fields.py\n--- a/zerver/models/custom_profile_fields.py\n+++ b/zerver/models/custom_profile_fields.py\n@@ -94,7 +94,7 @@\n (SELECT, gettext_lazy(\"List of options\"), validate_select_field, str, \"SELECT\"),\n ]\n USER_FIELD_TYPE_DATA: List[UserFieldElement] = [\n- (USER, gettext_lazy(\"Person picker\"), check_valid_user_ids, orjson.loads, \"USER\"),\n+ (USER, gettext_lazy(\"Users\"), check_valid_user_ids, orjson.loads, \"USER\"),\n ]\n \n SELECT_FIELD_VALIDATORS: Dict[int, ExtendedValidator] = {\n@@ -106,9 +106,9 @@\n \n FIELD_TYPE_DATA: List[FieldElement] = [\n # Type, display name, validator, converter, keyword\n- (SHORT_TEXT, gettext_lazy(\"Short text\"), check_short_string, str, \"SHORT_TEXT\"),\n- (LONG_TEXT, gettext_lazy(\"Long text\"), check_long_string, str, \"LONG_TEXT\"),\n- (DATE, gettext_lazy(\"Date picker\"), check_date, str, \"DATE\"),\n+ (SHORT_TEXT, gettext_lazy(\"Text (short)\"), check_short_string, str, \"SHORT_TEXT\"),\n+ (LONG_TEXT, gettext_lazy(\"Text (long)\"), check_long_string, str, \"LONG_TEXT\"),\n+ (DATE, gettext_lazy(\"Date\"), check_date, str, \"DATE\"),\n (URL, gettext_lazy(\"Link\"), check_url, str, \"URL\"),\n (\n EXTERNAL_ACCOUNT,\n@@ -120,7 +120,9 @@\n (PRONOUNS, gettext_lazy(\"Pronouns\"), check_short_string, str, \"PRONOUNS\"),\n ]\n \n- ALL_FIELD_TYPES = [*FIELD_TYPE_DATA, *SELECT_FIELD_TYPE_DATA, *USER_FIELD_TYPE_DATA]\n+ ALL_FIELD_TYPES = sorted(\n+ [*FIELD_TYPE_DATA, *SELECT_FIELD_TYPE_DATA, *USER_FIELD_TYPE_DATA], key=lambda x: x[1]\n+ )\n \n FIELD_VALIDATORS: Dict[int, Validator[ProfileDataElementValue]] = {\n item[0]: item[2] for item in FIELD_TYPE_DATA\n", "issue": "Rename and reorder custom profile field types\n<!-- Issue description -->\r\n\r\nIn the custom profile field type dropdown, we should:\r\n\r\n1. Make the following naming changes:\r\n * Short text -> Text (short)\r\n * Long text -> Text (long)\r\n * Date picker -> Date\r\n * Person picker -> Users\r\n2. Order the fields alphabetically.\r\n\r\nCurrent menu:\r\n\r\n\r\n\r\n<!-- Link to a message in the chat.zulip.org discussion. Message links will still work even if the topic is renamed or resolved. Link back to this issue from the chat.zulip.org thread. 
-->\r\n\r\n[CZO thread](https://chat.zulip.org/#narrow/stream/137-feedback/topic/Improvements.20for.20Custom.20profile.20fields/near/1712583)\r\n\n", "before_files": [{"content": "from typing import Any, Callable, Dict, List, Tuple\n\nimport orjson\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import CASCADE, QuerySet\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\nfrom django_stubs_ext import StrPromise\nfrom typing_extensions import override\n\nfrom zerver.lib.types import (\n ExtendedFieldElement,\n ExtendedValidator,\n FieldElement,\n ProfileDataElementBase,\n ProfileDataElementValue,\n RealmUserValidator,\n UserFieldElement,\n Validator,\n)\nfrom zerver.lib.validator import (\n check_date,\n check_int,\n check_list,\n check_long_string,\n check_short_string,\n check_url,\n validate_select_field,\n)\nfrom zerver.models.realms import Realm\nfrom zerver.models.users import UserProfile, get_user_profile_by_id_in_realm\n\n\ndef check_valid_user_ids(realm_id: int, val: object, allow_deactivated: bool = False) -> List[int]:\n user_ids = check_list(check_int)(\"User IDs\", val)\n realm = Realm.objects.get(id=realm_id)\n for user_id in user_ids:\n # TODO: Structurally, we should be doing a bulk fetch query to\n # get the users here, not doing these in a loop. But because\n # this is a rarely used feature and likely to never have more\n # than a handful of users, it's probably mostly OK.\n try:\n user_profile = get_user_profile_by_id_in_realm(user_id, realm)\n except UserProfile.DoesNotExist:\n raise ValidationError(_(\"Invalid user ID: {user_id}\").format(user_id=user_id))\n\n if not allow_deactivated and not user_profile.is_active:\n raise ValidationError(\n _(\"User with ID {user_id} is deactivated\").format(user_id=user_id)\n )\n\n if user_profile.is_bot:\n raise ValidationError(_(\"User with ID {user_id} is a bot\").format(user_id=user_id))\n\n return user_ids\n\n\nclass CustomProfileField(models.Model):\n \"\"\"Defines a form field for the per-realm custom profile fields feature.\n\n See CustomProfileFieldValue for an individual user's values for one of\n these fields.\n \"\"\"\n\n HINT_MAX_LENGTH = 80\n NAME_MAX_LENGTH = 40\n MAX_DISPLAY_IN_PROFILE_SUMMARY_FIELDS = 2\n\n realm = models.ForeignKey(Realm, on_delete=CASCADE)\n name = models.CharField(max_length=NAME_MAX_LENGTH)\n hint = models.CharField(max_length=HINT_MAX_LENGTH, default=\"\")\n\n # Sort order for display of custom profile fields.\n order = models.IntegerField(default=0)\n\n # Whether the field should be displayed in smaller summary\n # sections of a page displaying custom profile fields.\n display_in_profile_summary = models.BooleanField(default=False)\n\n SHORT_TEXT = 1\n LONG_TEXT = 2\n SELECT = 3\n DATE = 4\n URL = 5\n USER = 6\n EXTERNAL_ACCOUNT = 7\n PRONOUNS = 8\n\n # These are the fields whose validators require more than var_name\n # and value argument. i.e. 
SELECT require field_data, USER require\n # realm as argument.\n SELECT_FIELD_TYPE_DATA: List[ExtendedFieldElement] = [\n (SELECT, gettext_lazy(\"List of options\"), validate_select_field, str, \"SELECT\"),\n ]\n USER_FIELD_TYPE_DATA: List[UserFieldElement] = [\n (USER, gettext_lazy(\"Person picker\"), check_valid_user_ids, orjson.loads, \"USER\"),\n ]\n\n SELECT_FIELD_VALIDATORS: Dict[int, ExtendedValidator] = {\n item[0]: item[2] for item in SELECT_FIELD_TYPE_DATA\n }\n USER_FIELD_VALIDATORS: Dict[int, RealmUserValidator] = {\n item[0]: item[2] for item in USER_FIELD_TYPE_DATA\n }\n\n FIELD_TYPE_DATA: List[FieldElement] = [\n # Type, display name, validator, converter, keyword\n (SHORT_TEXT, gettext_lazy(\"Short text\"), check_short_string, str, \"SHORT_TEXT\"),\n (LONG_TEXT, gettext_lazy(\"Long text\"), check_long_string, str, \"LONG_TEXT\"),\n (DATE, gettext_lazy(\"Date picker\"), check_date, str, \"DATE\"),\n (URL, gettext_lazy(\"Link\"), check_url, str, \"URL\"),\n (\n EXTERNAL_ACCOUNT,\n gettext_lazy(\"External account\"),\n check_short_string,\n str,\n \"EXTERNAL_ACCOUNT\",\n ),\n (PRONOUNS, gettext_lazy(\"Pronouns\"), check_short_string, str, \"PRONOUNS\"),\n ]\n\n ALL_FIELD_TYPES = [*FIELD_TYPE_DATA, *SELECT_FIELD_TYPE_DATA, *USER_FIELD_TYPE_DATA]\n\n FIELD_VALIDATORS: Dict[int, Validator[ProfileDataElementValue]] = {\n item[0]: item[2] for item in FIELD_TYPE_DATA\n }\n FIELD_CONVERTERS: Dict[int, Callable[[Any], Any]] = {\n item[0]: item[3] for item in ALL_FIELD_TYPES\n }\n FIELD_TYPE_CHOICES: List[Tuple[int, StrPromise]] = [\n (item[0], item[1]) for item in ALL_FIELD_TYPES\n ]\n\n field_type = models.PositiveSmallIntegerField(\n choices=FIELD_TYPE_CHOICES,\n default=SHORT_TEXT,\n )\n\n # A JSON blob of any additional data needed to define the field beyond\n # type/name/hint.\n #\n # The format depends on the type. Field types SHORT_TEXT, LONG_TEXT,\n # DATE, URL, and USER leave this empty. 
Fields of type SELECT store the\n # choices' descriptions.\n #\n # Note: There is no performance overhead of using TextField in PostgreSQL.\n # See https://www.postgresql.org/docs/9.0/static/datatype-character.html\n field_data = models.TextField(default=\"\")\n\n class Meta:\n unique_together = (\"realm\", \"name\")\n\n @override\n def __str__(self) -> str:\n return f\"{self.realm!r} {self.name} {self.field_type} {self.order}\"\n\n def as_dict(self) -> ProfileDataElementBase:\n data_as_dict: ProfileDataElementBase = {\n \"id\": self.id,\n \"name\": self.name,\n \"type\": self.field_type,\n \"hint\": self.hint,\n \"field_data\": self.field_data,\n \"order\": self.order,\n }\n if self.display_in_profile_summary:\n data_as_dict[\"display_in_profile_summary\"] = True\n\n return data_as_dict\n\n def is_renderable(self) -> bool:\n if self.field_type in [CustomProfileField.SHORT_TEXT, CustomProfileField.LONG_TEXT]:\n return True\n return False\n\n\ndef custom_profile_fields_for_realm(realm_id: int) -> QuerySet[CustomProfileField]:\n return CustomProfileField.objects.filter(realm=realm_id).order_by(\"order\")\n\n\nclass CustomProfileFieldValue(models.Model):\n user_profile = models.ForeignKey(UserProfile, on_delete=CASCADE)\n field = models.ForeignKey(CustomProfileField, on_delete=CASCADE)\n value = models.TextField()\n rendered_value = models.TextField(null=True, default=None)\n\n class Meta:\n unique_together = (\"user_profile\", \"field\")\n\n @override\n def __str__(self) -> str:\n return f\"{self.user_profile!r} {self.field!r} {self.value}\"\n", "path": "zerver/models/custom_profile_fields.py"}, {"content": "# Generated by Django 4.0.7 on 2022-10-04 13:59\n\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n dependencies = [\n (\"zerver\", \"0416_set_default_emoji_style\"),\n ]\n\n operations = [\n migrations.AlterField(\n model_name=\"customprofilefield\",\n name=\"field_type\",\n field=models.PositiveSmallIntegerField(\n choices=[\n (1, \"Short text\"),\n (2, \"Long text\"),\n (4, \"Date picker\"),\n (5, \"Link\"),\n (7, \"External account\"),\n (8, \"Pronouns\"),\n (3, \"List of options\"),\n (6, \"Person picker\"),\n ],\n default=1,\n ),\n ),\n ]\n", "path": "zerver/migrations/0417_alter_customprofilefield_field_type.py"}]}
| 3,142 | 727 |
gh_patches_debug_20946
|
rasdani/github-patches
|
git_diff
|
google__jax-3485
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Module level documentation no longer shown on RTD
Example:
https://jax.readthedocs.io/en/latest/jax.experimental.loops.html
This used to show the module level docstring, but not anymore.
My guess is that this might be related to inserting the line `from __future__ import annotations` into each source file?
</issue>
<code>
[start of docs/conf.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # -*- coding: utf-8 -*-
16 #
17 # Configuration file for the Sphinx documentation builder.
18 #
19 # This file does only contain a selection of the most common options. For a
20 # full list see the documentation:
21 # http://www.sphinx-doc.org/en/master/config
22
23 # -- Path setup --------------------------------------------------------------
24
25 # If extensions (or modules to document with autodoc) are in another directory,
26 # add these directories to sys.path here. If the directory is relative to the
27 # documentation root, use os.path.abspath to make it absolute, like shown here.
28 #
29 import os
30 import sys
31 import typing
32
33 sys.path.insert(0, os.path.abspath('..'))
34
35
36 # Workaround to avoid expanding type aliases. See:
37 # https://github.com/sphinx-doc/sphinx/issues/6518#issuecomment-589613836
38
39 # When building docs, enable `from __future__ import annotations` everywhere.
40 def _rewrite(p):
41 with open(p) as f:
42 contents = f.read()
43 with open(p, 'w') as f:
44 f.write('from __future__ import annotations\n')
45 f.write(contents)
46
47 if 'READTHEDOCS' in os.environ:
48 for path, dirs, files in os.walk('../jax/'):
49 for file in files:
50 if file.endswith('.py'):
51 _rewrite(os.path.abspath(os.path.join(path, file)))
52
53 # Monkey patch for the typing module to prevent it from expanding type aliases.
54 typing.get_type_hints = lambda obj, *unused: obj.__annotations__
55
56 # -- Project information -----------------------------------------------------
57
58 project = 'JAX'
59 copyright = '2019, Google LLC. NumPy and SciPy documentation are copyright the respective authors.'
60 author = 'The JAX authors'
61
62 # The short X.Y version
63 version = ''
64 # The full version, including alpha/beta/rc tags
65 release = ''
66
67
68 # -- General configuration ---------------------------------------------------
69
70 # If your documentation needs a minimal Sphinx version, state it here.
71 #
72 needs_sphinx = '2.1'
73
74 # Add any Sphinx extension module names here, as strings. They can be
75 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
76 # ones.
77 extensions = [
78 'sphinx.ext.autodoc',
79 'sphinx.ext.autosummary',
80 'sphinx.ext.intersphinx',
81 'sphinx.ext.mathjax',
82 'sphinx.ext.napoleon',
83 'sphinx.ext.viewcode',
84 'nbsphinx',
85 'sphinx_autodoc_typehints',
86 'myst_parser',
87 ]
88
89 intersphinx_mapping = {
90 'python': ('https://docs.python.org/3/', None),
91 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
92 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
93 }
94
95 # Add any paths that contain templates here, relative to this directory.
96 templates_path = ['_templates']
97
98 # The suffix(es) of source filenames.
99 # You can specify multiple suffix as a list of string:
100 #
101 source_suffix = '.rst'
102
103 # The master toctree document.
104 master_doc = 'index'
105
106 # The language for content autogenerated by Sphinx. Refer to documentation
107 # for a list of supported languages.
108 #
109 # This is also used if you do content translation via gettext catalogs.
110 # Usually you set "language" from the command line for these cases.
111 language = None
112
113 # List of patterns, relative to source directory, that match files and
114 # directories to ignore when looking for source files.
115 # This pattern also affects html_static_path and html_extra_path.
116 exclude_patterns = [
117 # Slow notebook: long time to load tf.ds
118 'notebooks/neural_network_with_tfds_data.ipynb',
119 # Slow notebook
120 'notebooks/Neural_Network_and_Data_Loading.ipynb',
121 'notebooks/score_matching.ipynb',
122 'notebooks/maml.ipynb',
123 # Fails with shape error in XL
124 'notebooks/XLA_in_Python.ipynb',
125 # Sometimes sphinx reads its own outputs as inputs!
126 'build/html',
127 ]
128
129 # The name of the Pygments (syntax highlighting) style to use.
130 pygments_style = None
131
132
133 autosummary_generate = True
134 napolean_use_rtype = False
135
136 # -- Options for nbsphinx -----------------------------------------------------
137
138 # Execute notebooks before conversion: 'always', 'never', 'auto' (default)
139 # We execute all notebooks, exclude the slow ones using 'exclude_patterns'
140 nbsphinx_execute = 'always'
141
142 # Use this kernel instead of the one stored in the notebook metadata:
143 #nbsphinx_kernel_name = 'python3'
144
145 # List of arguments to be passed to the kernel that executes the notebooks:
146 # nbsphinx_execute_arguments = []
147
148 # If True, the build process is continued even if an exception occurs:
149 #nbsphinx_allow_errors = True
150
151
152 # Controls when a cell will time out (defaults to 30; use -1 for no timeout):
153 nbsphinx_timeout = 180
154
155 # Default Pygments lexer for syntax highlighting in code cells:
156 #nbsphinx_codecell_lexer = 'ipython3'
157
158 # Width of input/output prompts used in CSS:
159 #nbsphinx_prompt_width = '8ex'
160
161 # If window is narrower than this, input/output prompts are on separate lines:
162 #nbsphinx_responsive_width = '700px'
163
164 # This is processed by Jinja2 and inserted before each notebook
165 nbsphinx_prolog = r"""
166 {% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}
167
168 .. only:: html
169
170 .. role:: raw-html(raw)
171 :format: html
172
173 .. nbinfo::
174
175 Interactive online version:
176 :raw-html:`<a href="https://colab.research.google.com/github/google/jax/blob/master/{{ docname }}"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" style="vertical-align:text-bottom"></a>`
177
178 __ https://github.com/google/jax/blob/
179 {{ env.config.release }}/{{ docname }}
180 """
181
182 # This is processed by Jinja2 and inserted after each notebook
183 # nbsphinx_epilog = r"""
184 # """
185
186 # Input prompt for code cells. "%s" is replaced by the execution count.
187 #nbsphinx_input_prompt = 'In [%s]:'
188
189 # Output prompt for code cells. "%s" is replaced by the execution count.
190 #nbsphinx_output_prompt = 'Out[%s]:'
191
192 # Specify conversion functions for custom notebook formats:
193 #import jupytext
194 #nbsphinx_custom_formats = {
195 # '.Rmd': lambda s: jupytext.reads(s, '.Rmd'),
196 #}
197
198 # Link or path to require.js, set to empty string to disable
199 #nbsphinx_requirejs_path = ''
200
201 # Options for loading require.js
202 #nbsphinx_requirejs_options = {'async': 'async'}
203
204 # mathjax_config = {
205 # 'TeX': {'equationNumbers': {'autoNumber': 'AMS', 'useLabelIds': True}},
206 # }
207
208 # Additional files needed for generating LaTeX/PDF output:
209 # latex_additional_files = ['references.bib']
210
211 # -- Options for HTML output -------------------------------------------------
212
213 # The theme to use for HTML and HTML Help pages. See the documentation for
214 # a list of builtin themes.
215 #
216 html_theme = 'sphinx_rtd_theme'
217
218 # Theme options are theme-specific and customize the look and feel of a theme
219 # further. For a list of options available for each theme, see the
220 # documentation.
221 html_theme_options = {
222 'logo_only': True,
223 }
224
225 # The name of an image file (relative to this directory) to place at the top
226 # of the sidebar.
227 html_logo = '_static/jax_logo_250px.png'
228
229 # Add any paths that contain custom static files (such as style sheets) here,
230 # relative to this directory. They are copied after the builtin static files,
231 # so a file named "default.css" will overwrite the builtin "default.css".
232 html_static_path = ['_static']
233
234 # Custom sidebar templates, must be a dictionary that maps document names
235 # to template names.
236 #
237 # The default sidebars (for documents that don't match any pattern) are
238 # defined by theme itself. Builtin themes are using these templates by
239 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
240 # 'searchbox.html']``.
241 #
242 # html_sidebars = {}
243
244
245 # -- Options for HTMLHelp output ---------------------------------------------
246
247 # Output file base name for HTML help builder.
248 htmlhelp_basename = 'JAXdoc'
249
250
251 # -- Options for LaTeX output ------------------------------------------------
252
253 latex_elements = {
254 # The paper size ('letterpaper' or 'a4paper').
255 #
256 # 'papersize': 'letterpaper',
257
258 # The font size ('10pt', '11pt' or '12pt').
259 #
260 # 'pointsize': '10pt',
261
262 # Additional stuff for the LaTeX preamble.
263 #
264 # 'preamble': '',
265
266 # Latex figure (float) alignment
267 #
268 # 'figure_align': 'htbp',
269 }
270
271 # Grouping the document tree into LaTeX files. List of tuples
272 # (source start file, target name, title,
273 # author, documentclass [howto, manual, or own class]).
274 latex_documents = [
275 (master_doc, 'JAX.tex', 'JAX Documentation',
276 'The JAX authors', 'manual'),
277 ]
278
279
280 # -- Options for manual page output ------------------------------------------
281
282 # One entry per manual page. List of tuples
283 # (source start file, name, description, authors, manual section).
284 man_pages = [
285 (master_doc, 'jax', 'JAX Documentation',
286 [author], 1)
287 ]
288
289
290 # -- Options for Texinfo output ----------------------------------------------
291
292 # Grouping the document tree into Texinfo files. List of tuples
293 # (source start file, target name, title, author,
294 # dir menu entry, description, category)
295 texinfo_documents = [
296 (master_doc, 'JAX', 'JAX Documentation',
297 author, 'JAX', 'One line description of project.',
298 'Miscellaneous'),
299 ]
300
301
302 # -- Options for Epub output -------------------------------------------------
303
304 # Bibliographic Dublin Core info.
305 epub_title = project
306
307 # The unique identifier of the text. This can be a ISBN number
308 # or the project homepage.
309 #
310 # epub_identifier = ''
311
312 # A unique identification for the text.
313 #
314 # epub_uid = ''
315
316 # A list of files that should not be packed into the epub file.
317 epub_exclude_files = ['search.html']
318
319
320 # -- Extension configuration -------------------------------------------------
321
322 # Tell sphinx-autodoc-typehints to generate stub parameter annotations including
323 # types, even if the parameters aren't explicitly documented.
324 always_document_param_types = True
325
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,30 +28,14 @@
#
import os
import sys
-import typing
sys.path.insert(0, os.path.abspath('..'))
-# Workaround to avoid expanding type aliases. See:
+# Currently type aliases are expanded. We tried a workaround along the lines of:
# https://github.com/sphinx-doc/sphinx/issues/6518#issuecomment-589613836
-
-# When building docs, enable `from __future__ import annotations` everywhere.
-def _rewrite(p):
- with open(p) as f:
- contents = f.read()
- with open(p, 'w') as f:
- f.write('from __future__ import annotations\n')
- f.write(contents)
-
-if 'READTHEDOCS' in os.environ:
- for path, dirs, files in os.walk('../jax/'):
- for file in files:
- if file.endswith('.py'):
- _rewrite(os.path.abspath(os.path.join(path, file)))
-
-# Monkey patch for the typing module to prevent it from expanding type aliases.
-typing.get_type_hints = lambda obj, *unused: obj.__annotations__
+# Unfortunately, this workaround makes Sphinx drop module-level documentation.
+# See https://github.com/google/jax/issues/3452.
# -- Project information -----------------------------------------------------
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -28,30 +28,14 @@\n #\n import os\n import sys\n-import typing\n \n sys.path.insert(0, os.path.abspath('..'))\n \n \n-# Workaround to avoid expanding type aliases. See:\n+# Currently type aliases are expanded. We tried a workaround along the lines of:\n # https://github.com/sphinx-doc/sphinx/issues/6518#issuecomment-589613836\n-\n-# When building docs, enable `from __future__ import annotations` everywhere.\n-def _rewrite(p):\n- with open(p) as f:\n- contents = f.read()\n- with open(p, 'w') as f:\n- f.write('from __future__ import annotations\\n')\n- f.write(contents)\n-\n-if 'READTHEDOCS' in os.environ:\n- for path, dirs, files in os.walk('../jax/'):\n- for file in files:\n- if file.endswith('.py'):\n- _rewrite(os.path.abspath(os.path.join(path, file)))\n-\n-# Monkey patch for the typing module to prevent it from expanding type aliases.\n-typing.get_type_hints = lambda obj, *unused: obj.__annotations__\n+# Unfortunately, this workaround makes Sphinx drop module-level documentation.\n+# See https://github.com/google/jax/issues/3452.\n \n # -- Project information -----------------------------------------------------\n", "issue": "Module level documentation no longer shown on RTD\nExample:\r\nhttps://jax.readthedocs.io/en/latest/jax.experimental.loops.html\r\n\r\nThis used to show the module level docstring, but not anymore.\r\n\r\nMy guess is that this might be related to inserting the line `from __future__ import annotations` into each source file?\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport typing\n\nsys.path.insert(0, os.path.abspath('..'))\n\n\n# Workaround to avoid expanding type aliases. 
See:\n# https://github.com/sphinx-doc/sphinx/issues/6518#issuecomment-589613836\n\n# When building docs, enable `from __future__ import annotations` everywhere.\ndef _rewrite(p):\n with open(p) as f:\n contents = f.read()\n with open(p, 'w') as f:\n f.write('from __future__ import annotations\\n')\n f.write(contents)\n\nif 'READTHEDOCS' in os.environ:\n for path, dirs, files in os.walk('../jax/'):\n for file in files:\n if file.endswith('.py'):\n _rewrite(os.path.abspath(os.path.join(path, file)))\n\n# Monkey patch for the typing module to prevent it from expanding type aliases.\ntyping.get_type_hints = lambda obj, *unused: obj.__annotations__\n\n# -- Project information -----------------------------------------------------\n\nproject = 'JAX'\ncopyright = '2019, Google LLC. NumPy and SciPy documentation are copyright the respective authors.'\nauthor = 'The JAX authors'\n\n# The short X.Y version\nversion = ''\n# The full version, including alpha/beta/rc tags\nrelease = ''\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\nneeds_sphinx = '2.1'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode',\n 'nbsphinx',\n 'sphinx_autodoc_typehints',\n 'myst_parser',\n]\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy/', None),\n 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\n # Slow notebook: long time to load tf.ds\n 'notebooks/neural_network_with_tfds_data.ipynb',\n # Slow notebook\n 'notebooks/Neural_Network_and_Data_Loading.ipynb',\n 'notebooks/score_matching.ipynb',\n 'notebooks/maml.ipynb',\n # Fails with shape error in XL\n 'notebooks/XLA_in_Python.ipynb',\n # Sometimes sphinx reads its own outputs as inputs!\n 'build/html',\n]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n\nautosummary_generate = True\nnapolean_use_rtype = False\n\n# -- Options for nbsphinx -----------------------------------------------------\n\n# Execute notebooks before conversion: 'always', 'never', 'auto' (default)\n# We execute all notebooks, exclude the slow ones using 'exclude_patterns'\nnbsphinx_execute = 'always'\n\n# Use this kernel instead of the one stored in the notebook metadata:\n#nbsphinx_kernel_name = 'python3'\n\n# List of arguments to be passed to the kernel that executes the notebooks:\n# nbsphinx_execute_arguments = []\n\n# If True, the build process is continued even if an exception occurs:\n#nbsphinx_allow_errors = True\n\n\n# Controls when a cell will time out (defaults to 30; use -1 for no timeout):\nnbsphinx_timeout = 180\n\n# Default Pygments lexer for syntax highlighting in code cells:\n#nbsphinx_codecell_lexer = 'ipython3'\n\n# Width of input/output prompts used in CSS:\n#nbsphinx_prompt_width = '8ex'\n\n# If window is narrower than this, input/output prompts are on separate lines:\n#nbsphinx_responsive_width = '700px'\n\n# This is processed by Jinja2 and inserted before each notebook\nnbsphinx_prolog = r\"\"\"\n{% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}\n\n.. only:: html\n\n .. role:: raw-html(raw)\n :format: html\n\n .. nbinfo::\n\n Interactive online version:\n :raw-html:`<a href=\"https://colab.research.google.com/github/google/jax/blob/master/{{ docname }}\"><img alt=\"Open In Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" style=\"vertical-align:text-bottom\"></a>`\n\n __ https://github.com/google/jax/blob/\n {{ env.config.release }}/{{ docname }}\n\"\"\"\n\n# This is processed by Jinja2 and inserted after each notebook\n# nbsphinx_epilog = r\"\"\"\n# \"\"\"\n\n# Input prompt for code cells. \"%s\" is replaced by the execution count.\n#nbsphinx_input_prompt = 'In [%s]:'\n\n# Output prompt for code cells. 
\"%s\" is replaced by the execution count.\n#nbsphinx_output_prompt = 'Out[%s]:'\n\n# Specify conversion functions for custom notebook formats:\n#import jupytext\n#nbsphinx_custom_formats = {\n# '.Rmd': lambda s: jupytext.reads(s, '.Rmd'),\n#}\n\n# Link or path to require.js, set to empty string to disable\n#nbsphinx_requirejs_path = ''\n\n# Options for loading require.js\n#nbsphinx_requirejs_options = {'async': 'async'}\n\n# mathjax_config = {\n# 'TeX': {'equationNumbers': {'autoNumber': 'AMS', 'useLabelIds': True}},\n# }\n\n# Additional files needed for generating LaTeX/PDF output:\n# latex_additional_files = ['references.bib']\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'logo_only': True,\n}\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '_static/jax_logo_250px.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'JAXdoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'JAX.tex', 'JAX Documentation',\n 'The JAX authors', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'jax', 'JAX Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'JAX', 'JAX Documentation',\n author, 'JAX', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n\n# -- Extension configuration -------------------------------------------------\n\n# Tell sphinx-autodoc-typehints to generate stub parameter annotations including\n# types, even if the parameters aren't explicitly documented.\nalways_document_param_types = True\n", "path": "docs/conf.py"}]}
| 3,951 | 316 |
gh_patches_debug_27042 | rasdani/github-patches | git_diff | python-poetry__poetry-1610 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
poetry export adds extra url multiple times
<!--
Hi there! Thank you for discovering and submitting an issue.
  Before you submit this, let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: macOS Mojave 10.14.6
- **Poetry version**: 1.0.0b1
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: I can't give this out as it's not open source, but I have a source in `[[tool.poetry.source]]` which is an internal pypi repo.
## Issue
If I run `poetry export -f requirements.txt` the result includes the same `--extra-index-url` entry with the same URL at the top of the output once for each one of the packages in my poetry.lock. I suspect this is because poetry.lock (rightly) includes the same `[package.source]` entry for each dependency as they were downloaded from the source in `pyproject.toml` `[[tool.poetry.source]]`.
An interesting extra bit of information is that if I add `default = true` to the `[[tool.poetry.source]]` I only get one `--index-url` entry.
I'm also seeing some odd inconsistencies with the poetry.lock. When I deleted it and recreated it with the source set to default I got a poetry.lock with no (or at least fewer) sources. Running poetry update seemed to put them back. I'm now not able to reproduce this issue, though, so I'm not sure what exactly caused it.
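
For illustration only (the reporter's actual `pyproject.toml` is private, so the repository name and URL below are invented placeholders), the kind of source configuration described above looks roughly like this:

```toml
# Hypothetical example: the name and URL are placeholders, not the reporter's real values.
[[tool.poetry.source]]
name = "internal"
url = "https://pypi.example.internal/simple/"
```

With a source like this configured (and not marked `default = true`), the exported requirements file repeats the corresponding `--extra-index-url` line once per locked package instead of emitting it a single time.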
</issue>
<code>
[start of poetry/utils/exporter.py]
1 from typing import Union
2
3 from clikit.api.io import IO
4
5 from poetry.packages.directory_dependency import DirectoryDependency
6 from poetry.packages.file_dependency import FileDependency
7 from poetry.packages.url_dependency import URLDependency
8 from poetry.packages.vcs_dependency import VCSDependency
9 from poetry.poetry import Poetry
10 from poetry.utils._compat import Path
11 from poetry.utils._compat import decode
12 from poetry.utils.extras import get_extra_package_names
13
14
15 class Exporter(object):
16 """
17 Exporter class to export a lock file to alternative formats.
18 """
19
20 ACCEPTED_FORMATS = ("requirements.txt",)
21 ALLOWED_HASH_ALGORITHMS = ("sha256", "sha384", "sha512")
22
23 def __init__(self, poetry): # type: (Poetry) -> None
24 self._poetry = poetry
25
26 def export(
27 self,
28 fmt,
29 cwd,
30 output,
31 with_hashes=True,
32 dev=False,
33 extras=None,
34 with_credentials=False,
35 ): # type: (str, Path, Union[IO, str], bool, bool, bool) -> None
36 if fmt not in self.ACCEPTED_FORMATS:
37 raise ValueError("Invalid export format: {}".format(fmt))
38
39 getattr(self, "_export_{}".format(fmt.replace(".", "_")))(
40 cwd,
41 output,
42 with_hashes=with_hashes,
43 dev=dev,
44 extras=extras,
45 with_credentials=with_credentials,
46 )
47
48 def _export_requirements_txt(
49 self,
50 cwd,
51 output,
52 with_hashes=True,
53 dev=False,
54 extras=None,
55 with_credentials=False,
56 ): # type: (Path, Union[IO, str], bool, bool, bool) -> None
57 indexes = []
58 content = ""
59 packages = self._poetry.locker.locked_repository(dev).packages
60
61 # Build a set of all packages required by our selected extras
62 extra_package_names = set(
63 get_extra_package_names(
64 packages, self._poetry.locker.lock_data.get("extras", {}), extras or ()
65 )
66 )
67
68 for package in sorted(packages, key=lambda p: p.name):
69 # If a package is optional and we haven't opted in to it, continue
70 if package.optional and package.name not in extra_package_names:
71 continue
72
73 if package.source_type == "git":
74 dependency = VCSDependency(
75 package.name,
76 package.source_type,
77 package.source_url,
78 package.source_reference,
79 )
80 dependency.marker = package.marker
81 line = "-e git+{}@{}#egg={}".format(
82 package.source_url, package.source_reference, package.name
83 )
84 elif package.source_type in ["directory", "file", "url"]:
85 if package.source_type == "file":
86 dependency = FileDependency(package.name, Path(package.source_url))
87 elif package.source_type == "directory":
88 dependency = DirectoryDependency(
89 package.name, Path(package.source_url)
90 )
91 else:
92 dependency = URLDependency(package.name, package.source_url)
93
94 dependency.marker = package.marker
95
96 line = "{}".format(package.source_url)
97 if package.develop:
98 line = "-e " + line
99 else:
100 dependency = package.to_dependency()
101 line = "{}=={}".format(package.name, package.version)
102
103 requirement = dependency.to_pep_508()
104 if ";" in requirement:
105 line += "; {}".format(requirement.split(";")[1].strip())
106
107 if (
108 package.source_type not in {"git", "directory", "file", "url"}
109 and package.source_url
110 ):
111 indexes.append(package.source_url)
112
113 if package.files and with_hashes:
114 hashes = []
115 for f in package.files:
116 h = f["hash"]
117 algorithm = "sha256"
118 if ":" in h:
119 algorithm, h = h.split(":")
120
121 if algorithm not in self.ALLOWED_HASH_ALGORITHMS:
122 continue
123
124 hashes.append("{}:{}".format(algorithm, h))
125
126 if hashes:
127 line += " \\\n"
128 for i, h in enumerate(hashes):
129 line += " --hash={}{}".format(
130 h, " \\\n" if i < len(hashes) - 1 else ""
131 )
132
133 line += "\n"
134 content += line
135
136 if indexes:
137 # If we have extra indexes, we add them to the begin
138 # of the output
139 indexes_header = ""
140 for index in indexes:
141 repository = [
142 r
143 for r in self._poetry.pool.repositories
144 if r.url == index.rstrip("/")
145 ][0]
146 if (
147 self._poetry.pool.has_default()
148 and repository is self._poetry.pool.repositories[0]
149 ):
150 url = (
151 repository.authenticated_url
152 if with_credentials
153 else repository.url
154 )
155 indexes_header = "--index-url {}\n".format(url)
156 continue
157
158 url = (
159 repository.authenticated_url if with_credentials else repository.url
160 )
161 indexes_header += "--extra-index-url {}\n".format(url)
162
163 content = indexes_header + "\n" + content
164
165 self._output(content, cwd, output)
166
167 def _output(
168 self, content, cwd, output
169 ): # type: (str, Path, Union[IO, str]) -> None
170 decoded = decode(content)
171 try:
172 output.write(decoded)
173 except AttributeError:
174 filepath = cwd / output
175 with filepath.open("w", encoding="utf-8") as f:
176 f.write(decoded)
177
[end of poetry/utils/exporter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/poetry/utils/exporter.py b/poetry/utils/exporter.py
--- a/poetry/utils/exporter.py
+++ b/poetry/utils/exporter.py
@@ -54,7 +54,7 @@
extras=None,
with_credentials=False,
): # type: (Path, Union[IO, str], bool, bool, bool) -> None
- indexes = []
+ indexes = set()
content = ""
packages = self._poetry.locker.locked_repository(dev).packages
@@ -108,7 +108,7 @@
package.source_type not in {"git", "directory", "file", "url"}
and package.source_url
):
- indexes.append(package.source_url)
+ indexes.add(package.source_url)
if package.files and with_hashes:
hashes = []
@@ -134,10 +134,9 @@
content += line
if indexes:
- # If we have extra indexes, we add them to the begin
- # of the output
+ # If we have extra indexes, we add them to the beginning of the output
indexes_header = ""
- for index in indexes:
+ for index in sorted(indexes):
repository = [
r
for r in self._poetry.pool.repositories
|
{"golden_diff": "diff --git a/poetry/utils/exporter.py b/poetry/utils/exporter.py\n--- a/poetry/utils/exporter.py\n+++ b/poetry/utils/exporter.py\n@@ -54,7 +54,7 @@\n extras=None,\n with_credentials=False,\n ): # type: (Path, Union[IO, str], bool, bool, bool) -> None\n- indexes = []\n+ indexes = set()\n content = \"\"\n packages = self._poetry.locker.locked_repository(dev).packages\n \n@@ -108,7 +108,7 @@\n package.source_type not in {\"git\", \"directory\", \"file\", \"url\"}\n and package.source_url\n ):\n- indexes.append(package.source_url)\n+ indexes.add(package.source_url)\n \n if package.files and with_hashes:\n hashes = []\n@@ -134,10 +134,9 @@\n content += line\n \n if indexes:\n- # If we have extra indexes, we add them to the begin\n- # of the output\n+ # If we have extra indexes, we add them to the beginning of the output\n indexes_header = \"\"\n- for index in indexes:\n+ for index in sorted(indexes):\n repository = [\n r\n for r in self._poetry.pool.repositories\n", "issue": "poetry export adds extra url multiple times\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: macOS Mojave 10.14.6\r\n- **Poetry version**: 1.0.0b1\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: I can't give this out as it's not open source, but I have a source in `[[tool.poetry.source]]` which is an internal pypi repo.\r\n\r\n## Issue\r\nIf I run `poetry export -f requirements.txt` the result include the same `--extra-index-url` entry with the same URL at the top of the output once for each one of the packages in my poetry.lock. I suspect this is because poetry.lock (rightly) includes the same `[package.source]` entry for each dependency as they were downloaded from the source in `pyproject.toml` `[[tool.poetry.source]]`.\r\n\r\nAn interesting extra bit of information is that if I add `default = true` to the `[[tool.poetry.source]]` I only get one `--index-url` entry.\r\n\r\nI'm also seeing some odd inconsistencies with the poetry.lock. When I deleted it and recreated it with the source set to default I got a poetry.lock with no (or at least fewer) sources. Running poetry update seemed to put them back. 
I'm now not able to reproduce this issue, though, so I'm not sure what exactly caused it.\n", "before_files": [{"content": "from typing import Union\n\nfrom clikit.api.io import IO\n\nfrom poetry.packages.directory_dependency import DirectoryDependency\nfrom poetry.packages.file_dependency import FileDependency\nfrom poetry.packages.url_dependency import URLDependency\nfrom poetry.packages.vcs_dependency import VCSDependency\nfrom poetry.poetry import Poetry\nfrom poetry.utils._compat import Path\nfrom poetry.utils._compat import decode\nfrom poetry.utils.extras import get_extra_package_names\n\n\nclass Exporter(object):\n \"\"\"\n Exporter class to export a lock file to alternative formats.\n \"\"\"\n\n ACCEPTED_FORMATS = (\"requirements.txt\",)\n ALLOWED_HASH_ALGORITHMS = (\"sha256\", \"sha384\", \"sha512\")\n\n def __init__(self, poetry): # type: (Poetry) -> None\n self._poetry = poetry\n\n def export(\n self,\n fmt,\n cwd,\n output,\n with_hashes=True,\n dev=False,\n extras=None,\n with_credentials=False,\n ): # type: (str, Path, Union[IO, str], bool, bool, bool) -> None\n if fmt not in self.ACCEPTED_FORMATS:\n raise ValueError(\"Invalid export format: {}\".format(fmt))\n\n getattr(self, \"_export_{}\".format(fmt.replace(\".\", \"_\")))(\n cwd,\n output,\n with_hashes=with_hashes,\n dev=dev,\n extras=extras,\n with_credentials=with_credentials,\n )\n\n def _export_requirements_txt(\n self,\n cwd,\n output,\n with_hashes=True,\n dev=False,\n extras=None,\n with_credentials=False,\n ): # type: (Path, Union[IO, str], bool, bool, bool) -> None\n indexes = []\n content = \"\"\n packages = self._poetry.locker.locked_repository(dev).packages\n\n # Build a set of all packages required by our selected extras\n extra_package_names = set(\n get_extra_package_names(\n packages, self._poetry.locker.lock_data.get(\"extras\", {}), extras or ()\n )\n )\n\n for package in sorted(packages, key=lambda p: p.name):\n # If a package is optional and we haven't opted in to it, continue\n if package.optional and package.name not in extra_package_names:\n continue\n\n if package.source_type == \"git\":\n dependency = VCSDependency(\n package.name,\n package.source_type,\n package.source_url,\n package.source_reference,\n )\n dependency.marker = package.marker\n line = \"-e git+{}@{}#egg={}\".format(\n package.source_url, package.source_reference, package.name\n )\n elif package.source_type in [\"directory\", \"file\", \"url\"]:\n if package.source_type == \"file\":\n dependency = FileDependency(package.name, Path(package.source_url))\n elif package.source_type == \"directory\":\n dependency = DirectoryDependency(\n package.name, Path(package.source_url)\n )\n else:\n dependency = URLDependency(package.name, package.source_url)\n\n dependency.marker = package.marker\n\n line = \"{}\".format(package.source_url)\n if package.develop:\n line = \"-e \" + line\n else:\n dependency = package.to_dependency()\n line = \"{}=={}\".format(package.name, package.version)\n\n requirement = dependency.to_pep_508()\n if \";\" in requirement:\n line += \"; {}\".format(requirement.split(\";\")[1].strip())\n\n if (\n package.source_type not in {\"git\", \"directory\", \"file\", \"url\"}\n and package.source_url\n ):\n indexes.append(package.source_url)\n\n if package.files and with_hashes:\n hashes = []\n for f in package.files:\n h = f[\"hash\"]\n algorithm = \"sha256\"\n if \":\" in h:\n algorithm, h = h.split(\":\")\n\n if algorithm not in self.ALLOWED_HASH_ALGORITHMS:\n continue\n\n hashes.append(\"{}:{}\".format(algorithm, 
h))\n\n if hashes:\n line += \" \\\\\\n\"\n for i, h in enumerate(hashes):\n line += \" --hash={}{}\".format(\n h, \" \\\\\\n\" if i < len(hashes) - 1 else \"\"\n )\n\n line += \"\\n\"\n content += line\n\n if indexes:\n # If we have extra indexes, we add them to the begin\n # of the output\n indexes_header = \"\"\n for index in indexes:\n repository = [\n r\n for r in self._poetry.pool.repositories\n if r.url == index.rstrip(\"/\")\n ][0]\n if (\n self._poetry.pool.has_default()\n and repository is self._poetry.pool.repositories[0]\n ):\n url = (\n repository.authenticated_url\n if with_credentials\n else repository.url\n )\n indexes_header = \"--index-url {}\\n\".format(url)\n continue\n\n url = (\n repository.authenticated_url if with_credentials else repository.url\n )\n indexes_header += \"--extra-index-url {}\\n\".format(url)\n\n content = indexes_header + \"\\n\" + content\n\n self._output(content, cwd, output)\n\n def _output(\n self, content, cwd, output\n ): # type: (str, Path, Union[IO, str]) -> None\n decoded = decode(content)\n try:\n output.write(decoded)\n except AttributeError:\n filepath = cwd / output\n with filepath.open(\"w\", encoding=\"utf-8\") as f:\n f.write(decoded)\n", "path": "poetry/utils/exporter.py"}]}
| 2,663 | 290 |
gh_patches_debug_38295 | rasdani/github-patches | git_diff | adap__flower-641 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update mypy to 0.810
</issue>
<code>
[start of src/py/flwr_experimental/ops/cluster.py]
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements compute classes for EC2."""
16 import concurrent.futures
17 from contextlib import contextmanager
18 from itertools import groupby
19 from logging import DEBUG, ERROR
20 from typing import Dict, Iterator, List, Optional, Tuple
21
22 from paramiko.client import SSHClient
23 from paramiko.sftp_attr import SFTPAttributes
24
25 from flwr.common.logger import log
26
27 from .compute.adapter import Adapter
28 from .instance import Instance
29
30 ExecInfo = Tuple[str, str]
31
32
33 class StartFailed(Exception):
34 """Raised when cluster could not start."""
35
36
37 class InstanceIdNotFound(Exception):
38 """Raised when there was no instance with given id."""
39
40
41 class InstanceMismatch(Exception):
42 """Raised when instances passed to create_instances do not have the same
43 values for RAM or CPU."""
44
45
46 class IgnoreHostKeyPolicy:
47 """Policy for accepting any unknown host key.
48
49 This is used by `paramiko.client.SSHClient`.
50 """
51
52 # pylint: disable=no-self-use, unused-argument
53 def missing_host_key(self, client: SSHClient, hostname: str, key: str) -> None:
54 """Simply return to ignore the host key.
55
56 As we create and destroy machines quite regularly and don't
57 reuse them we will not store the host key in the local system to
58 avoid pollution the local known_hosts file.
59 """
60 return None
61
62
63 SSHCredentials = Tuple[str, str] # username, key_filename
64
65
66 @contextmanager
67 def ssh_connection(
68 instance: Instance, ssh_credentials: SSHCredentials
69 ) -> Iterator[SSHClient]:
70 """Connect to server and yield SSH client."""
71 username, key_filename = ssh_credentials
72
73 client = SSHClient()
74 client.set_missing_host_key_policy(IgnoreHostKeyPolicy)
75 client.connect(
76 hostname=instance.public_ip,
77 port=instance.ssh_port,
78 username=username,
79 key_filename=key_filename,
80 )
81
82 yield client
83
84 client.close()
85
86
87 def create_instances(adapter: Adapter, instances: List[Instance], timeout: int) -> None:
88 """Start instances and set props of each instance.
89
90 Fails if CPU and RAM of instances are not all the same.
91 """
92 if not all(
93 [
94 ins.num_cpu == instances[0].num_cpu and ins.num_ram == instances[0].num_ram
95 for ins in instances
96 ]
97 ):
98 raise InstanceMismatch(
99 "Values of num_cpu and num_ram have to be equal for all instances."
100 )
101
102 # As checked before that each instance has the same num_cpu and num_ram
103 # we can just take the values from the first => instances[0]
104 adapter_instances = adapter.create_instances(
105 num_cpu=instances[0].num_cpu,
106 num_ram=instances[0].num_ram,
107 num_instance=len(instances),
108 gpu=instances[0].gpu,
109 timeout=timeout,
110 )
111
112 for i, adp_ins in enumerate(adapter_instances):
113 instance_id, private_ip, public_ip, ssh_port, state = adp_ins
114
115 instances[i].instance_id = instance_id
116 instances[i].private_ip = private_ip
117 instances[i].public_ip = public_ip
118 instances[i].ssh_port = ssh_port
119 instances[i].state = state
120
121
122 def group_instances_by_specs(instances: List[Instance]) -> List[List[Instance]]:
123 """Group instances by num_cpu and num_ram."""
124 groups: List[List[Instance]] = []
125 keyfunc = lambda ins: f"{ins.num_cpu}-{ins.num_ram}"
126 instances = sorted(instances, key=keyfunc)
127 for _, group in groupby(instances, keyfunc):
128 groups.append(list(group))
129 return groups
130
131
132 class Cluster:
133 """Compute enviroment independend compute cluster."""
134
135 def __init__(
136 self,
137 adapter: Adapter,
138 ssh_credentials: SSHCredentials,
139 instances: List[Instance],
140 timeout: int,
141 ):
142 """Create cluster.
143
144 Args:
145 timeout (int): Minutes after which the machine will shutdown and terminate.
146 This is a safety mechanism to avoid run aways cost. The user should still
147 make sure to monitor the progress in case this mechanism fails.
148
149 Example:
150 To start two groups of instances where the first one has one instance and the
151 second one has two instances you might define the following list of instances:
152
153 instances = [
154 Instance(name='server', group='server', num_cpu=2, num_ram=1.0),
155 Instance(name='client_0', group='clients', num_cpu=4, num_ram=16.0),
156 Instance(name='client_1', group='clients', num_cpu=4, num_ram=16.0),
157 ]
158
159 Depending on the adapter used not every combination of vCPU and RAM might be available.
160 """
161 instance_names = {ins.name for ins in instances}
162 assert len(instance_names) == len(instances), "Instance names must be unique."
163
164 self.adapter = adapter
165 self.ssh_credentials = ssh_credentials
166 self.instances = instances
167 self.timeout = timeout
168
169 def get_instance(self, instance_name: str) -> Instance:
170 """Return instance by instance_name."""
171 for ins in self.instances:
172 if ins.name == instance_name:
173 return ins
174
175 raise InstanceIdNotFound()
176
177 def get_instance_names(self, groups: Optional[List[str]] = None) -> List[str]:
178 """Return a list of all instance names."""
179 return [
180 ins.name for ins in self.instances if groups is None or ins.group in groups
181 ]
182
183 def start(self) -> None:
184 """Start the instance."""
185 instance_groups = group_instances_by_specs(self.instances)
186
187 with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
188 futures = [
189 executor.submit(
190 create_instances, self.adapter, instance_group, self.timeout
191 )
192 for instance_group in instance_groups
193 ]
194 concurrent.futures.wait(futures)
195
196 try:
197 for future in futures:
198 future.result()
199 # pylint: disable=broad-except
200 except Exception as exc:
201 log(
202 ERROR,
203 "Failed to start the cluster completely. Shutting down...",
204 )
205 log(ERROR, exc)
206
207 for future in futures:
208 future.cancel()
209
210 self.terminate()
211 raise StartFailed() from exc
212
213 for ins in self.instances:
214 log(DEBUG, ins)
215
216 def terminate(self) -> None:
217 """Terminate all instances and shutdown cluster."""
218 self.adapter.terminate_all_instances()
219
220 def upload(
221 self, instance_name: str, local_path: str, remote_path: str
222 ) -> SFTPAttributes:
223 """Upload local file to remote instance."""
224 instance = self.get_instance(instance_name)
225
226 with ssh_connection(instance, self.ssh_credentials) as client:
227 sftp = client.open_sftp()
228 sftp_file_attributes = sftp.put(local_path, remote_path)
229
230 return sftp_file_attributes
231
232 def upload_all(
233 self, local_path: str, remote_path: str
234 ) -> Dict[str, SFTPAttributes]:
235 """Upload file to all instances."""
236 results: Dict[str, SFTPAttributes] = {}
237
238 with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
239 # Start the load operations and mark each future with its URL
240 future_to_result = {
241 executor.submit(
242 self.upload, instance_name, local_path, remote_path
243 ): instance_name
244 for instance_name in self.get_instance_names()
245 }
246
247 for future in concurrent.futures.as_completed(future_to_result):
248 instance_name = future_to_result[future]
249 try:
250 results[instance_name] = future.result()
251 # pylint: disable=broad-except
252 except Exception as exc:
253 log(ERROR, (instance_name, exc))
254
255 return results
256
257 def exec(self, instance_name: str, command: str) -> ExecInfo:
258 """Run command on instance and return stdout."""
259 log(DEBUG, "Exec on %s: %s", instance_name, command)
260
261 instance = self.get_instance(instance_name)
262
263 with ssh_connection(instance, self.ssh_credentials) as client:
264 _, stdout, stderr = client.exec_command(command)
265 stdout = stdout.readlines()
266 stderr = stderr.readlines()
267
268 print(stdout, stderr)
269
270 return stdout, stderr
271
272 def exec_all(
273 self, command: str, groups: Optional[List[str]] = None
274 ) -> Dict[str, ExecInfo]:
275 """Run command on all instances.
276
277 If provided filter by group.
278 """
279 instance_names = self.get_instance_names(groups)
280
281 results: Dict[str, ExecInfo] = {}
282
283 with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
284 # Start the load operations and mark each future with its URL
285 future_to_result = {
286 executor.submit(self.exec, instance_name, command): instance_name
287 for instance_name in instance_names
288 }
289
290 for future in concurrent.futures.as_completed(future_to_result):
291 instance_name = future_to_result[future]
292 try:
293 results[instance_name] = future.result()
294 # pylint: disable=broad-except
295 except Exception as exc:
296 log(ERROR, (instance_name, exc))
297
298 return results
299
[end of src/py/flwr_experimental/ops/cluster.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/py/flwr_experimental/ops/cluster.py b/src/py/flwr_experimental/ops/cluster.py
--- a/src/py/flwr_experimental/ops/cluster.py
+++ b/src/py/flwr_experimental/ops/cluster.py
@@ -17,9 +17,9 @@
from contextlib import contextmanager
from itertools import groupby
from logging import DEBUG, ERROR
-from typing import Dict, Iterator, List, Optional, Tuple
+from typing import Dict, Iterator, List, Optional, Tuple, Type, Union, cast
-from paramiko.client import SSHClient
+from paramiko.client import MissingHostKeyPolicy, SSHClient
from paramiko.sftp_attr import SFTPAttributes
from flwr.common.logger import log
@@ -27,7 +27,7 @@
from .compute.adapter import Adapter
from .instance import Instance
-ExecInfo = Tuple[str, str]
+ExecInfo = Tuple[List[str], List[str]]
class StartFailed(Exception):
@@ -70,11 +70,18 @@
"""Connect to server and yield SSH client."""
username, key_filename = ssh_credentials
+ instance_ssh_port: int = cast(int, instance.ssh_port)
+ ignore_host_key_policy: Union[
+ Type[MissingHostKeyPolicy], MissingHostKeyPolicy
+ ] = cast(
+ Union[Type[MissingHostKeyPolicy], MissingHostKeyPolicy], IgnoreHostKeyPolicy
+ )
+
client = SSHClient()
- client.set_missing_host_key_policy(IgnoreHostKeyPolicy)
+ client.set_missing_host_key_policy(ignore_host_key_policy)
client.connect(
- hostname=instance.public_ip,
- port=instance.ssh_port,
+ hostname=str(instance.public_ip),
+ port=instance_ssh_port,
username=username,
key_filename=key_filename,
)
@@ -225,7 +232,9 @@
with ssh_connection(instance, self.ssh_credentials) as client:
sftp = client.open_sftp()
- sftp_file_attributes = sftp.put(local_path, remote_path)
+
+ if sftp is not None:
+ sftp_file_attributes = sftp.put(local_path, remote_path)
return sftp_file_attributes
@@ -262,12 +271,12 @@
with ssh_connection(instance, self.ssh_credentials) as client:
_, stdout, stderr = client.exec_command(command)
- stdout = stdout.readlines()
- stderr = stderr.readlines()
+ lines_stdout = stdout.readlines()
+ lines_stderr = stderr.readlines()
- print(stdout, stderr)
+ print(lines_stdout, lines_stderr)
- return stdout, stderr
+ return lines_stdout, lines_stderr
def exec_all(
self, command: str, groups: Optional[List[str]] = None
|
{"golden_diff": "diff --git a/src/py/flwr_experimental/ops/cluster.py b/src/py/flwr_experimental/ops/cluster.py\n--- a/src/py/flwr_experimental/ops/cluster.py\n+++ b/src/py/flwr_experimental/ops/cluster.py\n@@ -17,9 +17,9 @@\n from contextlib import contextmanager\n from itertools import groupby\n from logging import DEBUG, ERROR\n-from typing import Dict, Iterator, List, Optional, Tuple\n+from typing import Dict, Iterator, List, Optional, Tuple, Type, Union, cast\n \n-from paramiko.client import SSHClient\n+from paramiko.client import MissingHostKeyPolicy, SSHClient\n from paramiko.sftp_attr import SFTPAttributes\n \n from flwr.common.logger import log\n@@ -27,7 +27,7 @@\n from .compute.adapter import Adapter\n from .instance import Instance\n \n-ExecInfo = Tuple[str, str]\n+ExecInfo = Tuple[List[str], List[str]]\n \n \n class StartFailed(Exception):\n@@ -70,11 +70,18 @@\n \"\"\"Connect to server and yield SSH client.\"\"\"\n username, key_filename = ssh_credentials\n \n+ instance_ssh_port: int = cast(int, instance.ssh_port)\n+ ignore_host_key_policy: Union[\n+ Type[MissingHostKeyPolicy], MissingHostKeyPolicy\n+ ] = cast(\n+ Union[Type[MissingHostKeyPolicy], MissingHostKeyPolicy], IgnoreHostKeyPolicy\n+ )\n+\n client = SSHClient()\n- client.set_missing_host_key_policy(IgnoreHostKeyPolicy)\n+ client.set_missing_host_key_policy(ignore_host_key_policy)\n client.connect(\n- hostname=instance.public_ip,\n- port=instance.ssh_port,\n+ hostname=str(instance.public_ip),\n+ port=instance_ssh_port,\n username=username,\n key_filename=key_filename,\n )\n@@ -225,7 +232,9 @@\n \n with ssh_connection(instance, self.ssh_credentials) as client:\n sftp = client.open_sftp()\n- sftp_file_attributes = sftp.put(local_path, remote_path)\n+\n+ if sftp is not None:\n+ sftp_file_attributes = sftp.put(local_path, remote_path)\n \n return sftp_file_attributes\n \n@@ -262,12 +271,12 @@\n \n with ssh_connection(instance, self.ssh_credentials) as client:\n _, stdout, stderr = client.exec_command(command)\n- stdout = stdout.readlines()\n- stderr = stderr.readlines()\n+ lines_stdout = stdout.readlines()\n+ lines_stderr = stderr.readlines()\n \n- print(stdout, stderr)\n+ print(lines_stdout, lines_stderr)\n \n- return stdout, stderr\n+ return lines_stdout, lines_stderr\n \n def exec_all(\n self, command: str, groups: Optional[List[str]] = None\n", "issue": "Update mypy to 0.810\n\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements compute classes for EC2.\"\"\"\nimport concurrent.futures\nfrom contextlib import contextmanager\nfrom itertools import groupby\nfrom logging import DEBUG, ERROR\nfrom typing import Dict, Iterator, List, Optional, Tuple\n\nfrom paramiko.client import SSHClient\nfrom paramiko.sftp_attr import SFTPAttributes\n\nfrom flwr.common.logger import log\n\nfrom .compute.adapter import Adapter\nfrom .instance import Instance\n\nExecInfo = Tuple[str, str]\n\n\nclass StartFailed(Exception):\n \"\"\"Raised when cluster could not start.\"\"\"\n\n\nclass InstanceIdNotFound(Exception):\n \"\"\"Raised when there was no instance with given id.\"\"\"\n\n\nclass InstanceMismatch(Exception):\n \"\"\"Raised when instances passed to create_instances do not have the same\n values for RAM or CPU.\"\"\"\n\n\nclass IgnoreHostKeyPolicy:\n \"\"\"Policy for accepting any unknown host key.\n\n This is used by `paramiko.client.SSHClient`.\n \"\"\"\n\n # pylint: disable=no-self-use, unused-argument\n def missing_host_key(self, client: SSHClient, hostname: str, key: str) -> None:\n \"\"\"Simply return to ignore the host key.\n\n As we create and destroy machines quite regularly and don't\n reuse them we will not store the host key in the local system to\n avoid pollution the local known_hosts file.\n \"\"\"\n return None\n\n\nSSHCredentials = Tuple[str, str] # username, key_filename\n\n\n@contextmanager\ndef ssh_connection(\n instance: Instance, ssh_credentials: SSHCredentials\n) -> Iterator[SSHClient]:\n \"\"\"Connect to server and yield SSH client.\"\"\"\n username, key_filename = ssh_credentials\n\n client = SSHClient()\n client.set_missing_host_key_policy(IgnoreHostKeyPolicy)\n client.connect(\n hostname=instance.public_ip,\n port=instance.ssh_port,\n username=username,\n key_filename=key_filename,\n )\n\n yield client\n\n client.close()\n\n\ndef create_instances(adapter: Adapter, instances: List[Instance], timeout: int) -> None:\n \"\"\"Start instances and set props of each instance.\n\n Fails if CPU and RAM of instances are not all the same.\n \"\"\"\n if not all(\n [\n ins.num_cpu == instances[0].num_cpu and ins.num_ram == instances[0].num_ram\n for ins in instances\n ]\n ):\n raise InstanceMismatch(\n \"Values of num_cpu and num_ram have to be equal for all instances.\"\n )\n\n # As checked before that each instance has the same num_cpu and num_ram\n # we can just take the values from the first => instances[0]\n adapter_instances = adapter.create_instances(\n num_cpu=instances[0].num_cpu,\n num_ram=instances[0].num_ram,\n num_instance=len(instances),\n gpu=instances[0].gpu,\n timeout=timeout,\n )\n\n for i, adp_ins in enumerate(adapter_instances):\n instance_id, private_ip, public_ip, ssh_port, state = adp_ins\n\n instances[i].instance_id = instance_id\n instances[i].private_ip = private_ip\n instances[i].public_ip = public_ip\n instances[i].ssh_port 
= ssh_port\n instances[i].state = state\n\n\ndef group_instances_by_specs(instances: List[Instance]) -> List[List[Instance]]:\n \"\"\"Group instances by num_cpu and num_ram.\"\"\"\n groups: List[List[Instance]] = []\n keyfunc = lambda ins: f\"{ins.num_cpu}-{ins.num_ram}\"\n instances = sorted(instances, key=keyfunc)\n for _, group in groupby(instances, keyfunc):\n groups.append(list(group))\n return groups\n\n\nclass Cluster:\n \"\"\"Compute enviroment independend compute cluster.\"\"\"\n\n def __init__(\n self,\n adapter: Adapter,\n ssh_credentials: SSHCredentials,\n instances: List[Instance],\n timeout: int,\n ):\n \"\"\"Create cluster.\n\n Args:\n timeout (int): Minutes after which the machine will shutdown and terminate.\n This is a safety mechanism to avoid run aways cost. The user should still\n make sure to monitor the progress in case this mechanism fails.\n\n Example:\n To start two groups of instances where the first one has one instance and the\n second one has two instances you might define the following list of instances:\n\n instances = [\n Instance(name='server', group='server', num_cpu=2, num_ram=1.0),\n Instance(name='client_0', group='clients', num_cpu=4, num_ram=16.0),\n Instance(name='client_1', group='clients', num_cpu=4, num_ram=16.0),\n ]\n\n Depending on the adapter used not every combination of vCPU and RAM might be available.\n \"\"\"\n instance_names = {ins.name for ins in instances}\n assert len(instance_names) == len(instances), \"Instance names must be unique.\"\n\n self.adapter = adapter\n self.ssh_credentials = ssh_credentials\n self.instances = instances\n self.timeout = timeout\n\n def get_instance(self, instance_name: str) -> Instance:\n \"\"\"Return instance by instance_name.\"\"\"\n for ins in self.instances:\n if ins.name == instance_name:\n return ins\n\n raise InstanceIdNotFound()\n\n def get_instance_names(self, groups: Optional[List[str]] = None) -> List[str]:\n \"\"\"Return a list of all instance names.\"\"\"\n return [\n ins.name for ins in self.instances if groups is None or ins.group in groups\n ]\n\n def start(self) -> None:\n \"\"\"Start the instance.\"\"\"\n instance_groups = group_instances_by_specs(self.instances)\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:\n futures = [\n executor.submit(\n create_instances, self.adapter, instance_group, self.timeout\n )\n for instance_group in instance_groups\n ]\n concurrent.futures.wait(futures)\n\n try:\n for future in futures:\n future.result()\n # pylint: disable=broad-except\n except Exception as exc:\n log(\n ERROR,\n \"Failed to start the cluster completely. 
Shutting down...\",\n )\n log(ERROR, exc)\n\n for future in futures:\n future.cancel()\n\n self.terminate()\n raise StartFailed() from exc\n\n for ins in self.instances:\n log(DEBUG, ins)\n\n def terminate(self) -> None:\n \"\"\"Terminate all instances and shutdown cluster.\"\"\"\n self.adapter.terminate_all_instances()\n\n def upload(\n self, instance_name: str, local_path: str, remote_path: str\n ) -> SFTPAttributes:\n \"\"\"Upload local file to remote instance.\"\"\"\n instance = self.get_instance(instance_name)\n\n with ssh_connection(instance, self.ssh_credentials) as client:\n sftp = client.open_sftp()\n sftp_file_attributes = sftp.put(local_path, remote_path)\n\n return sftp_file_attributes\n\n def upload_all(\n self, local_path: str, remote_path: str\n ) -> Dict[str, SFTPAttributes]:\n \"\"\"Upload file to all instances.\"\"\"\n results: Dict[str, SFTPAttributes] = {}\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:\n # Start the load operations and mark each future with its URL\n future_to_result = {\n executor.submit(\n self.upload, instance_name, local_path, remote_path\n ): instance_name\n for instance_name in self.get_instance_names()\n }\n\n for future in concurrent.futures.as_completed(future_to_result):\n instance_name = future_to_result[future]\n try:\n results[instance_name] = future.result()\n # pylint: disable=broad-except\n except Exception as exc:\n log(ERROR, (instance_name, exc))\n\n return results\n\n def exec(self, instance_name: str, command: str) -> ExecInfo:\n \"\"\"Run command on instance and return stdout.\"\"\"\n log(DEBUG, \"Exec on %s: %s\", instance_name, command)\n\n instance = self.get_instance(instance_name)\n\n with ssh_connection(instance, self.ssh_credentials) as client:\n _, stdout, stderr = client.exec_command(command)\n stdout = stdout.readlines()\n stderr = stderr.readlines()\n\n print(stdout, stderr)\n\n return stdout, stderr\n\n def exec_all(\n self, command: str, groups: Optional[List[str]] = None\n ) -> Dict[str, ExecInfo]:\n \"\"\"Run command on all instances.\n\n If provided filter by group.\n \"\"\"\n instance_names = self.get_instance_names(groups)\n\n results: Dict[str, ExecInfo] = {}\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:\n # Start the load operations and mark each future with its URL\n future_to_result = {\n executor.submit(self.exec, instance_name, command): instance_name\n for instance_name in instance_names\n }\n\n for future in concurrent.futures.as_completed(future_to_result):\n instance_name = future_to_result[future]\n try:\n results[instance_name] = future.result()\n # pylint: disable=broad-except\n except Exception as exc:\n log(ERROR, (instance_name, exc))\n\n return results\n", "path": "src/py/flwr_experimental/ops/cluster.py"}]}
| 3,479 | 613 |
gh_patches_debug_27279
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-2618
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pcreate -s shows wrong link to tutorials
after a
```
pcreate -s alchemy scaffold-alchemy
```
I see a link to tutorials, but this link is a 404:
```
Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
```
</issue>
<code>
[start of pyramid/scaffolds/__init__.py]
1 import binascii
2 import os
3 from textwrap import dedent
4
5 from pyramid.compat import native_
6
7 from pyramid.scaffolds.template import Template # API
8
9 class PyramidTemplate(Template):
10 """
11 A class that can be used as a base class for Pyramid scaffolding
12 templates.
13 """
14 def pre(self, command, output_dir, vars):
15 """ Overrides :meth:`pyramid.scaffolds.template.Template.pre`, adding
16 several variables to the default variables list (including
17 ``random_string``, and ``package_logger``). It also prevents common
18 misnamings (such as naming a package "site" or naming a package
19 logger "root".
20 """
21 vars['random_string'] = native_(binascii.hexlify(os.urandom(20)))
22 package_logger = vars['package']
23 if package_logger == 'root':
24 # Rename the app logger in the rare case a project is named 'root'
25 package_logger = 'app'
26 vars['package_logger'] = package_logger
27 return Template.pre(self, command, output_dir, vars)
28
29 def post(self, command, output_dir, vars): # pragma: no cover
30 """ Overrides :meth:`pyramid.scaffolds.template.Template.post`, to
31 print "Welcome to Pyramid. Sorry for the convenience." after a
32 successful scaffolding rendering."""
33
34 separator = "=" * 79
35 msg = dedent(
36 """
37 %(separator)s
38 Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
39 Documentation: http://docs.pylonsproject.org/projects/pyramid
40
41 Twitter (tips & updates): http://twitter.com/pylons
42 Mailing List: http://groups.google.com/group/pylons-discuss
43
44 Welcome to Pyramid. Sorry for the convenience.
45 %(separator)s
46 """ % {'separator': separator})
47
48 self.out(msg)
49 return Template.post(self, command, output_dir, vars)
50
51 def out(self, msg): # pragma: no cover (replaceable testing hook)
52 print(msg)
53
54 class StarterProjectTemplate(PyramidTemplate):
55 _template_dir = 'starter'
56 summary = 'Pyramid starter project'
57
58 class ZODBProjectTemplate(PyramidTemplate):
59 _template_dir = 'zodb'
60 summary = 'Pyramid ZODB project using traversal'
61
62 class AlchemyProjectTemplate(PyramidTemplate):
63 _template_dir = 'alchemy'
64 summary = 'Pyramid SQLAlchemy project using url dispatch'
65
[end of pyramid/scaffolds/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyramid/scaffolds/__init__.py b/pyramid/scaffolds/__init__.py
--- a/pyramid/scaffolds/__init__.py
+++ b/pyramid/scaffolds/__init__.py
@@ -35,11 +35,10 @@
msg = dedent(
"""
%(separator)s
- Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
- Documentation: http://docs.pylonsproject.org/projects/pyramid
-
- Twitter (tips & updates): http://twitter.com/pylons
- Mailing List: http://groups.google.com/group/pylons-discuss
+ Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/
+ Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/
+ Twitter: https://twitter.com/trypyramid
+ Mailing List: https://groups.google.com/forum/#!forum/pylons-discuss
Welcome to Pyramid. Sorry for the convenience.
%(separator)s
@@ -53,12 +52,13 @@
class StarterProjectTemplate(PyramidTemplate):
_template_dir = 'starter'
- summary = 'Pyramid starter project'
+ summary = 'Pyramid starter project using URL dispatch and Chameleon'
class ZODBProjectTemplate(PyramidTemplate):
_template_dir = 'zodb'
- summary = 'Pyramid ZODB project using traversal'
+ summary = 'Pyramid project using ZODB, traversal, and Chameleon'
class AlchemyProjectTemplate(PyramidTemplate):
_template_dir = 'alchemy'
- summary = 'Pyramid SQLAlchemy project using url dispatch'
+ summary = 'Pyramid project using SQLAlchemy, SQLite, URL dispatch, and'
+ ' Chameleon'
|
{"golden_diff": "diff --git a/pyramid/scaffolds/__init__.py b/pyramid/scaffolds/__init__.py\n--- a/pyramid/scaffolds/__init__.py\n+++ b/pyramid/scaffolds/__init__.py\n@@ -35,11 +35,10 @@\n msg = dedent(\n \"\"\"\n %(separator)s\n- Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n- Documentation: http://docs.pylonsproject.org/projects/pyramid\n-\n- Twitter (tips & updates): http://twitter.com/pylons\n- Mailing List: http://groups.google.com/group/pylons-discuss\n+ Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/\n+ Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/\n+ Twitter: https://twitter.com/trypyramid\n+ Mailing List: https://groups.google.com/forum/#!forum/pylons-discuss\n \n Welcome to Pyramid. Sorry for the convenience.\n %(separator)s\n@@ -53,12 +52,13 @@\n \n class StarterProjectTemplate(PyramidTemplate):\n _template_dir = 'starter'\n- summary = 'Pyramid starter project'\n+ summary = 'Pyramid starter project using URL dispatch and Chameleon'\n \n class ZODBProjectTemplate(PyramidTemplate):\n _template_dir = 'zodb'\n- summary = 'Pyramid ZODB project using traversal'\n+ summary = 'Pyramid project using ZODB, traversal, and Chameleon'\n \n class AlchemyProjectTemplate(PyramidTemplate):\n _template_dir = 'alchemy'\n- summary = 'Pyramid SQLAlchemy project using url dispatch'\n+ summary = 'Pyramid project using SQLAlchemy, SQLite, URL dispatch, and'\n+ ' Chameleon'\n", "issue": "pcreate -s shows wrong link to tutorials\nafter a \n\n```\npcreate -s alchemy scaffold-alchemy\n```\n\nI see a link to tutorials, but this link is a 404: \n\n```\nTutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n```\n\n", "before_files": [{"content": "import binascii\nimport os\nfrom textwrap import dedent\n\nfrom pyramid.compat import native_\n\nfrom pyramid.scaffolds.template import Template # API\n\nclass PyramidTemplate(Template):\n \"\"\"\n A class that can be used as a base class for Pyramid scaffolding\n templates.\n \"\"\"\n def pre(self, command, output_dir, vars):\n \"\"\" Overrides :meth:`pyramid.scaffolds.template.Template.pre`, adding\n several variables to the default variables list (including\n ``random_string``, and ``package_logger``). It also prevents common\n misnamings (such as naming a package \"site\" or naming a package\n logger \"root\".\n \"\"\"\n vars['random_string'] = native_(binascii.hexlify(os.urandom(20)))\n package_logger = vars['package']\n if package_logger == 'root':\n # Rename the app logger in the rare case a project is named 'root'\n package_logger = 'app'\n vars['package_logger'] = package_logger\n return Template.pre(self, command, output_dir, vars)\n\n def post(self, command, output_dir, vars): # pragma: no cover\n \"\"\" Overrides :meth:`pyramid.scaffolds.template.Template.post`, to\n print \"Welcome to Pyramid. Sorry for the convenience.\" after a\n successful scaffolding rendering.\"\"\"\n\n separator = \"=\" * 79\n msg = dedent(\n \"\"\"\n %(separator)s\n Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n Documentation: http://docs.pylonsproject.org/projects/pyramid\n\n Twitter (tips & updates): http://twitter.com/pylons\n Mailing List: http://groups.google.com/group/pylons-discuss\n\n Welcome to Pyramid. 
Sorry for the convenience.\n %(separator)s\n \"\"\" % {'separator': separator})\n\n self.out(msg)\n return Template.post(self, command, output_dir, vars)\n\n def out(self, msg): # pragma: no cover (replaceable testing hook)\n print(msg)\n\nclass StarterProjectTemplate(PyramidTemplate):\n _template_dir = 'starter'\n summary = 'Pyramid starter project'\n\nclass ZODBProjectTemplate(PyramidTemplate):\n _template_dir = 'zodb'\n summary = 'Pyramid ZODB project using traversal'\n\nclass AlchemyProjectTemplate(PyramidTemplate):\n _template_dir = 'alchemy'\n summary = 'Pyramid SQLAlchemy project using url dispatch'\n", "path": "pyramid/scaffolds/__init__.py"}]}
| 1,257 | 398 |
gh_patches_debug_13808
|
rasdani/github-patches
|
git_diff
|
lisa-lab__pylearn2-1503
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] print_monitor_cv.py model not iterable
I've tried `print_monitor_cv.py model.pkl` but I've got
```
Traceback (most recent call last):
File "~/pylearn2/pylearn2/scripts/print_monitor_cv.py", line 84, in <module>
main(**vars(args))
File "~/pylearn2/pylearn2/scripts/print_monitor_cv.py", line 38, in main
for model in list(this_models):
TypeError: 'MLP' object is not iterable
```
so I changed [this part](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/scripts/print_monitor_cv.py#L38):
``` python
this_models = serial.load(filename)
for model in list(this_models):
# ...
```
to
``` python
# ....
this_models = serial.load(filename)
try:
this_models = list(this_models)
except TypeError:
this_models = [this_models]
for model in this_models:
# ...
```
PR?
</issue>
<code>
[start of pylearn2/scripts/print_monitor_cv.py]
1 #!/usr/bin/env python
2 """
3 Print (average) channel values for a collection of models, such as that
4 serialized by TrainCV. Based on print_monitor.py.
5
6 usage: print_monitor_cv.py model.pkl [-a]
7 """
8 from __future__ import print_function
9
10 __author__ = "Steven Kearnes"
11 __copyright__ = "Copyright 2014, Stanford University"
12 __license__ = "3-clause BSD"
13 __maintainer__ = "Steven Kearnes"
14
15 import argparse
16 import numpy as np
17
18 from pylearn2.utils import serial
19
20
21 def main(models, all=False):
22 """
23 Print (average) final channel values for a collection of models.
24
25 Parameters
26 ----------
27 models : list
28 Filename(s) for models to analyze.
29 all : bool, optional (default False)
30 Whether to output values for all models. If False, only averages
31 and standard deviations across all models are displayed.
32 """
33 epochs = []
34 time = []
35 values = {}
36 for filename in np.atleast_1d(models):
37 this_models = serial.load(filename)
38 for model in list(this_models):
39 monitor = model.monitor
40 channels = monitor.channels
41 epochs.append(monitor._epochs_seen)
42 time.append(max(channels[key].time_record[-1] for key in channels))
43 for key in sorted(channels.keys()):
44 if key not in values:
45 values[key] = []
46 values[key].append(channels[key].val_record[-1])
47 n_models = len(epochs)
48 print('number of models: {0}'.format(n_models))
49 if n_models > 1:
50 if all:
51 print('\nepochs seen:\n{0}\n{1} +/- {2}'.format(np.asarray(epochs),
52 np.mean(epochs),
53 np.std(epochs)))
54 print('\ntraining time:\n{0}\n{1} +/- {2}'.format(np.asarray(time),
55 np.mean(time),
56 np.std(time)))
57 else:
58 print('epochs seen: {0} +/- {1}'.format(np.mean(epochs),
59 np.std(epochs)))
60 print('training time: {0} +/- {1}'.format(np.mean(time),
61 np.std(time)))
62 for key in sorted(values.keys()):
63 if all:
64 print('\n{0}:\n{1}\n{2} +/- {3}'.format(
65 key, np.asarray(values[key]),
66 np.mean(values[key]), np.std(values[key])))
67 else:
68 print('{0}: {1} +/- {2}'.format(key, np.mean(values[key]),
69 np.std(values[key])))
70 else:
71 print('epochs seen: {0}'.format(epochs[0]))
72 print('training time: {0}'.format(time[0]))
73 for key in sorted(values.keys()):
74 print('{0}: {1}'.format(key, values[key][0]))
75
76 if __name__ == '__main__':
77 parser = argparse.ArgumentParser()
78 parser.add_argument('models', nargs='+',
79 help='Model or models to analyze.')
80 parser.add_argument('-a', '--all', action='store_true',
81 help='Print values for all models instead of ' +
82 'averages.')
83 args = parser.parse_args()
84 main(**vars(args))
85
[end of pylearn2/scripts/print_monitor_cv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pylearn2/scripts/print_monitor_cv.py b/pylearn2/scripts/print_monitor_cv.py
--- a/pylearn2/scripts/print_monitor_cv.py
+++ b/pylearn2/scripts/print_monitor_cv.py
@@ -13,6 +13,7 @@
__maintainer__ = "Steven Kearnes"
import argparse
+from collections import Iterable
import numpy as np
from pylearn2.utils import serial
@@ -35,6 +36,8 @@
values = {}
for filename in np.atleast_1d(models):
this_models = serial.load(filename)
+ if not isinstance(this_models, Iterable):
+ this_models = [this_models]
for model in list(this_models):
monitor = model.monitor
channels = monitor.channels
|
{"golden_diff": "diff --git a/pylearn2/scripts/print_monitor_cv.py b/pylearn2/scripts/print_monitor_cv.py\n--- a/pylearn2/scripts/print_monitor_cv.py\n+++ b/pylearn2/scripts/print_monitor_cv.py\n@@ -13,6 +13,7 @@\n __maintainer__ = \"Steven Kearnes\"\n \n import argparse\n+from collections import Iterable\n import numpy as np\n \n from pylearn2.utils import serial\n@@ -35,6 +36,8 @@\n values = {}\n for filename in np.atleast_1d(models):\n this_models = serial.load(filename)\n+ if not isinstance(this_models, Iterable):\n+ this_models = [this_models]\n for model in list(this_models):\n monitor = model.monitor\n channels = monitor.channels\n", "issue": "[bug] print_monitor_cv.py model not iterable\nI've tried `print_monitor_cv.py model.pkl` but I've got\n\n```\nTraceback (most recent call last):\n File \"~/pylearn2/pylearn2/scripts/print_monitor_cv.py\", line 84, in <module>\n main(**vars(args))\n File \"~/pylearn2/pylearn2/scripts/print_monitor_cv.py\", line 38, in main\n for model in list(this_models):\nTypeError: 'MLP' object is not iterable\n```\n\nso I changed [this part](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/scripts/print_monitor_cv.py#L38):\n\n``` python\n this_models = serial.load(filename)\n for model in list(this_models):\n # ...\n```\n\nto\n\n``` python\n # ....\n this_models = serial.load(filename)\n\n try:\n this_models = list(this_models)\n except TypeError:\n this_models = [this_models]\n\n for model in this_models:\n # ...\n```\n\nPR?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nPrint (average) channel values for a collection of models, such as that\nserialized by TrainCV. Based on print_monitor.py.\n\nusage: print_monitor_cv.py model.pkl [-a]\n\"\"\"\nfrom __future__ import print_function\n\n__author__ = \"Steven Kearnes\"\n__copyright__ = \"Copyright 2014, Stanford University\"\n__license__ = \"3-clause BSD\"\n__maintainer__ = \"Steven Kearnes\"\n\nimport argparse\nimport numpy as np\n\nfrom pylearn2.utils import serial\n\n\ndef main(models, all=False):\n \"\"\"\n Print (average) final channel values for a collection of models.\n\n Parameters\n ----------\n models : list\n Filename(s) for models to analyze.\n all : bool, optional (default False)\n Whether to output values for all models. 
If False, only averages\n and standard deviations across all models are displayed.\n \"\"\"\n epochs = []\n time = []\n values = {}\n for filename in np.atleast_1d(models):\n this_models = serial.load(filename)\n for model in list(this_models):\n monitor = model.monitor\n channels = monitor.channels\n epochs.append(monitor._epochs_seen)\n time.append(max(channels[key].time_record[-1] for key in channels))\n for key in sorted(channels.keys()):\n if key not in values:\n values[key] = []\n values[key].append(channels[key].val_record[-1])\n n_models = len(epochs)\n print('number of models: {0}'.format(n_models))\n if n_models > 1:\n if all:\n print('\\nepochs seen:\\n{0}\\n{1} +/- {2}'.format(np.asarray(epochs),\n np.mean(epochs),\n np.std(epochs)))\n print('\\ntraining time:\\n{0}\\n{1} +/- {2}'.format(np.asarray(time),\n np.mean(time),\n np.std(time)))\n else:\n print('epochs seen: {0} +/- {1}'.format(np.mean(epochs),\n np.std(epochs)))\n print('training time: {0} +/- {1}'.format(np.mean(time),\n np.std(time)))\n for key in sorted(values.keys()):\n if all:\n print('\\n{0}:\\n{1}\\n{2} +/- {3}'.format(\n key, np.asarray(values[key]),\n np.mean(values[key]), np.std(values[key])))\n else:\n print('{0}: {1} +/- {2}'.format(key, np.mean(values[key]),\n np.std(values[key])))\n else:\n print('epochs seen: {0}'.format(epochs[0]))\n print('training time: {0}'.format(time[0]))\n for key in sorted(values.keys()):\n print('{0}: {1}'.format(key, values[key][0]))\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('models', nargs='+',\n help='Model or models to analyze.')\n parser.add_argument('-a', '--all', action='store_true',\n help='Print values for all models instead of ' +\n 'averages.')\n args = parser.parse_args()\n main(**vars(args))\n", "path": "pylearn2/scripts/print_monitor_cv.py"}]}
| 1,622 | 169 |
gh_patches_debug_15325
|
rasdani/github-patches
|
git_diff
|
apache__airflow-19933
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reference to undeclared variable: "local variable 'return_code' referenced before assignment"
### Apache Airflow version
2.2.1
### Operating System
Ubuntu 20.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.3.0
apache-airflow-providers-apache-cassandra==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.0.0
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-jdbc==2.0.1
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-postgres==2.3.0
apache-airflow-providers-presto==2.0.1
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Incorrect "finally" block invokes "UnboundLocalError: local variable 'return_code' referenced before assignment"
Traceback example:
```python
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 88, in _start_by_fork
self.log.exception(
File "/usr/lib/python3.8/logging/__init__.py", line 1481, in exception
self.error(msg, *args, exc_info=exc_info, **kwargs)
File "/usr/lib/python3.8/logging/__init__.py", line 1475, in error
self._log(ERROR, msg, args, **kwargs)
File "/usr/lib/python3.8/logging/__init__.py", line 1589, in _log
self.handle(record)
File "/usr/lib/python3.8/logging/__init__.py", line 1599, in handle
self.callHandlers(record)
File "/usr/lib/python3.8/logging/__init__.py", line 1661, in callHandlers
hdlr.handle(record)
File "/usr/lib/python3.8/logging/__init__.py", line 950, in handle
rv = self.filter(record)
File "/usr/lib/python3.8/logging/__init__.py", line 811, in filter
result = f.filter(record)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 167, in filter
self._redact_exception_with_context(exc)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 150, in _redact_exception_with_context
self._redact_exception_with_context(exception.__context__)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 150, in _redact_exception_with_context
self._redact_exception_with_context(exception.__context__)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py", line 148, in _redact_exception_with_context
exception.args = (self.redact(v) for v in exception.args)
AttributeError: can't set attribute
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/executors/celery_executor.py", line 121, in _execute_in_fork
args.func(args)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 292, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 105, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py", line 163, in _run_task_by_local_task_job
run_job.run()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 245, in run
self._execute()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/jobs/local_task_job.py", line 103, in _execute
self.task_runner.start()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 41, in start
self.process = self._start_by_fork()
File "/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py", line 98, in _start_by_fork
os._exit(return_code)
UnboundLocalError: local variable 'return_code' referenced before assignment
```
Bug location:
https://github.com/apache/airflow/blob/2.2.1/airflow/task/task_runner/standard_task_runner.py#L84-L98
Explanation:
Nested exception triggered when we are trying to log exception, so return_code remains undeclared.
### What you expected to happen
return_code variable should be declared
### How to reproduce
It is probably hard to reproduce because you need to have exception in task execution as well as exception in logging function.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of airflow/task/task_runner/standard_task_runner.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 """Standard task runner"""
19 import logging
20 import os
21 from typing import Optional
22
23 import psutil
24 from setproctitle import setproctitle
25
26 from airflow.settings import CAN_FORK
27 from airflow.task.task_runner.base_task_runner import BaseTaskRunner
28 from airflow.utils.process_utils import reap_process_group
29
30
31 class StandardTaskRunner(BaseTaskRunner):
32 """Standard runner for all tasks."""
33
34 def __init__(self, local_task_job):
35 super().__init__(local_task_job)
36 self._rc = None
37 self.dag = local_task_job.task_instance.task.dag
38
39 def start(self):
40 if CAN_FORK and not self.run_as_user:
41 self.process = self._start_by_fork()
42 else:
43 self.process = self._start_by_exec()
44
45 def _start_by_exec(self):
46 subprocess = self.run_command()
47 return psutil.Process(subprocess.pid)
48
49 def _start_by_fork(self):
50 pid = os.fork()
51 if pid:
52 self.log.info("Started process %d to run task", pid)
53 return psutil.Process(pid)
54 else:
55 import signal
56
57 from airflow import settings
58 from airflow.cli.cli_parser import get_parser
59 from airflow.sentry import Sentry
60
61 signal.signal(signal.SIGINT, signal.SIG_DFL)
62 signal.signal(signal.SIGTERM, signal.SIG_DFL)
63 # Start a new process group
64 os.setpgid(0, 0)
65
66 # Force a new SQLAlchemy session. We can't share open DB handles
67 # between process. The cli code will re-create this as part of its
68 # normal startup
69 settings.engine.pool.dispose()
70 settings.engine.dispose()
71
72 parser = get_parser()
73 # [1:] - remove "airflow" from the start of the command
74 args = parser.parse_args(self._command[1:])
75
76 self.log.info('Running: %s', self._command)
77 self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)
78
79 proc_title = "airflow task runner: {0.dag_id} {0.task_id} {0.execution_date_or_run_id}"
80 if hasattr(args, "job_id"):
81 proc_title += " {0.job_id}"
82 setproctitle(proc_title.format(args))
83
84 try:
85 args.func(args, dag=self.dag)
86 return_code = 0
87 except Exception:
88 self.log.exception(
89 "Failed to execute job %s for task %s",
90 self._task_instance.job_id,
91 self._task_instance.task_id,
92 )
93 return_code = 1
94 finally:
95 # Explicitly flush any pending exception to Sentry if enabled
96 Sentry.flush()
97 logging.shutdown()
98 os._exit(return_code)
99
100 def return_code(self, timeout: int = 0) -> Optional[int]:
101 # We call this multiple times, but we can only wait on the process once
102 if self._rc is not None or not self.process:
103 return self._rc
104
105 try:
106 self._rc = self.process.wait(timeout=timeout)
107 self.process = None
108 except psutil.TimeoutExpired:
109 pass
110
111 return self._rc
112
113 def terminate(self):
114 if self.process is None:
115 return
116
117 # Reap the child process - it may already be finished
118 _ = self.return_code(timeout=0)
119
120 if self.process and self.process.is_running():
121 rcs = reap_process_group(self.process.pid, self.log)
122 self._rc = rcs.get(self.process.pid)
123
124 self.process = None
125
126 if self._rc is None:
127 # Something else reaped it before we had a chance, so let's just "guess" at an error code.
128 self._rc = -9
129
130 if self._rc == -9:
131 # If either we or psutil gives out a -9 return code, it likely means
132 # an OOM happened
133 self.log.error(
134 'Job %s was killed before it finished (likely due to running out of memory)',
135 self._task_instance.job_id,
136 )
137
[end of airflow/task/task_runner/standard_task_runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py
--- a/airflow/task/task_runner/standard_task_runner.py
+++ b/airflow/task/task_runner/standard_task_runner.py
@@ -85,12 +85,12 @@
args.func(args, dag=self.dag)
return_code = 0
except Exception:
+ return_code = 1
self.log.exception(
"Failed to execute job %s for task %s",
self._task_instance.job_id,
self._task_instance.task_id,
)
- return_code = 1
finally:
# Explicitly flush any pending exception to Sentry if enabled
Sentry.flush()
|
{"golden_diff": "diff --git a/airflow/task/task_runner/standard_task_runner.py b/airflow/task/task_runner/standard_task_runner.py\n--- a/airflow/task/task_runner/standard_task_runner.py\n+++ b/airflow/task/task_runner/standard_task_runner.py\n@@ -85,12 +85,12 @@\n args.func(args, dag=self.dag)\n return_code = 0\n except Exception:\n+ return_code = 1\n self.log.exception(\n \"Failed to execute job %s for task %s\",\n self._task_instance.job_id,\n self._task_instance.task_id,\n )\n- return_code = 1\n finally:\n # Explicitly flush any pending exception to Sentry if enabled\n Sentry.flush()\n", "issue": "Reference to undeclared variable: \"local variable 'return_code' referenced before assignment\"\n### Apache Airflow version\r\n\r\n2.2.1\r\n\r\n### Operating System\r\n\r\nUbuntu 20.04 LTS\r\n\r\n### Versions of Apache Airflow Providers\r\n\r\napache-airflow-providers-amazon==2.3.0\r\napache-airflow-providers-apache-cassandra==2.1.0\r\napache-airflow-providers-ftp==2.0.1\r\napache-airflow-providers-google==6.0.0\r\napache-airflow-providers-http==2.0.1\r\napache-airflow-providers-imap==2.0.1\r\napache-airflow-providers-jdbc==2.0.1\r\napache-airflow-providers-mysql==2.1.1\r\napache-airflow-providers-postgres==2.3.0\r\napache-airflow-providers-presto==2.0.1\r\napache-airflow-providers-slack==4.1.0\r\napache-airflow-providers-sqlite==2.0.1\r\n\r\n### Deployment\r\n\r\nVirtualenv installation\r\n\r\n### Deployment details\r\n\r\n_No response_\r\n\r\n### What happened\r\n\r\nIncorrect \"finally\" block invokes \"UnboundLocalError: local variable 'return_code' referenced before assignment\"\r\nTraceback example:\r\n```python\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py\", line 88, in _start_by_fork\r\n self.log.exception(\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 1481, in exception\r\n self.error(msg, *args, exc_info=exc_info, **kwargs)\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 1475, in error\r\n self._log(ERROR, msg, args, **kwargs)\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 1589, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 1599, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 1661, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 950, in handle\r\n rv = self.filter(record)\r\n File \"/usr/lib/python3.8/logging/__init__.py\", line 811, in filter\r\n result = f.filter(record)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py\", line 167, in filter\r\n self._redact_exception_with_context(exc)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py\", line 150, in _redact_exception_with_context\r\n self._redact_exception_with_context(exception.__context__)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py\", line 150, in _redact_exception_with_context\r\n self._redact_exception_with_context(exception.__context__)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/log/secrets_masker.py\", line 148, in _redact_exception_with_context\r\n exception.args = (self.redact(v) for v in exception.args)\r\nAttributeError: can't set attribute\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File 
\"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/executors/celery_executor.py\", line 121, in _execute_in_fork\r\n args.func(args)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/cli_parser.py\", line 48, in command\r\n return func(*args, **kwargs)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/utils/cli.py\", line 92, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py\", line 292, in task_run\r\n _run_task_by_selected_method(args, dag, ti)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py\", line 105, in _run_task_by_selected_method\r\n _run_task_by_local_task_job(args, ti)\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/cli/commands/task_command.py\", line 163, in _run_task_by_local_task_job\r\n run_job.run()\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/jobs/base_job.py\", line 245, in run\r\n self._execute()\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/jobs/local_task_job.py\", line 103, in _execute\r\n self.task_runner.start()\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py\", line 41, in start\r\n self.process = self._start_by_fork()\r\n File \"/var/lib/airflow/.venv/lib/python3.8/site-packages/airflow/task/task_runner/standard_task_runner.py\", line 98, in _start_by_fork\r\n os._exit(return_code)\r\nUnboundLocalError: local variable 'return_code' referenced before assignment\r\n```\r\n\r\nBug location:\r\nhttps://github.com/apache/airflow/blob/2.2.1/airflow/task/task_runner/standard_task_runner.py#L84-L98\r\n\r\nExplanation:\r\nNested exception triggered when we are trying to log exception, so return_code remains undeclared.\r\n\r\n### What you expected to happen\r\n\r\nreturn_code variable should be declared \r\n\r\n### How to reproduce\r\n\r\nIt is probably hard to reproduce because you need to have exception in task execution as well as exception in logging function.\r\n\r\n### Anything else\r\n\r\n_No response_\r\n\r\n### Are you willing to submit PR?\r\n\r\n- [ ] Yes I am willing to submit a PR!\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\r\n\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Standard task runner\"\"\"\nimport logging\nimport os\nfrom typing import Optional\n\nimport psutil\nfrom setproctitle import setproctitle\n\nfrom airflow.settings import CAN_FORK\nfrom airflow.task.task_runner.base_task_runner import BaseTaskRunner\nfrom airflow.utils.process_utils import reap_process_group\n\n\nclass StandardTaskRunner(BaseTaskRunner):\n \"\"\"Standard runner for all tasks.\"\"\"\n\n def __init__(self, local_task_job):\n super().__init__(local_task_job)\n self._rc = None\n self.dag = local_task_job.task_instance.task.dag\n\n def start(self):\n if CAN_FORK and not self.run_as_user:\n self.process = self._start_by_fork()\n else:\n self.process = self._start_by_exec()\n\n def _start_by_exec(self):\n subprocess = self.run_command()\n return psutil.Process(subprocess.pid)\n\n def _start_by_fork(self):\n pid = os.fork()\n if pid:\n self.log.info(\"Started process %d to run task\", pid)\n return psutil.Process(pid)\n else:\n import signal\n\n from airflow import settings\n from airflow.cli.cli_parser import get_parser\n from airflow.sentry import Sentry\n\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n # Start a new process group\n os.setpgid(0, 0)\n\n # Force a new SQLAlchemy session. We can't share open DB handles\n # between process. The cli code will re-create this as part of its\n # normal startup\n settings.engine.pool.dispose()\n settings.engine.dispose()\n\n parser = get_parser()\n # [1:] - remove \"airflow\" from the start of the command\n args = parser.parse_args(self._command[1:])\n\n self.log.info('Running: %s', self._command)\n self.log.info('Job %s: Subtask %s', self._task_instance.job_id, self._task_instance.task_id)\n\n proc_title = \"airflow task runner: {0.dag_id} {0.task_id} {0.execution_date_or_run_id}\"\n if hasattr(args, \"job_id\"):\n proc_title += \" {0.job_id}\"\n setproctitle(proc_title.format(args))\n\n try:\n args.func(args, dag=self.dag)\n return_code = 0\n except Exception:\n self.log.exception(\n \"Failed to execute job %s for task %s\",\n self._task_instance.job_id,\n self._task_instance.task_id,\n )\n return_code = 1\n finally:\n # Explicitly flush any pending exception to Sentry if enabled\n Sentry.flush()\n logging.shutdown()\n os._exit(return_code)\n\n def return_code(self, timeout: int = 0) -> Optional[int]:\n # We call this multiple times, but we can only wait on the process once\n if self._rc is not None or not self.process:\n return self._rc\n\n try:\n self._rc = self.process.wait(timeout=timeout)\n self.process = None\n except psutil.TimeoutExpired:\n pass\n\n return self._rc\n\n def terminate(self):\n if self.process is None:\n return\n\n # Reap the child process - it may already be finished\n _ = self.return_code(timeout=0)\n\n if self.process and self.process.is_running():\n rcs = reap_process_group(self.process.pid, self.log)\n self._rc = rcs.get(self.process.pid)\n\n self.process = None\n\n if self._rc is None:\n # Something else reaped it before we had a chance, so let's just \"guess\" at an error code.\n self._rc = -9\n\n if self._rc == -9:\n # If either we or psutil gives out a -9 return code, it likely means\n # an OOM happened\n self.log.error(\n 'Job %s was killed before it finished (likely due to running out of memory)',\n self._task_instance.job_id,\n )\n", "path": "airflow/task/task_runner/standard_task_runner.py"}]}
| 3,360 | 163 |
gh_patches_debug_53754
|
rasdani/github-patches
|
git_diff
|
pypa__pipenv-1662
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError: No module named shutil_get_terminal_size
Describe the issue briefly here.
```
$ pipenv run python
Creating a virtualenv for this project…
Using /usr/bin/python (2.7.14) to create virtualenv…
⠋Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/user/dotfiles/dotfiles/.local/lib/python2.7/site-packages/pipenv/pew/__main__.py", line 1, in <module>
from pipenv.patched import pew
File "/home/user/.local/lib/python2.7/site-packages/pipenv/patched/pew/__init__.py", line 3, in <module>
from . import pew
File "/home/user/.local/lib/python2.7/site-packages/pipenv/patched/pew/pew.py", line 16, in <module>
from backports.shutil_get_terminal_size import get_terminal_size
ImportError: No module named shutil_get_terminal_size
```
Virtualenv location:
```
Loading .env environment variables…
Warning: There was an unexpected error while activating your virtualenv. Continuing anyway…
Python 2.7.14 (default, Sep 23 2017, 22:06:14)
[GCC 7.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
Please run `$ python -m pipenv.help`, and paste the results here.
Pipenv version: `'11.3.2'`
Pipenv location: `'/home/user/.local/lib/python2.7/site-packages/pipenv'`
Python location: `'/usr/bin/python'`
Other Python installations in `PATH`:
- `2.7`: `/usr/bin/python2.7`
- `2.7`: `/usr/bin/python2.7`
- `3.5`: `/usr/bin/python3.5m`
- `3.5`: `/usr/bin/python3.5`
- `3.6`: `/usr/bin/python3.6m`
- `3.6`: `/usr/bin/python3.6`
- `2.7.14`: `/usr/bin/python`
- `2.7.14`: `/usr/bin/python2`
- `3.6.3`: `/usr/bin/python3`
PEP 508 Information:
```
{'implementation_name': 'cpython',
'implementation_version': '0',
'os_name': 'posix',
'platform_machine': 'x86_64',
'platform_python_implementation': 'CPython',
'platform_release': '4.13.0-36-generic',
'platform_system': 'Linux',
'platform_version': '#40-Ubuntu SMP Fri Feb 16 20:07:48 UTC 2018',
'python_full_version': '2.7.14',
'python_version': '2.7',
'sys_platform': 'linux2'}
```
System environment variables:
- `QT4_IM_MODULE`
- `ALTERNATE_EDITOR`
- `PYTHONDONTWRITEBYTECODE`
- `XDG_CURRENT_DESKTOP`
- `XDG_SESSION_TYPE`
- `QT_IM_MODULE`
- `LOGNAME`
- `USER`
- `PATH`
- `XDG_VTNR`
- `HOME`
- `PYTHONUNBUFFERED`
- `VIRTUALENVWRAPPER_SCRIPT`
- `DISPLAY`
- `XDG_SESSION_DESKTOP`
- `LANG`
- `TERM`
- `SHELL`
- `JOURNAL_STREAM`
- `SESSION_MANAGER`
- `XDG_DATA_DIRS`
- `WAYLAND_DISPLAY`
- `GNOME_DESKTOP_SESSION_ID`
- `CLUTTER_IM_MODULE`
- `TEXTDOMAIN`
- `EDITOR`
- `DESKTOP_AUTOSTART_ID`
- `XMODIFIERS`
- `_VIRTUALENVWRAPPER_API`
- `USERNAME`
- `WORKON_HOME`
- `XDG_RUNTIME_DIR`
- `SSH_AUTH_SOCK`
- `VTE_VERSION`
- `GDMSESSION`
- `IM_CONFIG_PHASE`
- `VISUAL`
- `TEXTDOMAINDIR`
- `GNOME_SHELL_SESSION_MODE`
- `XDG_CONFIG_DIRS`
- `PIP_PYTHON_PATH`
- `LESSOPEN`
- `XDG_SESSION_ID`
- `DBUS_SESSION_BUS_ADDRESS`
- `_`
- `GTK_IM_MODULE`
- `DESKTOP_SESSION`
- `LESSCLOSE`
- `OLDPWD`
- `SHLVL`
- `PWD`
- `COLORTERM`
- `XDG_MENU_PREFIX`
- `GIT_EDITOR`
- `LS_COLORS`
- `PAGER`
- `XDG_SEAT`
Pipenv–specific environment variables:
Debug–specific environment variables:
- `PATH`: `/bin/:/bin/:/bin/:/home/user/.envs/dotfiles-_bp8OBTv/bin/:/home/user/.envs/dotfiles-_bp8OBTv/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/user/dotfiles/bin:/home/user/.local/bin`
- `SHELL`: `/bin/bash`
- `EDITOR`: `emacsclient -c`
- `LANG`: `en_US.UTF-8`
- `PWD`: `/home/user/dotfiles`
---------------------------
Contents of `Pipfile` ('/home/user/dotfiles/Pipfile'):
```toml
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
"autopep8" = "*"
"flake8" = "*"
autoenv = "*"
click = "*"
gitpython = "*"
ipython = "*"
mypy = "*"
percol = "*"
pex = "*"
pycodestyle = "*"
pydocstyle = "*"
pyflakes = "*"
pylama = "*"
pylint = "*"
tldr = "*"
visidata = "*"
requests = "*"
[dev-packages]
[requires]
```
------------
##### Expected result
Describe what you expected.
##### Actual result
When possible, provide the verbose output (`--verbose`), especially for locking and dependencies resolving issues.
##### Steps to replicate
Provide the steps to replicate (which usually at least includes the commands and the Pipfile).
Pipfile
```
$ cat Pipfile
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
"autopep8" = "*"
"flake8" = "*"
autoenv = "*"
click = "*"
gitpython = "*"
ipython = "*"
mypy = "*"
percol = "*"
pex = "*"
pycodestyle = "*"
pydocstyle = "*"
pyflakes = "*"
pylama = "*"
pylint = "*"
tldr = "*"
visidata = "*"
requests = "*"
[dev-packages]
[requires]
```
# Platform
Ubuntu 17.10
</issue>
<code>
[start of pipenv/pew/__main__.py]
1 from pipenv.patched import pew
2
3 if __name__ == '__main__':
4 pew.pew.pew()
5
[end of pipenv/pew/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pipenv/pew/__main__.py b/pipenv/pew/__main__.py
--- a/pipenv/pew/__main__.py
+++ b/pipenv/pew/__main__.py
@@ -1,4 +1,13 @@
from pipenv.patched import pew
+import os
+import sys
+
+pipenv_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+pipenv_vendor = os.sep.join([pipenv_root, 'vendor'])
+pipenv_patched = os.sep.join([pipenv_root, 'patched'])
+
if __name__ == '__main__':
+ sys.path.insert(0, pipenv_vendor)
+ sys.path.insert(0, pipenv_patched)
pew.pew.pew()
|
{"golden_diff": "diff --git a/pipenv/pew/__main__.py b/pipenv/pew/__main__.py\n--- a/pipenv/pew/__main__.py\n+++ b/pipenv/pew/__main__.py\n@@ -1,4 +1,13 @@\n from pipenv.patched import pew\n+import os\n+import sys\n+\n+pipenv_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n+pipenv_vendor = os.sep.join([pipenv_root, 'vendor'])\n+pipenv_patched = os.sep.join([pipenv_root, 'patched'])\n+\n \n if __name__ == '__main__':\n+ sys.path.insert(0, pipenv_vendor)\n+ sys.path.insert(0, pipenv_patched)\n pew.pew.pew()\n", "issue": "ImportError: No module named shutil_get_terminal_size\nDescribe the issue briefly here.\r\n\r\n```\r\n$ pipenv run python\r\nCreating a virtualenv for this project\u2026\r\nUsing /usr/bin/python (2.7.14) to create virtualenv\u2026\r\n\u280bTraceback (most recent call last):\r\n File \"/usr/lib/python2.7/runpy.py\", line 174, in _run_module_as_main\r\n \"__main__\", fname, loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/home/user/dotfiles/dotfiles/.local/lib/python2.7/site-packages/pipenv/pew/__main__.py\", line 1, in <module>\r\n from pipenv.patched import pew\r\n File \"/home/user/.local/lib/python2.7/site-packages/pipenv/patched/pew/__init__.py\", line 3, in <module>\r\n from . import pew\r\n File \"/home/user/.local/lib/python2.7/site-packages/pipenv/patched/pew/pew.py\", line 16, in <module>\r\n from backports.shutil_get_terminal_size import get_terminal_size\r\nImportError: No module named shutil_get_terminal_size\r\n```\r\n\r\nVirtualenv location:\r\n\r\n```\r\nLoading .env environment variables\u2026\r\nWarning: There was an unexpected error while activating your virtualenv. Continuing anyway\u2026\r\nPython 2.7.14 (default, Sep 23 2017, 22:06:14)\r\n[GCC 7.2.0] on linux2\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>>\r\n```\r\n\r\nPlease run `$ python -m pipenv.help`, and paste the results here.\r\n\r\nPipenv version: `'11.3.2'`\r\n\r\nPipenv location: `'/home/user/.local/lib/python2.7/site-packages/pipenv'`\r\n\r\nPython location: `'/usr/bin/python'`\r\n\r\nOther Python installations in `PATH`:\r\n\r\n - `2.7`: `/usr/bin/python2.7`\r\n - `2.7`: `/usr/bin/python2.7`\r\n - `3.5`: `/usr/bin/python3.5m`\r\n - `3.5`: `/usr/bin/python3.5`\r\n - `3.6`: `/usr/bin/python3.6m`\r\n - `3.6`: `/usr/bin/python3.6`\r\n\r\n - `2.7.14`: `/usr/bin/python`\r\n - `2.7.14`: `/usr/bin/python2`\r\n - `3.6.3`: `/usr/bin/python3`\r\n\r\nPEP 508 Information:\r\n\r\n```\r\n{'implementation_name': 'cpython',\r\n 'implementation_version': '0',\r\n 'os_name': 'posix',\r\n 'platform_machine': 'x86_64',\r\n 'platform_python_implementation': 'CPython',\r\n 'platform_release': '4.13.0-36-generic',\r\n 'platform_system': 'Linux',\r\n 'platform_version': '#40-Ubuntu SMP Fri Feb 16 20:07:48 UTC 2018',\r\n 'python_full_version': '2.7.14',\r\n 'python_version': '2.7',\r\n 'sys_platform': 'linux2'}\r\n```\r\n\r\nSystem environment variables:\r\n\r\n - `QT4_IM_MODULE`\r\n - `ALTERNATE_EDITOR`\r\n - `PYTHONDONTWRITEBYTECODE`\r\n - `XDG_CURRENT_DESKTOP`\r\n - `XDG_SESSION_TYPE`\r\n - `QT_IM_MODULE`\r\n - `LOGNAME`\r\n - `USER`\r\n - `PATH`\r\n - `XDG_VTNR`\r\n - `HOME`\r\n - `PYTHONUNBUFFERED`\r\n - `VIRTUALENVWRAPPER_SCRIPT`\r\n - `DISPLAY`\r\n - `XDG_SESSION_DESKTOP`\r\n - `LANG`\r\n - `TERM`\r\n - `SHELL`\r\n - `JOURNAL_STREAM`\r\n - `SESSION_MANAGER`\r\n - `XDG_DATA_DIRS`\r\n - `WAYLAND_DISPLAY`\r\n - `GNOME_DESKTOP_SESSION_ID`\r\n - `CLUTTER_IM_MODULE`\r\n - 
`TEXTDOMAIN`\r\n - `EDITOR`\r\n - `DESKTOP_AUTOSTART_ID`\r\n - `XMODIFIERS`\r\n - `_VIRTUALENVWRAPPER_API`\r\n - `USERNAME`\r\n - `WORKON_HOME`\r\n - `XDG_RUNTIME_DIR`\r\n - `SSH_AUTH_SOCK`\r\n - `VTE_VERSION`\r\n - `GDMSESSION`\r\n - `IM_CONFIG_PHASE`\r\n - `VISUAL`\r\n - `TEXTDOMAINDIR`\r\n - `GNOME_SHELL_SESSION_MODE`\r\n - `XDG_CONFIG_DIRS`\r\n - `PIP_PYTHON_PATH`\r\n - `LESSOPEN`\r\n - `XDG_SESSION_ID`\r\n - `DBUS_SESSION_BUS_ADDRESS`\r\n - `_`\r\n - `GTK_IM_MODULE`\r\n - `DESKTOP_SESSION`\r\n - `LESSCLOSE`\r\n - `OLDPWD`\r\n - `SHLVL`\r\n - `PWD`\r\n - `COLORTERM`\r\n - `XDG_MENU_PREFIX`\r\n - `GIT_EDITOR`\r\n - `LS_COLORS`\r\n - `PAGER`\r\n - `XDG_SEAT`\r\n\r\nPipenv\u2013specific environment variables:\r\n\r\n\r\nDebug\u2013specific environment variables:\r\n\r\n - `PATH`: `/bin/:/bin/:/bin/:/home/user/.envs/dotfiles-_bp8OBTv/bin/:/home/user/.envs/dotfiles-_bp8OBTv/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/user/dotfiles/bin:/home/user/.local/bin`\r\n - `SHELL`: `/bin/bash`\r\n - `EDITOR`: `emacsclient -c`\r\n - `LANG`: `en_US.UTF-8`\r\n - `PWD`: `/home/user/dotfiles`\r\n\r\n\r\n---------------------------\r\n\r\nContents of `Pipfile` ('/home/user/dotfiles/Pipfile'):\r\n\r\n```toml\r\n[[source]]\r\n\r\nurl = \"https://pypi.python.org/simple\"\r\nverify_ssl = true\r\nname = \"pypi\"\r\n\r\n\r\n[packages]\r\n\r\n\"autopep8\" = \"*\"\r\n\"flake8\" = \"*\"\r\nautoenv = \"*\"\r\nclick = \"*\"\r\ngitpython = \"*\"\r\nipython = \"*\"\r\nmypy = \"*\"\r\npercol = \"*\"\r\npex = \"*\"\r\npycodestyle = \"*\"\r\npydocstyle = \"*\"\r\npyflakes = \"*\"\r\npylama = \"*\"\r\npylint = \"*\"\r\ntldr = \"*\"\r\nvisidata = \"*\"\r\nrequests = \"*\"\r\n\r\n\r\n[dev-packages]\r\n\r\n\r\n\r\n[requires]\r\n\r\n\r\n```\r\n\r\n\r\n\r\n------------\r\n\r\n##### Expected result\r\n\r\nDescribe what you expected.\r\n\r\n##### Actual result\r\n\r\nWhen possible, provide the verbose output (`--verbose`), especially for locking and dependencies resolving issues.\r\n\r\n##### Steps to replicate\r\n\r\nProvide the steps to replicate (which usually at least includes the commands and the Pipfile).\r\n\r\n\r\nPipfile\r\n\r\n```\r\n$ cat Pipfile \r\n[[source]]\r\n\r\nurl = \"https://pypi.python.org/simple\"\r\nverify_ssl = true\r\nname = \"pypi\"\r\n\r\n\r\n[packages]\r\n\r\n\"autopep8\" = \"*\"\r\n\"flake8\" = \"*\"\r\nautoenv = \"*\"\r\nclick = \"*\"\r\ngitpython = \"*\"\r\nipython = \"*\"\r\nmypy = \"*\"\r\npercol = \"*\"\r\npex = \"*\"\r\npycodestyle = \"*\"\r\npydocstyle = \"*\"\r\npyflakes = \"*\"\r\npylama = \"*\"\r\npylint = \"*\"\r\ntldr = \"*\"\r\nvisidata = \"*\"\r\nrequests = \"*\"\r\n\r\n\r\n[dev-packages]\r\n\r\n\r\n\r\n[requires]\r\n```\r\n\r\n\r\n\r\n# Platform\r\n\r\nUbuntu 17.10\n", "before_files": [{"content": "from pipenv.patched import pew\n\nif __name__ == '__main__':\n pew.pew.pew()\n", "path": "pipenv/pew/__main__.py"}]}
| 2,242 | 170 |
gh_patches_debug_5079
|
rasdani/github-patches
|
git_diff
|
ros2__ros2_documentation-1783
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update instructions for checking out develop, release and rolling (master) branches
Due to the release of rolling, and galactic being branched off rolling, there are some instructions that require updating in [Maintaining-a-Source-Checkout.rst](https://github.com/ros2/ros2_documentation/blob/rolling/source/Installation/Maintaining-a-Source-Checkout.rst). We must clarify between the different types of checkouts.
See [this conversation](https://github.com/ros2/ros2_documentation/pull/1693#discussion_r657035616) for details.
</issue>
<code>
[start of conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # rosindex documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 2 16:34:57 2018.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 # If extensions (or modules to document with autodoc) are in another directory,
16 # add these directories to sys.path here. If the directory is relative to the
17 # documentation root, use os.path.abspath to make it absolute, like shown here.
18 #
19
20 import itertools
21 import os
22 import sys
23 import time
24
25 from docutils.parsers.rst import Directive
26
27 sys.path.append(os.path.abspath('./sphinx-multiversion'))
28
29
30 # The suffix(es) of source filenames.
31 # You can specify multiple suffix as a list of string:
32 #
33 # The master toctree document.
34 master_doc = 'index'
35
36 # The default role
37 default_role = 'any'
38
39 # The set of warnings to suppress.
40 suppress_warnings = ['image.nonlocal_uri']
41
42 # General information about the project.
43 project = 'ROS 2 documentation'
44 author = 'Open Robotics'
45 copyright = '{}, {}'.format(time.strftime('%Y'), author)
46
47 # The version info for the project you're documenting, acts as replacement for
48 # |version| and |release|, also used in various other places throughout the
49 # built documents.
50 #
51 # The short X.Y version.
52 version = u''
53 # The full version, including alpha/beta/rc tags.
54 release = u''
55
56 # Define the default role to use for links
57 default_role = 'any'
58
59 # The language for content autogenerated by Sphinx. Refer to documentation
60 # for a list of supported languages.
61 #
62 # This is also used if you do content translation via gettext catalogs.
63 # Usually you set "language" from the command line for these cases.
64 language = None
65
66 # List of patterns, relative to source directory, that match files and
67 # directories to ignore when looking for source files.
68 # This patterns also effect to html_static_path and html_extra_path
69 exclude_patterns = ['**/_*.rst']
70
71 # The name of the Pygments (syntax highlighting) style to use.
72 pygments_style = 'sphinx'
73
74 # Add any Sphinx extension module names here, as strings. They can be
75 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
76 extensions = ['sphinx.ext.intersphinx', 'sphinx_tabs.tabs', 'sphinx_multiversion', 'sphinx_rtd_theme']
77
78 # Intersphinx mapping
79
80 intersphinx_mapping = {
81 'catkin_pkg': ('http://docs.ros.org/en/independent/api/catkin_pkg/html', None),
82 'jenkins_tools': ('http://docs.ros.org/en/independent/api/jenkins_tools/html', None),
83 'rosdep': ('http://docs.ros.org/en/independent/api/rosdep/html', None),
84 'rosdistro': ('http://docs.ros.org/en/independent/api/rosdistro/html', None),
85 'rosinstall': ('http://docs.ros.org/en/independent/api/rosinstall/html', None),
86 'rospkg': ('http://docs.ros.org/en/independent/api/rospkg/html', None),
87 'vcstools': ('http://docs.ros.org/en/independent/api/vcstools/html', None)
88 }
89
90 # -- Options for HTML output ----------------------------------------------
91
92 # The theme to use for HTML and HTML Help pages. See the documentation for
93 # a list of builtin themes.
94 #
95 html_theme = 'sphinx_rtd_theme'
96 html_theme_options = {
97 'collapse_navigation': False,
98 'sticky_navigation': True,
99 'navigation_depth': -1,
100 }
101
102 html_context = {
103 'display_github': True,
104 'github_user': 'ros2',
105 'github_repo': 'ros2_documentation',
106 'github_version': 'rolling/source/', # Will be overridden when building multiversion
107 }
108
109 templates_path = [
110 "source/_templates",
111 ]
112
113 # smv_tag_whitelist = None
114
115 smv_branch_whitelist = r'^(rolling|galactic|foxy|eloquent|dashing|crystal)$'
116
117
118 smv_released_pattern = r'^refs/(heads|remotes/[^/]+)/(galactic|foxy|eloquent|dashing|crystal).*$'
119 smv_remote_whitelist = r'^(origin)$'
120 smv_latest_version = 'galactic'
121 smv_eol_versions = ['crystal', 'dashing', 'eloquent']
122
123 distro_full_names = {
124 'crystal': 'Crystal Clemmys',
125 'dashing': 'Dashing Diademata',
126 'eloquent': 'Eloquent Elusor',
127 'foxy': 'Foxy Fitzroy',
128 'galactic': 'Galactic Geochelone',
129 'rolling': 'Rolling Ridley',
130 }
131
132 # These default values will be overridden when building multiversion
133 macros = {
134 'DISTRO': 'rolling',
135 'DISTRO_TITLE': 'Rolling',
136 'DISTRO_TITLE_FULL': 'Rolling Ridley',
137 'REPOS_FILE_BRANCH': 'master',
138 }
139
140 html_favicon = 'favicon.ico'
141
142 # Add any paths that contain custom static files (such as style sheets) here,
143 # relative to this directory. They are copied after the builtin static files,
144 # so a file named "default.css" will overwrite the builtin "default.css".
145 # html_static_path = ['_static']
146
147 # Drop any source link suffix
148 html_sourcelink_suffix = ''
149
150 # -- Options for HTMLHelp output ------------------------------------------
151
152 # Output file base name for HTML help builder.
153 htmlhelp_basename = 'ros2_docsdoc'
154
155 html_baseurl = 'https://docs.ros.org/en'
156
157 class RedirectFrom(Directive):
158
159 has_content = True
160 template_name = 'layout.html'
161 redirections = {}
162
163 @classmethod
164 def register(cls, app):
165 app.connect('html-collect-pages', cls.generate)
166 app.add_directive('redirect-from', cls)
167 return app
168
169 @classmethod
170 def generate(cls, app):
171 from sphinx.builders.html import StandaloneHTMLBuilder
172 if not isinstance(app.builder, StandaloneHTMLBuilder):
173 return
174
175 redirect_html_fragment = """
176 <link rel="canonical" href="{base_url}/{url}" />
177 <meta http-equiv="refresh" content="0; url={url}" />
178 <script>
179 window.location.href = '{url}';
180 </script>
181 """
182 redirections = {
183 os.path.splitext(os.path.relpath(
184 document_path, app.srcdir
185 ))[0]: redirect_urls
186 for document_path, redirect_urls in cls.redirections.items()
187 }
188 redirection_conflict = next((
189 (canon_1, canon_2, redirs_1.intersection(redirs_2))
190 for (canon_1, redirs_1), (canon_2, redirs_2)
191 in itertools.combinations(redirections.items(), 2)
192 if redirs_1.intersection(redirs_2)
193 ), None)
194 if redirection_conflict:
195 canonical_url_1, canonical_url_2 = redirection_conflict[:2]
196 conflicting_redirect_urls = redirection_conflict[-1]
197 raise RuntimeError(
198 'Documents {} and {} define conflicting redirects: {}'.format(
199 canonical_url_1, canonical_url_2, conflicting_redirect_urls
200 )
201 )
202 all_canonical_urls = set(redirections.keys())
203 all_redirect_urls = {
204 redirect_url
205 for redirect_urls in redirections.values()
206 for redirect_url in redirect_urls
207 }
208 conflicting_urls = all_canonical_urls.intersection(all_redirect_urls)
209 if conflicting_urls:
210 raise RuntimeError(
211 'Some redirects conflict with existing documents: {}'.format(
212 conflicting_urls
213 )
214 )
215
216 for canonical_url, redirect_urls in redirections.items():
217 for redirect_url in redirect_urls:
218 context = {
219 'canonical_url': os.path.relpath(
220 canonical_url, redirect_url
221 ),
222 'title': os.path.basename(redirect_url),
223 'metatags': redirect_html_fragment.format(
224 base_url=app.config.html_baseurl,
225 url=app.builder.get_relative_uri(
226 redirect_url, canonical_url
227 )
228 )
229 }
230 yield (redirect_url, context, cls.template_name)
231
232 def run(self):
233 document_path = self.state.document.current_source
234 if document_path not in RedirectFrom.redirections:
235 RedirectFrom.redirections[document_path] = set()
236 RedirectFrom.redirections[document_path].update(self.content)
237 return []
238
239
240 def make_router(origin, destination):
241 def _missing_reference(app, env, node, contnode):
242 from docutils import nodes
243 from docutils.utils import relative_path
244 from sphinx.util import docname_join
245
246 doctarget = docname_join(node['refdoc'], node['reftarget'])
247 if doctarget.startswith(origin):
248 routed_doctarget = doctarget.replace(origin, destination)
249 if routed_doctarget in env.all_docs:
250 newnode = nodes.reference(
251 '', contnode.astext(), internal=True
252 )
253 newnode['refuri'] = app.builder.get_relative_uri(
254 node['refdoc'], routed_doctarget
255 )
256 return newnode
257 return _missing_reference
258
259 def smv_rewrite_configs(app, config):
260 # When using Sphinx multiversion, there is no way at initial configuration time
261 # to determine the distribution we are currently targeting (conf.py is read before
262 # external defines are setup, and environment variables aren't passed through to
263 # conf.py). Instead, hook into the 'config-inited' event which is late enough
264 # to rewrite the various configuration items with the current version.
265 if app.config.smv_current_version != '':
266 app.config.html_baseurl = app.config.html_baseurl + '/' + app.config.smv_current_version
267 app.config.project = 'ROS 2 Documentation: ' + app.config.smv_current_version.title()
268
269 if app.config.smv_current_version not in ['rolling']:
270 app.config.html_logo = 'source/Releases/' + app.config.smv_current_version + '-small.png'
271
272 # Override default values
273 distro = app.config.smv_current_version
274 app.config.macros = {
275 'DISTRO': distro,
276 'DISTRO_TITLE': distro.title(),
277 'DISTRO_TITLE_FULL': distro_full_names[distro],
278 'REPOS_FILE_BRANCH' : 'master' if distro == 'rolling' else distro,
279 }
280
281 def github_link_rewrite_branch(app, pagename, templatename, context, doctree):
282 if app.config.smv_current_version != '':
283 context['github_version'] = app.config.smv_current_version + '/source/'
284 context['eol_versions'] = app.config.smv_eol_versions
285
286 def expand_macros(app, docname, source):
287 result = source[0]
288 for key, value in app.config.macros.items():
289 result = result.replace(f'{{{key}}}', value)
290 source[0] = result
291
292 def setup(app):
293 app.connect('config-inited', smv_rewrite_configs)
294 app.connect('html-page-context', github_link_rewrite_branch)
295 app.connect('source-read', expand_macros)
296 app.add_config_value('smv_eol_versions', [], 'html')
297 app.add_config_value('macros', {}, True)
298 RedirectFrom.register(app)
299
[end of conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -73,7 +73,7 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-extensions = ['sphinx.ext.intersphinx', 'sphinx_tabs.tabs', 'sphinx_multiversion', 'sphinx_rtd_theme']
+extensions = ['sphinx.ext.intersphinx', 'sphinx_tabs.tabs', 'sphinx_multiversion', 'sphinx_rtd_theme', 'sphinx.ext.ifconfig']
# Intersphinx mapping
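
For context, `sphinx.ext.ifconfig` (the extension enabled by this diff) provides an `ifconfig` directive whose expression is evaluated against Sphinx config values, which is one way the checkout instructions can differ between rolling and released distros. A minimal sketch of how conf.py could expose such a value — the `is_rolling` name is purely illustrative and not part of the patch:

```python
# Hypothetical lines for the existing setup(app) in conf.py -- illustrative only.
def setup(app):
    # ... existing app.connect()/app.add_config_value() calls ...
    # A flag that .rst sources could then test with:  .. ifconfig:: is_rolling
    app.add_config_value("is_rolling", macros["DISTRO"] == "rolling", "env")
```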
verification_info:
{"golden_diff": "diff --git a/conf.py b/conf.py\n--- a/conf.py\n+++ b/conf.py\n@@ -73,7 +73,7 @@\n \n # Add any Sphinx extension module names here, as strings. They can be\n # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n-extensions = ['sphinx.ext.intersphinx', 'sphinx_tabs.tabs', 'sphinx_multiversion', 'sphinx_rtd_theme']\n+extensions = ['sphinx.ext.intersphinx', 'sphinx_tabs.tabs', 'sphinx_multiversion', 'sphinx_rtd_theme', 'sphinx.ext.ifconfig']\n \n # Intersphinx mapping\n", "issue": "Update instructions for checking out develop, release and rolling (master) branches\nDue to the release of rolling, and galactic being branched off rolling, there are some instructions that require updating in [Maintaining-a-Source-Checkout.rst](https://github.com/ros2/ros2_documentation/blob/rolling/source/Installation/Maintaining-a-Source-Checkout.rst) . We must clarify between the different type of checkouts.\r\n\r\nSee [this conversation](https://github.com/ros2/ros2_documentation/pull/1693#discussion_r657035616) for details.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# rosindex documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 2 16:34:57 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport itertools\nimport os\nimport sys\nimport time\n\nfrom docutils.parsers.rst import Directive\n\nsys.path.append(os.path.abspath('./sphinx-multiversion'))\n\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# The master toctree document.\nmaster_doc = 'index'\n\n# The default role\ndefault_role = 'any'\n\n# The set of warnings to suppress.\nsuppress_warnings = ['image.nonlocal_uri']\n\n# General information about the project.\nproject = 'ROS 2 documentation'\nauthor = 'Open Robotics'\ncopyright = '{}, {}'.format(time.strftime('%Y'), author)\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u''\n# The full version, including alpha/beta/rc tags.\nrelease = u''\n\n# Define the default role to use for links\ndefault_role = 'any'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['**/_*.rst']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\nextensions = ['sphinx.ext.intersphinx', 'sphinx_tabs.tabs', 'sphinx_multiversion', 'sphinx_rtd_theme']\n\n# Intersphinx mapping\n\nintersphinx_mapping = {\n 'catkin_pkg': ('http://docs.ros.org/en/independent/api/catkin_pkg/html', None),\n 'jenkins_tools': ('http://docs.ros.org/en/independent/api/jenkins_tools/html', None),\n 'rosdep': ('http://docs.ros.org/en/independent/api/rosdep/html', None),\n 'rosdistro': ('http://docs.ros.org/en/independent/api/rosdistro/html', None),\n 'rosinstall': ('http://docs.ros.org/en/independent/api/rosinstall/html', None),\n 'rospkg': ('http://docs.ros.org/en/independent/api/rospkg/html', None),\n 'vcstools': ('http://docs.ros.org/en/independent/api/vcstools/html', None)\n}\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'sticky_navigation': True,\n 'navigation_depth': -1,\n}\n\nhtml_context = {\n 'display_github': True,\n 'github_user': 'ros2',\n 'github_repo': 'ros2_documentation',\n 'github_version': 'rolling/source/', # Will be overridden when building multiversion\n}\n\ntemplates_path = [\n \"source/_templates\",\n]\n\n# smv_tag_whitelist = None\n\nsmv_branch_whitelist = r'^(rolling|galactic|foxy|eloquent|dashing|crystal)$'\n\n\nsmv_released_pattern = r'^refs/(heads|remotes/[^/]+)/(galactic|foxy|eloquent|dashing|crystal).*$'\nsmv_remote_whitelist = r'^(origin)$'\nsmv_latest_version = 'galactic'\nsmv_eol_versions = ['crystal', 'dashing', 'eloquent']\n\ndistro_full_names = {\n 'crystal': 'Crystal Clemmys',\n 'dashing': 'Dashing Diademata',\n 'eloquent': 'Eloquent Elusor',\n 'foxy': 'Foxy Fitzroy',\n 'galactic': 'Galactic Geochelone',\n 'rolling': 'Rolling Ridley',\n}\n\n# These default values will be overridden when building multiversion\nmacros = {\n 'DISTRO': 'rolling',\n 'DISTRO_TITLE': 'Rolling',\n 'DISTRO_TITLE_FULL': 'Rolling Ridley',\n 'REPOS_FILE_BRANCH': 'master',\n}\n\nhtml_favicon = 'favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Drop any source link suffix\nhtml_sourcelink_suffix = ''\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ros2_docsdoc'\n\nhtml_baseurl = 'https://docs.ros.org/en'\n\nclass RedirectFrom(Directive):\n\n has_content = True\n template_name = 'layout.html'\n redirections = {}\n\n @classmethod\n def register(cls, app):\n app.connect('html-collect-pages', cls.generate)\n app.add_directive('redirect-from', cls)\n return app\n\n @classmethod\n def generate(cls, app):\n from sphinx.builders.html import StandaloneHTMLBuilder\n if not isinstance(app.builder, StandaloneHTMLBuilder):\n return\n\n redirect_html_fragment = \"\"\"\n <link rel=\"canonical\" href=\"{base_url}/{url}\" />\n <meta http-equiv=\"refresh\" content=\"0; url={url}\" />\n <script>\n window.location.href = '{url}';\n </script>\n \"\"\"\n redirections = {\n os.path.splitext(os.path.relpath(\n document_path, app.srcdir\n ))[0]: redirect_urls\n for document_path, redirect_urls in cls.redirections.items()\n }\n redirection_conflict = next((\n (canon_1, canon_2, redirs_1.intersection(redirs_2))\n for (canon_1, redirs_1), (canon_2, redirs_2)\n in itertools.combinations(redirections.items(), 2)\n if redirs_1.intersection(redirs_2)\n ), None)\n if redirection_conflict:\n canonical_url_1, canonical_url_2 = redirection_conflict[:2]\n conflicting_redirect_urls = redirection_conflict[-1]\n raise RuntimeError(\n 'Documents {} and {} define conflicting redirects: {}'.format(\n canonical_url_1, canonical_url_2, conflicting_redirect_urls\n )\n )\n all_canonical_urls = set(redirections.keys())\n all_redirect_urls = {\n redirect_url\n for redirect_urls in redirections.values()\n for redirect_url in redirect_urls\n }\n conflicting_urls = all_canonical_urls.intersection(all_redirect_urls)\n if conflicting_urls:\n raise RuntimeError(\n 'Some redirects conflict with existing documents: {}'.format(\n conflicting_urls\n )\n )\n\n for canonical_url, redirect_urls in redirections.items():\n for redirect_url in redirect_urls:\n context = {\n 'canonical_url': os.path.relpath(\n canonical_url, redirect_url\n ),\n 'title': os.path.basename(redirect_url),\n 'metatags': redirect_html_fragment.format(\n base_url=app.config.html_baseurl,\n url=app.builder.get_relative_uri(\n redirect_url, canonical_url\n )\n )\n }\n yield (redirect_url, context, cls.template_name)\n\n def run(self):\n document_path = self.state.document.current_source\n if document_path not in RedirectFrom.redirections:\n RedirectFrom.redirections[document_path] = set()\n RedirectFrom.redirections[document_path].update(self.content)\n return []\n\n\ndef make_router(origin, destination):\n def _missing_reference(app, env, node, contnode):\n from docutils import nodes\n from docutils.utils import relative_path\n from sphinx.util import docname_join\n\n doctarget = docname_join(node['refdoc'], node['reftarget'])\n if doctarget.startswith(origin):\n routed_doctarget = doctarget.replace(origin, destination)\n if routed_doctarget in env.all_docs:\n newnode = nodes.reference(\n '', contnode.astext(), internal=True\n )\n newnode['refuri'] = app.builder.get_relative_uri(\n node['refdoc'], routed_doctarget\n )\n return newnode\n return _missing_reference\n\ndef smv_rewrite_configs(app, config):\n # When using Sphinx multiversion, there is no way 
at initial configuration time\n # to determine the distribution we are currently targeting (conf.py is read before\n # external defines are setup, and environment variables aren't passed through to\n # conf.py). Instead, hook into the 'config-inited' event which is late enough\n # to rewrite the various configuration items with the current version.\n if app.config.smv_current_version != '':\n app.config.html_baseurl = app.config.html_baseurl + '/' + app.config.smv_current_version\n app.config.project = 'ROS 2 Documentation: ' + app.config.smv_current_version.title()\n\n if app.config.smv_current_version not in ['rolling']:\n app.config.html_logo = 'source/Releases/' + app.config.smv_current_version + '-small.png'\n\n # Override default values\n distro = app.config.smv_current_version\n app.config.macros = {\n 'DISTRO': distro,\n 'DISTRO_TITLE': distro.title(),\n 'DISTRO_TITLE_FULL': distro_full_names[distro],\n 'REPOS_FILE_BRANCH' : 'master' if distro == 'rolling' else distro,\n }\n\ndef github_link_rewrite_branch(app, pagename, templatename, context, doctree):\n if app.config.smv_current_version != '':\n context['github_version'] = app.config.smv_current_version + '/source/'\n context['eol_versions'] = app.config.smv_eol_versions\n\ndef expand_macros(app, docname, source):\n result = source[0]\n for key, value in app.config.macros.items():\n result = result.replace(f'{{{key}}}', value)\n source[0] = result\n\ndef setup(app):\n app.connect('config-inited', smv_rewrite_configs)\n app.connect('html-page-context', github_link_rewrite_branch)\n app.connect('source-read', expand_macros)\n app.add_config_value('smv_eol_versions', [], 'html')\n app.add_config_value('macros', {}, True)\n RedirectFrom.register(app)\n", "path": "conf.py"}]}
num_tokens_prompt: 3,956 | num_tokens_diff: 142

problem_id: gh_patches_debug_43764 | source: rasdani/github-patches | task_type: git_diff | in_source_id: conan-io__conan-3626

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MSBuild /bl flag for Binary Logging
MSBuild has a command line flag to activate the output of a Binary Log that can replay builds to gather diagnostics on builds.
It would be really useful to add this utility to the MSBuild helper.
[MSBuild Binary Log Docs](https://github.com/Microsoft/msbuild/blob/master/documentation/wiki/Binary-Log.md)
</issue>
<code>
[start of conans/client/build/msbuild.py]
1 import copy
2 import re
3
4 from conans import tools
5 from conans.client.build.visual_environment import (VisualStudioBuildEnvironment,
6 vs_build_type_flags, vs_std_cpp)
7 from conans.client.tools.oss import cpu_count
8 from conans.client.tools.win import vcvars_command
9 from conans.errors import ConanException
10 from conans.util.env_reader import get_env
11 from conans.util.files import tmp_file
12 from conans.model.conan_file import ConanFile
13
14
15 class MSBuild(object):
16
17 def __init__(self, conanfile):
18 if isinstance(conanfile, ConanFile):
19 self._conanfile = conanfile
20 self._settings = self._conanfile.settings
21 self._output = self._conanfile.output
22 self.build_env = VisualStudioBuildEnvironment(self._conanfile)
23 else: # backwards compatible with build_sln_command
24 self._settings = conanfile
25 self.build_env = None
26
27 def build(self, project_file, targets=None, upgrade_project=True, build_type=None, arch=None,
28 parallel=True, force_vcvars=False, toolset=None, platforms=None, use_env=True,
29 vcvars_ver=None, winsdk_version=None, properties=None):
30
31 self.build_env.parallel = parallel
32
33 with tools.environment_append(self.build_env.vars):
34 # Path for custom properties file
35 props_file_contents = self._get_props_file_contents()
36 with tmp_file(props_file_contents) as props_file_path:
37 vcvars = vcvars_command(self._conanfile.settings, force=force_vcvars,
38 vcvars_ver=vcvars_ver, winsdk_version=winsdk_version)
39 command = self.get_command(project_file, props_file_path,
40 targets=targets, upgrade_project=upgrade_project,
41 build_type=build_type, arch=arch, parallel=parallel,
42 toolset=toolset, platforms=platforms,
43 use_env=use_env, properties=properties)
44 command = "%s && %s" % (vcvars, command)
45 return self._conanfile.run(command)
46
47 def get_command(self, project_file, props_file_path=None, targets=None, upgrade_project=True,
48 build_type=None, arch=None, parallel=True, toolset=None, platforms=None,
49 use_env=False, properties=None):
50
51 targets = targets or []
52 properties = properties or {}
53 command = []
54
55 if upgrade_project and not get_env("CONAN_SKIP_VS_PROJECTS_UPGRADE", False):
56 command.append('devenv "%s" /upgrade &&' % project_file)
57 else:
58 self._output.info("Skipped sln project upgrade")
59
60 build_type = build_type or self._settings.get_safe("build_type")
61 arch = arch or self._settings.get_safe("arch")
62 if not build_type:
63 raise ConanException("Cannot build_sln_command, build_type not defined")
64 if not arch:
65 raise ConanException("Cannot build_sln_command, arch not defined")
66
67 command.append('msbuild "%s" /p:Configuration="%s"' % (project_file, build_type))
68 msvc_arch = {'x86': 'x86',
69 'x86_64': 'x64',
70 'armv7': 'ARM',
71 'armv8': 'ARM64'}
72 if platforms:
73 msvc_arch.update(platforms)
74 msvc_arch = msvc_arch.get(str(arch))
75 try:
76 sln = tools.load(project_file)
77 pattern = re.compile(r"GlobalSection\(SolutionConfigurationPlatforms\)"
78 r"(.*?)EndGlobalSection", re.DOTALL)
79 solution_global = pattern.search(sln).group(1)
80 lines = solution_global.splitlines()
81 lines = [s.split("=")[0].strip() for s in lines]
82 except Exception:
83 pass
84 else:
85 config = "%s|%s" % (build_type, msvc_arch)
86 if config not in "".join(lines):
87 self._output.warn("***** The configuration %s does not exist in this solution *****" % config)
88 self._output.warn("Use 'platforms' argument to define your architectures")
89
90 if use_env:
91 command.append('/p:UseEnv=true')
92
93 if msvc_arch:
94 command.append('/p:Platform="%s"' % msvc_arch)
95
96 if parallel:
97 command.append('/m:%s' % cpu_count())
98
99 if targets:
100 command.append("/target:%s" % ";".join(targets))
101
102 if toolset:
103 command.append('/p:PlatformToolset="%s"' % toolset)
104
105 if props_file_path:
106 command.append('/p:ForceImportBeforeCppTargets="%s"' % props_file_path)
107
108 for name, value in properties.items():
109 command.append('/p:%s="%s"' % (name, value))
110
111 return " ".join(command)
112
113 def _get_props_file_contents(self):
114 # how to specify runtime in command line:
115 # https://stackoverflow.com/questions/38840332/msbuild-overrides-properties-while-building-vc-project
116 runtime_library = {"MT": "MultiThreaded",
117 "MTd": "MultiThreadedDebug",
118 "MD": "MultiThreadedDLL",
119 "MDd": "MultiThreadedDebugDLL"}.get(self._settings.get_safe("compiler.runtime"), "")
120
121 if self.build_env:
122 # Take the flags from the build env, the user was able to alter them if needed
123 flags = copy.copy(self.build_env.flags)
124 flags.append(self.build_env.std)
125 else: # To be removed when build_sln_command is deprecated
126 flags = vs_build_type_flags(self._settings)
127 flags.append(vs_std_cpp(self._settings))
128
129 flags_str = " ".join(list(filter(None, flags))) # Removes empty and None elements
130 additional_node = "<AdditionalOptions>" \
131 "{} %(AdditionalOptions)" \
132 "</AdditionalOptions>".format(flags_str) if flags_str else ""
133 runtime_node = "<RuntimeLibrary>" \
134 "{}" \
135 "</RuntimeLibrary>".format(runtime_library) if runtime_library else ""
136 template = """<?xml version="1.0" encoding="utf-8"?>
137 <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
138 <ItemDefinitionGroup>
139 <ClCompile>
140 {runtime_node}
141 {additional_node}
142 </ClCompile>
143 </ItemDefinitionGroup>
144 </Project>""".format(**{"runtime_node": runtime_node,
145 "additional_node": additional_node})
146 return template
147
[end of conans/client/build/msbuild.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/conans/client/build/msbuild.py b/conans/client/build/msbuild.py
--- a/conans/client/build/msbuild.py
+++ b/conans/client/build/msbuild.py
@@ -1,5 +1,6 @@
import copy
import re
+import subprocess
from conans import tools
from conans.client.build.visual_environment import (VisualStudioBuildEnvironment,
@@ -7,8 +8,9 @@
from conans.client.tools.oss import cpu_count
from conans.client.tools.win import vcvars_command
from conans.errors import ConanException
+from conans.model.version import Version
from conans.util.env_reader import get_env
-from conans.util.files import tmp_file
+from conans.util.files import tmp_file, decode_text
from conans.model.conan_file import ConanFile
@@ -26,7 +28,7 @@
def build(self, project_file, targets=None, upgrade_project=True, build_type=None, arch=None,
parallel=True, force_vcvars=False, toolset=None, platforms=None, use_env=True,
- vcvars_ver=None, winsdk_version=None, properties=None):
+ vcvars_ver=None, winsdk_version=None, properties=None, output_binary_log=None):
self.build_env.parallel = parallel
@@ -40,13 +42,13 @@
targets=targets, upgrade_project=upgrade_project,
build_type=build_type, arch=arch, parallel=parallel,
toolset=toolset, platforms=platforms,
- use_env=use_env, properties=properties)
+ use_env=use_env, properties=properties, output_binary_log=output_binary_log)
command = "%s && %s" % (vcvars, command)
return self._conanfile.run(command)
def get_command(self, project_file, props_file_path=None, targets=None, upgrade_project=True,
build_type=None, arch=None, parallel=True, toolset=None, platforms=None,
- use_env=False, properties=None):
+ use_env=False, properties=None, output_binary_log=None):
targets = targets or []
properties = properties or {}
@@ -87,6 +89,15 @@
self._output.warn("***** The configuration %s does not exist in this solution *****" % config)
self._output.warn("Use 'platforms' argument to define your architectures")
+ if output_binary_log:
+ msbuild_version = MSBuild.get_version(self._settings)
+ if msbuild_version >= "15.3": # http://msbuildlog.com/
+ command.append('/bl' if isinstance(output_binary_log, bool)
+ else '/bl:"%s"' % output_binary_log)
+ else:
+ raise ConanException("MSBuild version detected (%s) does not support "
+ "'output_binary_log' ('/bl')" % msbuild_version)
+
if use_env:
command.append('/p:UseEnv=true')
@@ -144,3 +155,17 @@
</Project>""".format(**{"runtime_node": runtime_node,
"additional_node": additional_node})
return template
+
+ @staticmethod
+ def get_version(settings):
+ msbuild_cmd = "msbuild -version"
+ vcvars = vcvars_command(settings)
+ command = "%s && %s" % (vcvars, msbuild_cmd)
+ try:
+ out, err = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True).communicate()
+ version_line = decode_text(out).split("\n")[-1]
+ prog = re.compile("(\d+\.){2,3}\d+")
+ result = prog.match(version_line).group()
+ return Version(result)
+ except Exception as e:
+ raise ConanException("Error retrieving MSBuild version: '{}'".format(e))
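
A brief usage sketch of the new argument from a consumer recipe. The `output_binary_log` parameter and the MSBuild >= 15.3 requirement come from the diff above; the project name and settings are illustrative assumptions:

```python
from conans import ConanFile, MSBuild

class HelloConan(ConanFile):
    name = "hello"
    version = "0.1"
    settings = "os", "compiler", "build_type", "arch"

    def build(self):
        msbuild = MSBuild(self)
        # True appends a bare /bl (msbuild.binlog in the working directory);
        # a string names the log file. Requires MSBuild >= 15.3 per the check above.
        msbuild.build("hello.sln", output_binary_log="conan_msbuild.binlog")
```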
verification_info:
{"golden_diff": "diff --git a/conans/client/build/msbuild.py b/conans/client/build/msbuild.py\n--- a/conans/client/build/msbuild.py\n+++ b/conans/client/build/msbuild.py\n@@ -1,5 +1,6 @@\n import copy\n import re\n+import subprocess\n \n from conans import tools\n from conans.client.build.visual_environment import (VisualStudioBuildEnvironment,\n@@ -7,8 +8,9 @@\n from conans.client.tools.oss import cpu_count\n from conans.client.tools.win import vcvars_command\n from conans.errors import ConanException\n+from conans.model.version import Version\n from conans.util.env_reader import get_env\n-from conans.util.files import tmp_file\n+from conans.util.files import tmp_file, decode_text\n from conans.model.conan_file import ConanFile\n \n \n@@ -26,7 +28,7 @@\n \n def build(self, project_file, targets=None, upgrade_project=True, build_type=None, arch=None,\n parallel=True, force_vcvars=False, toolset=None, platforms=None, use_env=True,\n- vcvars_ver=None, winsdk_version=None, properties=None):\n+ vcvars_ver=None, winsdk_version=None, properties=None, output_binary_log=None):\n \n self.build_env.parallel = parallel\n \n@@ -40,13 +42,13 @@\n targets=targets, upgrade_project=upgrade_project,\n build_type=build_type, arch=arch, parallel=parallel,\n toolset=toolset, platforms=platforms,\n- use_env=use_env, properties=properties)\n+ use_env=use_env, properties=properties, output_binary_log=output_binary_log)\n command = \"%s && %s\" % (vcvars, command)\n return self._conanfile.run(command)\n \n def get_command(self, project_file, props_file_path=None, targets=None, upgrade_project=True,\n build_type=None, arch=None, parallel=True, toolset=None, platforms=None,\n- use_env=False, properties=None):\n+ use_env=False, properties=None, output_binary_log=None):\n \n targets = targets or []\n properties = properties or {}\n@@ -87,6 +89,15 @@\n self._output.warn(\"***** The configuration %s does not exist in this solution *****\" % config)\n self._output.warn(\"Use 'platforms' argument to define your architectures\")\n \n+ if output_binary_log:\n+ msbuild_version = MSBuild.get_version(self._settings)\n+ if msbuild_version >= \"15.3\": # http://msbuildlog.com/\n+ command.append('/bl' if isinstance(output_binary_log, bool)\n+ else '/bl:\"%s\"' % output_binary_log)\n+ else:\n+ raise ConanException(\"MSBuild version detected (%s) does not support \"\n+ \"'output_binary_log' ('/bl')\" % msbuild_version)\n+\n if use_env:\n command.append('/p:UseEnv=true')\n \n@@ -144,3 +155,17 @@\n </Project>\"\"\".format(**{\"runtime_node\": runtime_node,\n \"additional_node\": additional_node})\n return template\n+\n+ @staticmethod\n+ def get_version(settings):\n+ msbuild_cmd = \"msbuild -version\"\n+ vcvars = vcvars_command(settings)\n+ command = \"%s && %s\" % (vcvars, msbuild_cmd)\n+ try:\n+ out, err = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True).communicate()\n+ version_line = decode_text(out).split(\"\\n\")[-1]\n+ prog = re.compile(\"(\\d+\\.){2,3}\\d+\")\n+ result = prog.match(version_line).group()\n+ return Version(result)\n+ except Exception as e:\n+ raise ConanException(\"Error retrieving MSBuild version: '{}'\".format(e))\n", "issue": "MSBuild /bl flag for Binary Logging\nMSBuild has a command line flag to activate the output of a Binary Log that can replay builds to gather diagnostics on builds.\r\n\r\nI would be really useful to add this utility to the MSBuild helper.\r\n\r\n[MSBuild Binary Log Docs](https://github.com/Microsoft/msbuild/blob/master/documentation/wiki/Binary-Log.md)\r\n\r\n\n", 
"before_files": [{"content": "import copy\nimport re\n\nfrom conans import tools\nfrom conans.client.build.visual_environment import (VisualStudioBuildEnvironment,\n vs_build_type_flags, vs_std_cpp)\nfrom conans.client.tools.oss import cpu_count\nfrom conans.client.tools.win import vcvars_command\nfrom conans.errors import ConanException\nfrom conans.util.env_reader import get_env\nfrom conans.util.files import tmp_file\nfrom conans.model.conan_file import ConanFile\n\n\nclass MSBuild(object):\n\n def __init__(self, conanfile):\n if isinstance(conanfile, ConanFile):\n self._conanfile = conanfile\n self._settings = self._conanfile.settings\n self._output = self._conanfile.output\n self.build_env = VisualStudioBuildEnvironment(self._conanfile)\n else: # backwards compatible with build_sln_command\n self._settings = conanfile\n self.build_env = None\n\n def build(self, project_file, targets=None, upgrade_project=True, build_type=None, arch=None,\n parallel=True, force_vcvars=False, toolset=None, platforms=None, use_env=True,\n vcvars_ver=None, winsdk_version=None, properties=None):\n\n self.build_env.parallel = parallel\n\n with tools.environment_append(self.build_env.vars):\n # Path for custom properties file\n props_file_contents = self._get_props_file_contents()\n with tmp_file(props_file_contents) as props_file_path:\n vcvars = vcvars_command(self._conanfile.settings, force=force_vcvars,\n vcvars_ver=vcvars_ver, winsdk_version=winsdk_version)\n command = self.get_command(project_file, props_file_path,\n targets=targets, upgrade_project=upgrade_project,\n build_type=build_type, arch=arch, parallel=parallel,\n toolset=toolset, platforms=platforms,\n use_env=use_env, properties=properties)\n command = \"%s && %s\" % (vcvars, command)\n return self._conanfile.run(command)\n\n def get_command(self, project_file, props_file_path=None, targets=None, upgrade_project=True,\n build_type=None, arch=None, parallel=True, toolset=None, platforms=None,\n use_env=False, properties=None):\n\n targets = targets or []\n properties = properties or {}\n command = []\n\n if upgrade_project and not get_env(\"CONAN_SKIP_VS_PROJECTS_UPGRADE\", False):\n command.append('devenv \"%s\" /upgrade &&' % project_file)\n else:\n self._output.info(\"Skipped sln project upgrade\")\n\n build_type = build_type or self._settings.get_safe(\"build_type\")\n arch = arch or self._settings.get_safe(\"arch\")\n if not build_type:\n raise ConanException(\"Cannot build_sln_command, build_type not defined\")\n if not arch:\n raise ConanException(\"Cannot build_sln_command, arch not defined\")\n\n command.append('msbuild \"%s\" /p:Configuration=\"%s\"' % (project_file, build_type))\n msvc_arch = {'x86': 'x86',\n 'x86_64': 'x64',\n 'armv7': 'ARM',\n 'armv8': 'ARM64'}\n if platforms:\n msvc_arch.update(platforms)\n msvc_arch = msvc_arch.get(str(arch))\n try:\n sln = tools.load(project_file)\n pattern = re.compile(r\"GlobalSection\\(SolutionConfigurationPlatforms\\)\"\n r\"(.*?)EndGlobalSection\", re.DOTALL)\n solution_global = pattern.search(sln).group(1)\n lines = solution_global.splitlines()\n lines = [s.split(\"=\")[0].strip() for s in lines]\n except Exception:\n pass\n else:\n config = \"%s|%s\" % (build_type, msvc_arch)\n if config not in \"\".join(lines):\n self._output.warn(\"***** The configuration %s does not exist in this solution *****\" % config)\n self._output.warn(\"Use 'platforms' argument to define your architectures\")\n\n if use_env:\n command.append('/p:UseEnv=true')\n\n if msvc_arch:\n 
command.append('/p:Platform=\"%s\"' % msvc_arch)\n\n if parallel:\n command.append('/m:%s' % cpu_count())\n\n if targets:\n command.append(\"/target:%s\" % \";\".join(targets))\n\n if toolset:\n command.append('/p:PlatformToolset=\"%s\"' % toolset)\n\n if props_file_path:\n command.append('/p:ForceImportBeforeCppTargets=\"%s\"' % props_file_path)\n\n for name, value in properties.items():\n command.append('/p:%s=\"%s\"' % (name, value))\n\n return \" \".join(command)\n\n def _get_props_file_contents(self):\n # how to specify runtime in command line:\n # https://stackoverflow.com/questions/38840332/msbuild-overrides-properties-while-building-vc-project\n runtime_library = {\"MT\": \"MultiThreaded\",\n \"MTd\": \"MultiThreadedDebug\",\n \"MD\": \"MultiThreadedDLL\",\n \"MDd\": \"MultiThreadedDebugDLL\"}.get(self._settings.get_safe(\"compiler.runtime\"), \"\")\n\n if self.build_env:\n # Take the flags from the build env, the user was able to alter them if needed\n flags = copy.copy(self.build_env.flags)\n flags.append(self.build_env.std)\n else: # To be removed when build_sln_command is deprecated\n flags = vs_build_type_flags(self._settings)\n flags.append(vs_std_cpp(self._settings))\n\n flags_str = \" \".join(list(filter(None, flags))) # Removes empty and None elements\n additional_node = \"<AdditionalOptions>\" \\\n \"{} %(AdditionalOptions)\" \\\n \"</AdditionalOptions>\".format(flags_str) if flags_str else \"\"\n runtime_node = \"<RuntimeLibrary>\" \\\n \"{}\" \\\n \"</RuntimeLibrary>\".format(runtime_library) if runtime_library else \"\"\n template = \"\"\"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ItemDefinitionGroup>\n <ClCompile>\n {runtime_node}\n {additional_node}\n </ClCompile>\n </ItemDefinitionGroup>\n</Project>\"\"\".format(**{\"runtime_node\": runtime_node,\n \"additional_node\": additional_node})\n return template\n", "path": "conans/client/build/msbuild.py"}]}
num_tokens_prompt: 2,348 | num_tokens_diff: 830

problem_id: gh_patches_debug_38565 | source: rasdani/github-patches | task_type: git_diff | in_source_id: networkx__networkx-5371

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error when importing networkx: "module 'importlib' has no attribute 'machinery'"
When importing networkx, the error `AttributeError: module 'importlib' has no attribute 'machinery'` occurs.
It seems like networkx is not importing `importlib.machinery`.
### Steps to Reproduce
```
$ sudo docker run -it --rm fedora:35
# dnf install -y python3 python3-pip
# pip3 install networkx
# python3 --version
Python 3.10.0rc2
# python3 -c "import importlib; print(dir(importlib))"
['_RELOADING', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__import__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_bootstrap', '_bootstrap_external', '_imp', '_pack_uint32', '_unpack_uint32', 'find_loader', 'import_module', 'invalidate_caches', 'reload', 'sys', 'warnings']
# python3 -c "import networkx"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/networkx/__init__.py", line 59, in <module>
from networkx import utils
File "/usr/local/lib/python3.10/site-packages/networkx/utils/__init__.py", line 1, in <module>
from networkx.utils.misc import *
File "/usr/local/lib/python3.10/site-packages/networkx/utils/misc.py", line 23, in <module>
np = nx.lazy_import("numpy")
File "/usr/local/lib/python3.10/site-packages/networkx/lazy_imports.py", line 138, in lazy_import
tmp_loader = importlib.machinery.SourceFileLoader(module, path=None)
AttributeError: module 'importlib' has no attribute 'machinery'
# python3 -c "import importlib; import importlib.machinery; import networkx"
```
### Environment
Python version: Python 3.10.0rc2
NetworkX version: networkx-2.7
</issue>
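
The underlying behavior is general Python, independent of networkx: importing a package does not bind its submodules as attributes unless the submodule itself is imported somewhere in the process. A minimal sketch:

```python
import importlib

# On interpreters where nothing has imported the submodule yet, this attribute
# is missing -- exactly the AttributeError shown in the traceback above.
if not hasattr(importlib, "machinery"):
    import importlib.machinery  # the explicit import binds importlib.machinery

loader_cls = importlib.machinery.SourceFileLoader  # now always resolvable
```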
<code>
[start of networkx/lazy_imports.py]
1 import importlib
2 import importlib.util
3 import types
4 import os
5 import sys
6
7
8 __all__ = ["attach", "lazy_import"]
9
10
11 def attach(module_name, submodules=None, submod_attrs=None):
12 """Attach lazily loaded submodules, and functions or other attributes.
13
14 Typically, modules import submodules and attributes as follows::
15
16 import mysubmodule
17 import anothersubmodule
18
19 from .foo import someattr
20
21 The idea of this function is to replace the `__init__.py`
22 module's `__getattr__`, `__dir__`, and `__all__` attributes such that
23 all imports work exactly the way they normally would, except that the
24 actual import is delayed until the resulting module object is first used.
25
26 The typical way to call this function, replacing the above imports, is::
27
28 __getattr__, __lazy_dir__, __all__ = lazy.attach(
29 __name__,
30 ['mysubmodule', 'anothersubmodule'],
31 {'foo': 'someattr'}
32 )
33
34 This functionality requires Python 3.7 or higher.
35
36 Parameters
37 ----------
38 module_name : str
39 Typically use __name__.
40 submodules : set
41 List of submodules to lazily import.
42 submod_attrs : dict
43 Dictionary of submodule -> list of attributes / functions.
44 These attributes are imported as they are used.
45
46 Returns
47 -------
48 __getattr__, __dir__, __all__
49
50 """
51 if submod_attrs is None:
52 submod_attrs = {}
53
54 if submodules is None:
55 submodules = set()
56 else:
57 submodules = set(submodules)
58
59 attr_to_modules = {
60 attr: mod for mod, attrs in submod_attrs.items() for attr in attrs
61 }
62
63 __all__ = list(submodules | attr_to_modules.keys())
64
65 def __getattr__(name):
66 if name in submodules:
67 return importlib.import_module(f"{module_name}.{name}")
68 elif name in attr_to_modules:
69 submod = importlib.import_module(f"{module_name}.{attr_to_modules[name]}")
70 return getattr(submod, name)
71 else:
72 raise AttributeError(f"No {module_name} attribute {name}")
73
74 def __dir__():
75 return __all__
76
77 if os.environ.get("EAGER_IMPORT", ""):
78 for attr in set(attr_to_modules.keys()) | submodules:
79 __getattr__(attr)
80
81 return __getattr__, __dir__, list(__all__)
82
83
84 def lazy_import(fullname):
85 """Return a lazily imported proxy for a module or library.
86
87 We often see the following pattern::
88
89 def myfunc():
90 import scipy as sp
91 sp.argmin(...)
92 ....
93
94 This is to prevent a library, in this case `scipy`, from being
95 imported at function definition time, since that can be slow.
96
97 This function provides a proxy module that, upon access, imports
98 the actual module. So the idiom equivalent to the above example is::
99
100 sp = lazy.load("scipy")
101
102 def myfunc():
103 sp.argmin(...)
104 ....
105
106 The initial import time is fast because the actual import is delayed
107 until the first attribute is requested. The overall import time may
108 decrease as well for users that don't make use of large portions
109 of the library.
110
111 Parameters
112 ----------
113 fullname : str
114 The full name of the package or subpackage to import. For example::
115
116 sp = lazy.load('scipy') # import scipy as sp
117 spla = lazy.load('scipy.linalg') # import scipy.linalg as spla
118
119 Returns
120 -------
121 pm : importlib.util._LazyModule
122 Proxy module. Can be used like any regularly imported module.
123 Actual loading of the module occurs upon first attribute request.
124
125 """
126 try:
127 return sys.modules[fullname]
128 except:
129 pass
130
131 # Not previously loaded -- look it up
132 spec = importlib.util.find_spec(fullname)
133
134 if spec is None:
135 # module not found - construct a DelayedImportErrorModule
136 spec = importlib.util.spec_from_loader(fullname, loader=None)
137 module = importlib.util.module_from_spec(spec)
138 tmp_loader = importlib.machinery.SourceFileLoader(module, path=None)
139 loader = DelayedImportErrorLoader(tmp_loader)
140 loader.exec_module(module)
141 # dont add to sys.modules. The module wasn't found.
142 return module
143
144 module = importlib.util.module_from_spec(spec)
145 sys.modules[fullname] = module
146
147 loader = importlib.util.LazyLoader(spec.loader)
148 loader.exec_module(module)
149
150 return module
151
152
153 class DelayedImportErrorLoader(importlib.util.LazyLoader):
154 def exec_module(self, module):
155 super().exec_module(module)
156 module.__class__ = DelayedImportErrorModule
157
158
159 class DelayedImportErrorModule(types.ModuleType):
160 def __getattribute__(self, attr):
161 """Trigger a ModuleNotFoundError upon attribute access"""
162 spec = super().__getattribute__("__spec__")
163 # allows isinstance and type functions to work without raising error
164 if attr in ["__class__"]:
165 return super().__getattribute__("__class__")
166
167 raise ModuleNotFoundError(
168 f"Delayed Report: module named '{spec.name}' not found.\n"
169 "Reporting was Lazy -- delayed until module attributes accessed.\n"
170 f"Most likely, {spec.name} is not installed"
171 )
172
[end of networkx/lazy_imports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/networkx/lazy_imports.py b/networkx/lazy_imports.py
--- a/networkx/lazy_imports.py
+++ b/networkx/lazy_imports.py
@@ -1,5 +1,6 @@
import importlib
import importlib.util
+import inspect
import types
import os
import sys
@@ -81,6 +82,24 @@
return __getattr__, __dir__, list(__all__)
+class DelayedImportErrorModule(types.ModuleType):
+ def __init__(self, frame_data, *args, **kwargs):
+ self.__frame_data = frame_data
+ super().__init__(*args, **kwargs)
+
+ def __getattr__(self, x):
+ if x in ("__class__", "__file__", "__frame_data"):
+ super().__getattr__(x)
+ else:
+ fd = self.__frame_data
+ raise ModuleNotFoundError(
+ f"No module named '{fd['spec']}'\n\n"
+ "This error is lazily reported, having originally occured in\n"
+ f' File {fd["filename"]}, line {fd["lineno"]}, in {fd["function"]}\n\n'
+ f'----> {"".join(fd["code_context"]).strip()}'
+ )
+
+
def lazy_import(fullname):
"""Return a lazily imported proxy for a module or library.
@@ -132,14 +151,18 @@
spec = importlib.util.find_spec(fullname)
if spec is None:
- # module not found - construct a DelayedImportErrorModule
- spec = importlib.util.spec_from_loader(fullname, loader=None)
- module = importlib.util.module_from_spec(spec)
- tmp_loader = importlib.machinery.SourceFileLoader(module, path=None)
- loader = DelayedImportErrorLoader(tmp_loader)
- loader.exec_module(module)
- # dont add to sys.modules. The module wasn't found.
- return module
+ try:
+ parent = inspect.stack()[1]
+ frame_data = {
+ "spec": fullname,
+ "filename": parent.filename,
+ "lineno": parent.lineno,
+ "function": parent.function,
+ "code_context": parent.code_context,
+ }
+ return DelayedImportErrorModule(frame_data, "DelayedImportErrorModule")
+ finally:
+ del parent
module = importlib.util.module_from_spec(spec)
sys.modules[fullname] = module
@@ -148,24 +171,3 @@
loader.exec_module(module)
return module
-
-
-class DelayedImportErrorLoader(importlib.util.LazyLoader):
- def exec_module(self, module):
- super().exec_module(module)
- module.__class__ = DelayedImportErrorModule
-
-
-class DelayedImportErrorModule(types.ModuleType):
- def __getattribute__(self, attr):
- """Trigger a ModuleNotFoundError upon attribute access"""
- spec = super().__getattribute__("__spec__")
- # allows isinstance and type functions to work without raising error
- if attr in ["__class__"]:
- return super().__getattribute__("__class__")
-
- raise ModuleNotFoundError(
- f"Delayed Report: module named '{spec.name}' not found.\n"
- "Reporting was Lazy -- delayed until module attributes accessed.\n"
- f"Most likely, {spec.name} is not installed"
- )
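
A sketch of the post-patch behavior implied by the diff: when the requested module cannot be found, `lazy_import` now returns a `DelayedImportErrorModule` built from the caller's frame instead of touching `importlib.machinery`:

```python
import networkx as nx

sp = nx.lazy_import("scipy")   # never raises here, even if scipy is not installed

# With scipy installed, the first attribute access performs the real import:
#     sp.linalg
# Without scipy, attribute access raises ModuleNotFoundError whose message names
# the file and line where lazy_import() was called.
```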
verification_info:
{"golden_diff": "diff --git a/networkx/lazy_imports.py b/networkx/lazy_imports.py\n--- a/networkx/lazy_imports.py\n+++ b/networkx/lazy_imports.py\n@@ -1,5 +1,6 @@\n import importlib\n import importlib.util\n+import inspect\n import types\n import os\n import sys\n@@ -81,6 +82,24 @@\n return __getattr__, __dir__, list(__all__)\n \n \n+class DelayedImportErrorModule(types.ModuleType):\n+ def __init__(self, frame_data, *args, **kwargs):\n+ self.__frame_data = frame_data\n+ super().__init__(*args, **kwargs)\n+\n+ def __getattr__(self, x):\n+ if x in (\"__class__\", \"__file__\", \"__frame_data\"):\n+ super().__getattr__(x)\n+ else:\n+ fd = self.__frame_data\n+ raise ModuleNotFoundError(\n+ f\"No module named '{fd['spec']}'\\n\\n\"\n+ \"This error is lazily reported, having originally occured in\\n\"\n+ f' File {fd[\"filename\"]}, line {fd[\"lineno\"]}, in {fd[\"function\"]}\\n\\n'\n+ f'----> {\"\".join(fd[\"code_context\"]).strip()}'\n+ )\n+\n+\n def lazy_import(fullname):\n \"\"\"Return a lazily imported proxy for a module or library.\n \n@@ -132,14 +151,18 @@\n spec = importlib.util.find_spec(fullname)\n \n if spec is None:\n- # module not found - construct a DelayedImportErrorModule\n- spec = importlib.util.spec_from_loader(fullname, loader=None)\n- module = importlib.util.module_from_spec(spec)\n- tmp_loader = importlib.machinery.SourceFileLoader(module, path=None)\n- loader = DelayedImportErrorLoader(tmp_loader)\n- loader.exec_module(module)\n- # dont add to sys.modules. The module wasn't found.\n- return module\n+ try:\n+ parent = inspect.stack()[1]\n+ frame_data = {\n+ \"spec\": fullname,\n+ \"filename\": parent.filename,\n+ \"lineno\": parent.lineno,\n+ \"function\": parent.function,\n+ \"code_context\": parent.code_context,\n+ }\n+ return DelayedImportErrorModule(frame_data, \"DelayedImportErrorModule\")\n+ finally:\n+ del parent\n \n module = importlib.util.module_from_spec(spec)\n sys.modules[fullname] = module\n@@ -148,24 +171,3 @@\n loader.exec_module(module)\n \n return module\n-\n-\n-class DelayedImportErrorLoader(importlib.util.LazyLoader):\n- def exec_module(self, module):\n- super().exec_module(module)\n- module.__class__ = DelayedImportErrorModule\n-\n-\n-class DelayedImportErrorModule(types.ModuleType):\n- def __getattribute__(self, attr):\n- \"\"\"Trigger a ModuleNotFoundError upon attribute access\"\"\"\n- spec = super().__getattribute__(\"__spec__\")\n- # allows isinstance and type functions to work without raising error\n- if attr in [\"__class__\"]:\n- return super().__getattribute__(\"__class__\")\n-\n- raise ModuleNotFoundError(\n- f\"Delayed Report: module named '{spec.name}' not found.\\n\"\n- \"Reporting was Lazy -- delayed until module attributes accessed.\\n\"\n- f\"Most likely, {spec.name} is not installed\"\n- )\n", "issue": "Error when importing networkx: \"module 'importlib' has no attribute 'machinery'\"\nWhen importing networkx, the error `AttributeError: module 'importlib' has no attribute 'machinery'` occurs.\r\n\r\nIt seems like networkx is not importing `importlib.machinery`.\r\n\r\n### Steps to Reproduce\r\n```\r\n$ sudo docker run -it --rm fedora:35\r\n# dnf install -y python3 python3-pip\r\n# pip3 install networkx\r\n# python3 --version\r\nPython 3.10.0rc2\r\n# python3 -c \"import importlib; print(dir(importlib))\"\r\n['_RELOADING', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__import__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_bootstrap', '_bootstrap_external', '_imp', '_pack_uint32', 
'_unpack_uint32', 'find_loader', 'import_module', 'invalidate_caches', 'reload', 'sys', 'warnings']\r\n# python3 -c \"import networkx\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.10/site-packages/networkx/__init__.py\", line 59, in <module>\r\n from networkx import utils\r\n File \"/usr/local/lib/python3.10/site-packages/networkx/utils/__init__.py\", line 1, in <module>\r\n from networkx.utils.misc import *\r\n File \"/usr/local/lib/python3.10/site-packages/networkx/utils/misc.py\", line 23, in <module>\r\n np = nx.lazy_import(\"numpy\")\r\n File \"/usr/local/lib/python3.10/site-packages/networkx/lazy_imports.py\", line 138, in lazy_import\r\n tmp_loader = importlib.machinery.SourceFileLoader(module, path=None)\r\nAttributeError: module 'importlib' has no attribute 'machinery'\r\n# python3 -c \"import importlib; import importlib.machinery; import networkx\"\r\n```\r\n\r\n### Environment\r\nPython version: Python 3.10.0rc2\r\nNetworkX version: networkx-2.7\n", "before_files": [{"content": "import importlib\nimport importlib.util\nimport types\nimport os\nimport sys\n\n\n__all__ = [\"attach\", \"lazy_import\"]\n\n\ndef attach(module_name, submodules=None, submod_attrs=None):\n \"\"\"Attach lazily loaded submodules, and functions or other attributes.\n\n Typically, modules import submodules and attributes as follows::\n\n import mysubmodule\n import anothersubmodule\n\n from .foo import someattr\n\n The idea of this function is to replace the `__init__.py`\n module's `__getattr__`, `__dir__`, and `__all__` attributes such that\n all imports work exactly the way they normally would, except that the\n actual import is delayed until the resulting module object is first used.\n\n The typical way to call this function, replacing the above imports, is::\n\n __getattr__, __lazy_dir__, __all__ = lazy.attach(\n __name__,\n ['mysubmodule', 'anothersubmodule'],\n {'foo': 'someattr'}\n )\n\n This functionality requires Python 3.7 or higher.\n\n Parameters\n ----------\n module_name : str\n Typically use __name__.\n submodules : set\n List of submodules to lazily import.\n submod_attrs : dict\n Dictionary of submodule -> list of attributes / functions.\n These attributes are imported as they are used.\n\n Returns\n -------\n __getattr__, __dir__, __all__\n\n \"\"\"\n if submod_attrs is None:\n submod_attrs = {}\n\n if submodules is None:\n submodules = set()\n else:\n submodules = set(submodules)\n\n attr_to_modules = {\n attr: mod for mod, attrs in submod_attrs.items() for attr in attrs\n }\n\n __all__ = list(submodules | attr_to_modules.keys())\n\n def __getattr__(name):\n if name in submodules:\n return importlib.import_module(f\"{module_name}.{name}\")\n elif name in attr_to_modules:\n submod = importlib.import_module(f\"{module_name}.{attr_to_modules[name]}\")\n return getattr(submod, name)\n else:\n raise AttributeError(f\"No {module_name} attribute {name}\")\n\n def __dir__():\n return __all__\n\n if os.environ.get(\"EAGER_IMPORT\", \"\"):\n for attr in set(attr_to_modules.keys()) | submodules:\n __getattr__(attr)\n\n return __getattr__, __dir__, list(__all__)\n\n\ndef lazy_import(fullname):\n \"\"\"Return a lazily imported proxy for a module or library.\n\n We often see the following pattern::\n\n def myfunc():\n import scipy as sp\n sp.argmin(...)\n ....\n\n This is to prevent a library, in this case `scipy`, from being\n imported at function definition time, since that can be slow.\n\n This function provides a proxy module 
that, upon access, imports\n the actual module. So the idiom equivalent to the above example is::\n\n sp = lazy.load(\"scipy\")\n\n def myfunc():\n sp.argmin(...)\n ....\n\n The initial import time is fast because the actual import is delayed\n until the first attribute is requested. The overall import time may\n decrease as well for users that don't make use of large portions\n of the library.\n\n Parameters\n ----------\n fullname : str\n The full name of the package or subpackage to import. For example::\n\n sp = lazy.load('scipy') # import scipy as sp\n spla = lazy.load('scipy.linalg') # import scipy.linalg as spla\n\n Returns\n -------\n pm : importlib.util._LazyModule\n Proxy module. Can be used like any regularly imported module.\n Actual loading of the module occurs upon first attribute request.\n\n \"\"\"\n try:\n return sys.modules[fullname]\n except:\n pass\n\n # Not previously loaded -- look it up\n spec = importlib.util.find_spec(fullname)\n\n if spec is None:\n # module not found - construct a DelayedImportErrorModule\n spec = importlib.util.spec_from_loader(fullname, loader=None)\n module = importlib.util.module_from_spec(spec)\n tmp_loader = importlib.machinery.SourceFileLoader(module, path=None)\n loader = DelayedImportErrorLoader(tmp_loader)\n loader.exec_module(module)\n # dont add to sys.modules. The module wasn't found.\n return module\n\n module = importlib.util.module_from_spec(spec)\n sys.modules[fullname] = module\n\n loader = importlib.util.LazyLoader(spec.loader)\n loader.exec_module(module)\n\n return module\n\n\nclass DelayedImportErrorLoader(importlib.util.LazyLoader):\n def exec_module(self, module):\n super().exec_module(module)\n module.__class__ = DelayedImportErrorModule\n\n\nclass DelayedImportErrorModule(types.ModuleType):\n def __getattribute__(self, attr):\n \"\"\"Trigger a ModuleNotFoundError upon attribute access\"\"\"\n spec = super().__getattribute__(\"__spec__\")\n # allows isinstance and type functions to work without raising error\n if attr in [\"__class__\"]:\n return super().__getattribute__(\"__class__\")\n\n raise ModuleNotFoundError(\n f\"Delayed Report: module named '{spec.name}' not found.\\n\"\n \"Reporting was Lazy -- delayed until module attributes accessed.\\n\"\n f\"Most likely, {spec.name} is not installed\"\n )\n", "path": "networkx/lazy_imports.py"}]}
| 2,609 | 761 |
gh_patches_debug_31149
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2636
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider papa_johns is broken
During the global build at 2021-08-18-14-42-26, spider **papa_johns** failed with **3199 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/papa_johns.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/papa_johns.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/papa_johns.geojson))
</issue>
<code>
[start of locations/spiders/papa_johns.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8 day_map = {'MONDAY': 'Mo', 'TUESDAY': 'Tu', 'WEDNESDAY': 'We', 'THURSDAY': 'Th',
9 'FRIDAY': 'Fr', 'SATURDAY': 'Sa', 'SUNDAY': 'Su'}
10
11
12 class PapaJohnsSpider(scrapy.Spider):
13
14 name = "papa_johns"
15 item_attributes = { 'brand': "Papa John's Pizza", 'brand_wikidata': "Q2759586" }
16 allowed_domains = ["papajohns.com", ]
17
18 start_urls = (
19 'https://locations.papajohns.com/',
20 )
21 download_delay = 0.2
22
23 def parse_hours(self, hours):
24 if not hours:
25 return ''
26 try:
27 opening_hours = OpeningHours()
28 the_hours = json.loads(hours[0])
29 for day in the_hours:
30 the_day = day_map[day['day']]
31 the_start = str(day['intervals'][0]['start'])
32 the_end = str(day['intervals'][0]['end'])
33 if the_start == '0':
34 the_start = '000'
35 if the_end == '0':
36 the_end = '000'
37 opening_hours.add_range(day=the_day, open_time=the_start, close_time=the_end, time_format='%H%M')
38 return opening_hours.as_opening_hours()
39 except IndexError:
40 return ''
41
42 def parse_store(self, response):
43 hours = response.xpath('//div[@class="c-location-hours-details-wrapper js-location-hours"]/@data-days').extract()
44 opening_hours = self.parse_hours(hours)
45
46 if response.xpath('//address[@class="c-address"]/div[3]/span/text()').extract_first() is not None:
47 city = response.xpath('//address[@class="c-address"]/div[3]/span/text()').extract_first()
48 else:
49 city = response.xpath('//address[@class="c-address"]/div[2]/span/text()').extract_first()
50
51 if response.xpath('//address[@class="c-address"]/div[2]/abbr/text()').extract_first() is not None:
52 the_state = response.xpath('//address[@class="c-address"]/div[2]/abbr/text()').extract_first()
53 the_postal = response.xpath('//address[@class="c-address"]/div[2]/span[2]/text()').extract_first()
54 else:
55 the_state = response.xpath('//address[@class="c-address"]/div[3]/abbr/text()').extract_first()
56 the_postal = response.xpath('//address[@class="c-address"]/div[3]/span[2]/text()').extract_first()
57
58 if '/united-states/' in response.url:
59 country = 'US'
60 elif '/canada/' in response.url:
61 country = 'CA'
62 else:
63 country = ''
64
65 props = {
66 'ref': response.xpath('//main/@itemid').extract_first().split('#')[1],
67 'website': response.url,
68 'addr_full': response.xpath('//address[@class="c-address"]/div[1]/span/text()').extract_first(),
69 'phone': response.xpath('//div[@class="c-phone-number c-phone-main-number"]/a/text()').extract_first(),
70 'city': city,
71 'postcode': the_postal,
72 'state': the_state,
73 'opening_hours': opening_hours,
74 'country': country,
75 'lat': float(response.xpath('//span[@class="coordinates"]/meta[1]/@content').extract_first()),
76 'lon': float(response.xpath('//span[@class="coordinates"]/meta[2]/@content').extract_first()),
77 }
78
79 yield GeojsonPointItem(**props)
80
81 def parse_within_city(self, response):
82 stores = response.xpath('//h2[@class="Teaser-title"]/a/@href').extract()
83
84 for store in stores:
85 yield scrapy.Request(
86 response.urljoin(store),
87 callback=self.parse_store
88 )
89
90 def parse_city(self, response):
91 cities = response.xpath('//li[@class="Directory-listItem"]/a/@href').extract()
92
93 for city in cities:
94 determine_multi_state = city.split('/')
95 if len(determine_multi_state) == 4:
96 yield scrapy.Request(
97 response.urljoin(city),
98 callback=self.parse_within_city
99 )
100 else:
101 yield scrapy.Request(
102 response.urljoin(city),
103 callback=self.parse_store
104 )
105
106 def parse_state(self, response):
107 states = response.xpath('//li[@class="Directory-listItem"]/a/@href').extract()
108
109 for state in states:
110 determine_multi_state = state.split('/')
111 if len(determine_multi_state) <= 5:
112 yield scrapy.Request(
113 response.urljoin(state),
114 callback=self.parse_city
115 )
116 else:
117 yield scrapy.Request(
118 response.urljoin(state),
119 callback=self.parse_store
120 )
121
122 def parse(self, response):
123 countries = response.xpath('//li[@class="Directory-listItem"]/a/@href').extract()
124
125 for country in countries:
126 yield scrapy.Request(
127 response.urljoin(country),
128 callback=self.parse_state
129 )
130
[end of locations/spiders/papa_johns.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/papa_johns.py b/locations/spiders/papa_johns.py
--- a/locations/spiders/papa_johns.py
+++ b/locations/spiders/papa_johns.py
@@ -5,14 +5,20 @@
from locations.items import GeojsonPointItem
from locations.hours import OpeningHours
-day_map = {'MONDAY': 'Mo', 'TUESDAY': 'Tu', 'WEDNESDAY': 'We', 'THURSDAY': 'Th',
- 'FRIDAY': 'Fr', 'SATURDAY': 'Sa', 'SUNDAY': 'Su'}
+day_map = {
+ 'MONDAY': 'Mo',
+ 'TUESDAY': 'Tu',
+ 'WEDNESDAY': 'We',
+ 'THURSDAY': 'Th',
+ 'FRIDAY': 'Fr',
+ 'SATURDAY': 'Sa',
+ 'SUNDAY': 'Su',
+}
class PapaJohnsSpider(scrapy.Spider):
-
name = "papa_johns"
- item_attributes = { 'brand': "Papa John's Pizza", 'brand_wikidata': "Q2759586" }
+ item_attributes = {'brand': "Papa John's Pizza", 'brand_wikidata': "Q2759586"}
allowed_domains = ["papajohns.com", ]
start_urls = (
@@ -59,6 +65,8 @@
country = 'US'
elif '/canada/' in response.url:
country = 'CA'
+ elif response.url == 'https://locations.papajohns.com/index.html':
+ return
else:
country = ''
@@ -72,8 +80,8 @@
'state': the_state,
'opening_hours': opening_hours,
'country': country,
- 'lat': float(response.xpath('//span[@class="coordinates"]/meta[1]/@content').extract_first()),
- 'lon': float(response.xpath('//span[@class="coordinates"]/meta[2]/@content').extract_first()),
+ 'lat': response.xpath('//span[@class="coordinates"]/meta[1]/@content').extract_first(),
+ 'lon': response.xpath('//span[@class="coordinates"]/meta[2]/@content').extract_first(),
}
yield GeojsonPointItem(**props)
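As a side note on the coordinate change above: `extract_first()` returns `None` when the selector matches nothing, and the old `float(...)` wrapper then raises a `TypeError` and aborts parsing of that store page. A minimal standalone sketch of that failure mode — the `None` value is a stand-in for a store page without coordinate metadata:

```python
# Stand-in for what extract_first() yields when the coordinates meta tag is missing.
lat_text = None

try:
    lat = float(lat_text)  # what the spider did before the patch
except TypeError as err:
    print(f"parsing would fail here: {err}")

# The patched spider passes the raw extracted value through unchanged,
# so a missing coordinate simply stays None instead of raising.
lat = lat_text
```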
|
{"golden_diff": "diff --git a/locations/spiders/papa_johns.py b/locations/spiders/papa_johns.py\n--- a/locations/spiders/papa_johns.py\n+++ b/locations/spiders/papa_johns.py\n@@ -5,14 +5,20 @@\n from locations.items import GeojsonPointItem\n from locations.hours import OpeningHours\n \n-day_map = {'MONDAY': 'Mo', 'TUESDAY': 'Tu', 'WEDNESDAY': 'We', 'THURSDAY': 'Th',\n- 'FRIDAY': 'Fr', 'SATURDAY': 'Sa', 'SUNDAY': 'Su'}\n+day_map = {\n+ 'MONDAY': 'Mo',\n+ 'TUESDAY': 'Tu',\n+ 'WEDNESDAY': 'We',\n+ 'THURSDAY': 'Th',\n+ 'FRIDAY': 'Fr',\n+ 'SATURDAY': 'Sa',\n+ 'SUNDAY': 'Su',\n+}\n \n \n class PapaJohnsSpider(scrapy.Spider):\n-\n name = \"papa_johns\"\n- item_attributes = { 'brand': \"Papa John's Pizza\", 'brand_wikidata': \"Q2759586\" }\n+ item_attributes = {'brand': \"Papa John's Pizza\", 'brand_wikidata': \"Q2759586\"}\n allowed_domains = [\"papajohns.com\", ]\n \n start_urls = (\n@@ -59,6 +65,8 @@\n country = 'US'\n elif '/canada/' in response.url:\n country = 'CA'\n+ elif response.url == 'https://locations.papajohns.com/index.html':\n+ return\n else:\n country = ''\n \n@@ -72,8 +80,8 @@\n 'state': the_state,\n 'opening_hours': opening_hours,\n 'country': country,\n- 'lat': float(response.xpath('//span[@class=\"coordinates\"]/meta[1]/@content').extract_first()),\n- 'lon': float(response.xpath('//span[@class=\"coordinates\"]/meta[2]/@content').extract_first()),\n+ 'lat': response.xpath('//span[@class=\"coordinates\"]/meta[1]/@content').extract_first(),\n+ 'lon': response.xpath('//span[@class=\"coordinates\"]/meta[2]/@content').extract_first(),\n }\n \n yield GeojsonPointItem(**props)\n", "issue": "Spider papa_johns is broken\nDuring the global build at 2021-08-18-14-42-26, spider **papa_johns** failed with **3199 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/papa_johns.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/papa_johns.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/papa_johns.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nday_map = {'MONDAY': 'Mo', 'TUESDAY': 'Tu', 'WEDNESDAY': 'We', 'THURSDAY': 'Th',\n 'FRIDAY': 'Fr', 'SATURDAY': 'Sa', 'SUNDAY': 'Su'}\n\n\nclass PapaJohnsSpider(scrapy.Spider):\n\n name = \"papa_johns\"\n item_attributes = { 'brand': \"Papa John's Pizza\", 'brand_wikidata': \"Q2759586\" }\n allowed_domains = [\"papajohns.com\", ]\n\n start_urls = (\n 'https://locations.papajohns.com/',\n )\n download_delay = 0.2\n\n def parse_hours(self, hours):\n if not hours:\n return ''\n try:\n opening_hours = OpeningHours()\n the_hours = json.loads(hours[0])\n for day in the_hours:\n the_day = day_map[day['day']]\n the_start = str(day['intervals'][0]['start'])\n the_end = str(day['intervals'][0]['end'])\n if the_start == '0':\n the_start = '000'\n if the_end == '0':\n the_end = '000'\n opening_hours.add_range(day=the_day, open_time=the_start, close_time=the_end, time_format='%H%M')\n return opening_hours.as_opening_hours()\n except IndexError:\n return ''\n\n def parse_store(self, response):\n hours = response.xpath('//div[@class=\"c-location-hours-details-wrapper js-location-hours\"]/@data-days').extract()\n opening_hours = self.parse_hours(hours)\n\n if response.xpath('//address[@class=\"c-address\"]/div[3]/span/text()').extract_first() is not None:\n city = 
response.xpath('//address[@class=\"c-address\"]/div[3]/span/text()').extract_first()\n else:\n city = response.xpath('//address[@class=\"c-address\"]/div[2]/span/text()').extract_first()\n\n if response.xpath('//address[@class=\"c-address\"]/div[2]/abbr/text()').extract_first() is not None:\n the_state = response.xpath('//address[@class=\"c-address\"]/div[2]/abbr/text()').extract_first()\n the_postal = response.xpath('//address[@class=\"c-address\"]/div[2]/span[2]/text()').extract_first()\n else:\n the_state = response.xpath('//address[@class=\"c-address\"]/div[3]/abbr/text()').extract_first()\n the_postal = response.xpath('//address[@class=\"c-address\"]/div[3]/span[2]/text()').extract_first()\n\n if '/united-states/' in response.url:\n country = 'US'\n elif '/canada/' in response.url:\n country = 'CA'\n else:\n country = ''\n\n props = {\n 'ref': response.xpath('//main/@itemid').extract_first().split('#')[1],\n 'website': response.url,\n 'addr_full': response.xpath('//address[@class=\"c-address\"]/div[1]/span/text()').extract_first(),\n 'phone': response.xpath('//div[@class=\"c-phone-number c-phone-main-number\"]/a/text()').extract_first(),\n 'city': city,\n 'postcode': the_postal,\n 'state': the_state,\n 'opening_hours': opening_hours,\n 'country': country,\n 'lat': float(response.xpath('//span[@class=\"coordinates\"]/meta[1]/@content').extract_first()),\n 'lon': float(response.xpath('//span[@class=\"coordinates\"]/meta[2]/@content').extract_first()),\n }\n\n yield GeojsonPointItem(**props)\n\n def parse_within_city(self, response):\n stores = response.xpath('//h2[@class=\"Teaser-title\"]/a/@href').extract()\n\n for store in stores:\n yield scrapy.Request(\n response.urljoin(store),\n callback=self.parse_store\n )\n\n def parse_city(self, response):\n cities = response.xpath('//li[@class=\"Directory-listItem\"]/a/@href').extract()\n\n for city in cities:\n determine_multi_state = city.split('/')\n if len(determine_multi_state) == 4:\n yield scrapy.Request(\n response.urljoin(city),\n callback=self.parse_within_city\n )\n else:\n yield scrapy.Request(\n response.urljoin(city),\n callback=self.parse_store\n )\n\n def parse_state(self, response):\n states = response.xpath('//li[@class=\"Directory-listItem\"]/a/@href').extract()\n\n for state in states:\n determine_multi_state = state.split('/')\n if len(determine_multi_state) <= 5:\n yield scrapy.Request(\n response.urljoin(state),\n callback=self.parse_city\n )\n else:\n yield scrapy.Request(\n response.urljoin(state),\n callback=self.parse_store\n )\n\n def parse(self, response):\n countries = response.xpath('//li[@class=\"Directory-listItem\"]/a/@href').extract()\n\n for country in countries:\n yield scrapy.Request(\n response.urljoin(country),\n callback=self.parse_state\n )\n", "path": "locations/spiders/papa_johns.py"}]}
| 2,160 | 533 |
gh_patches_debug_2374
|
rasdani/github-patches
|
git_diff
|
deeppavlov__DeepPavlov-79
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
What is "'Chainer' object has no attribute 'infer'
2018-03-04 14:09:23,638 (util.py:64 WorkerThread2) ERROR - TeleBot: "AttributeError occurred, args=("'Chainer' object has no attribute 'infer'",)
Traceback (most recent call last):
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 58, in run
task(*args, **kwargs)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 48, in handle_inference
pred = model.infer(context)
AttributeError: 'Chainer' object has no attribute 'infer'
"
2018-03-04 14:09:23.638 ERROR in 'TeleBot'['util'] at line 64: AttributeError occurred, args=("'Chainer' object has no attribute 'infer'",)
Traceback (most recent call last):
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 58, in run
task(*args, **kwargs)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 48, in handle_inference
pred = model.infer(context)
AttributeError: 'Chainer' object has no attribute 'infer'
Traceback (most recent call last):
File "deep.py", line 60, in <module>
main()
File "deep.py", line 56, in main
interact_model_by_telegram(pipeline_config_path, token)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 58, in interact_model_by_telegram
init_bot_for_model(token, model)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 52, in init_bot_for_model
bot.polling()
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py", line 264, in polling
self.__threaded_polling(none_stop, interval, timeout)
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py", line 288, in __threaded_polling
self.worker_pool.raise_exceptions()
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 107, in raise_exceptions
six.reraise(self.exc_info[0], self.exc_info[1], self.exc_info[2])
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py", line 58, in run
task(*args, **kwargs)
File "/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py", line 48, in handle_inference
pred = model.infer(context)
AttributeError: 'Chainer' object has no attribute 'infer'
Telegram interface bug
Alexander Seliverstov, [04.03.18 15:20]
/start
jhfirufoiug_bot, [04.03.18 15:20]
Welcome to DeepPavlov inference bot!
Alexander Seliverstov, [04.03.18 15:20]
Hi
jhfirufoiug_bot, [04.03.18 15:20]
['Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?']
Alexander Seliverstov, [04.03.18 15:20]
I want cheap russian food
jhfirufoiug_bot, [04.03.18 15:20]
['Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?']
</issue>
<code>
[start of telegram_utils/telegram_ui.py]
1 """
2 Copyright 2017 Neural Networks and Deep Learning lab, MIPT
3
4 Licensed under the Apache License, Version 2.0 (the "License");
5 you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at
7
8 http://www.apache.org/licenses/LICENSE-2.0
9
10 Unless required by applicable law or agreed to in writing, software
11 distributed under the License is distributed on an "AS IS" BASIS,
12 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 See the License for the specific language governing permissions and
14 limitations under the License.
15 """
16 import telebot
17
18 from deeppavlov.core.common.file import read_json
19 from deeppavlov.core.commands.infer import build_model_from_config
20
21
22 def init_bot_for_model(token, model):
23 bot = telebot.TeleBot(token)
24
25 model_name = type(model).__name__
26 models_info = read_json('../telegram_utils/models_info.json')
27 model_info = models_info[model_name] if model_name in models_info else models_info['@default']
28
29 @bot.message_handler(commands=['start'])
30 def send_start_message(message):
31 chat_id = message.chat.id
32 out_message = model_info['start_message']
33 if hasattr(model, 'reset'):
34 model.reset()
35 bot.send_message(chat_id, out_message)
36
37 @bot.message_handler(commands=['help'])
38 def send_help_message(message):
39 chat_id = message.chat.id
40 out_message = model_info['help_message']
41 bot.send_message(chat_id, out_message)
42
43 @bot.message_handler()
44 def handle_inference(message):
45 chat_id = message.chat.id
46 context = message.text
47
48 pred = model.infer(context)
49 reply_message = str(pred)
50 bot.send_message(chat_id, reply_message)
51
52 bot.polling()
53
54
55 def interact_model_by_telegram(config_path, token):
56 config = read_json(config_path)
57 model = build_model_from_config(config)
58 init_bot_for_model(token, model)
59
[end of telegram_utils/telegram_ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram_utils/telegram_ui.py b/telegram_utils/telegram_ui.py
--- a/telegram_utils/telegram_ui.py
+++ b/telegram_utils/telegram_ui.py
@@ -45,8 +45,8 @@
chat_id = message.chat.id
context = message.text
- pred = model.infer(context)
- reply_message = str(pred)
+ pred = model([context])
+ reply_message = str(pred[0])
bot.send_message(chat_id, reply_message)
bot.polling()
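A short sketch of the calling convention the fix relies on — a built `Chainer` pipeline has no `infer()` method but is callable on a batch of inputs — using the same helpers the bot module already imports; the config path and utterance are placeholders:

```python
from deeppavlov.core.common.file import read_json
from deeppavlov.core.commands.infer import build_model_from_config

config = read_json("path/to/pipeline_config.json")  # placeholder path
model = build_model_from_config(config)

# The pipeline is called on a batch of contexts and returns a batch of
# predictions, so a single message is wrapped in a list and the first
# element of the result is used as the reply.
predictions = model(["I want cheap russian food"])
print(str(predictions[0]))
```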
|
{"golden_diff": "diff --git a/telegram_utils/telegram_ui.py b/telegram_utils/telegram_ui.py\n--- a/telegram_utils/telegram_ui.py\n+++ b/telegram_utils/telegram_ui.py\n@@ -45,8 +45,8 @@\n chat_id = message.chat.id\n context = message.text\n \n- pred = model.infer(context)\n- reply_message = str(pred)\n+ pred = model([context])\n+ reply_message = str(pred[0])\n bot.send_message(chat_id, reply_message)\n \n bot.polling()\n", "issue": "What is \"'Chainer' object has no attribute 'infer'\n2018-03-04 14:09:23,638 (util.py:64 WorkerThread2) ERROR - TeleBot: \"AttributeError occurred, args=(\"'Chainer' object has no attribute 'infer'\",)\r\nTraceback (most recent call last):\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 58, in run\r\n task(*args, **kwargs)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 48, in handle_inference\r\n pred = model.infer(context)\r\nAttributeError: 'Chainer' object has no attribute 'infer'\r\n\"\r\n2018-03-04 14:09:23.638 ERROR in 'TeleBot'['util'] at line 64: AttributeError occurred, args=(\"'Chainer' object has no attribute 'infer'\",)\r\nTraceback (most recent call last):\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 58, in run\r\n task(*args, **kwargs)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 48, in handle_inference\r\n pred = model.infer(context)\r\nAttributeError: 'Chainer' object has no attribute 'infer'\r\n\r\nTraceback (most recent call last):\r\n File \"deep.py\", line 60, in <module>\r\n main()\r\n File \"deep.py\", line 56, in main\r\n interact_model_by_telegram(pipeline_config_path, token)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 58, in interact_model_by_telegram\r\n init_bot_for_model(token, model)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 52, in init_bot_for_model\r\n bot.polling()\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py\", line 264, in polling\r\n self.__threaded_polling(none_stop, interval, timeout)\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/__init__.py\", line 288, in __threaded_polling\r\n self.worker_pool.raise_exceptions()\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 107, in raise_exceptions\r\n six.reraise(self.exc_info[0], self.exc_info[1], self.exc_info[2])\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/six.py\", line 693, in reraise\r\n raise value\r\n File \"/Users/developer/DeepPavlov/lib/python3.6/site-packages/telebot/util.py\", line 58, in run\r\n task(*args, **kwargs)\r\n File \"/Users/developer/Project/DeepPavlov/telegram_utils/telegram_ui.py\", line 48, in handle_inference\r\n pred = model.infer(context)\r\nAttributeError: 'Chainer' object has no attribute 'infer'\r\n\nTelegram interface bug\nAlexander Seliverstov, [04.03.18 15:20]\r\n/start\r\n\r\njhfirufoiug_bot, [04.03.18 15:20]\r\nWelcome to DeepPavlov inference bot!\r\n\r\nAlexander Seliverstov, [04.03.18 15:20]\r\nHi\r\n\r\njhfirufoiug_bot, [04.03.18 15:20]\r\n['Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. 
How may I help you?']\r\n\r\nAlexander Seliverstov, [04.03.18 15:20]\r\nI want cheap russian food\r\n\r\njhfirufoiug_bot, [04.03.18 15:20]\r\n['Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. How may I help you?', 'Hello, welcome to the Cambridge restaurant system. You can ask for restaurants by area, price range or food type. 
How may I help you?']\n", "before_files": [{"content": "\"\"\"\nCopyright 2017 Neural Networks and Deep Learning lab, MIPT\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\"\"\"\nimport telebot\n\nfrom deeppavlov.core.common.file import read_json\nfrom deeppavlov.core.commands.infer import build_model_from_config\n\n\ndef init_bot_for_model(token, model):\n bot = telebot.TeleBot(token)\n\n model_name = type(model).__name__\n models_info = read_json('../telegram_utils/models_info.json')\n model_info = models_info[model_name] if model_name in models_info else models_info['@default']\n\n @bot.message_handler(commands=['start'])\n def send_start_message(message):\n chat_id = message.chat.id\n out_message = model_info['start_message']\n if hasattr(model, 'reset'):\n model.reset()\n bot.send_message(chat_id, out_message)\n\n @bot.message_handler(commands=['help'])\n def send_help_message(message):\n chat_id = message.chat.id\n out_message = model_info['help_message']\n bot.send_message(chat_id, out_message)\n\n @bot.message_handler()\n def handle_inference(message):\n chat_id = message.chat.id\n context = message.text\n\n pred = model.infer(context)\n reply_message = str(pred)\n bot.send_message(chat_id, reply_message)\n\n bot.polling()\n\n\ndef interact_model_by_telegram(config_path, token):\n config = read_json(config_path)\n model = build_model_from_config(config)\n init_bot_for_model(token, model)\n", "path": "telegram_utils/telegram_ui.py"}]}
| 2,794 | 117 |
gh_patches_debug_57939
|
rasdani/github-patches
|
git_diff
|
coreruleset__coreruleset-3550
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
.changes-pending.md lacks space before asterisk
Our CHANGES.md has a leading space before the bullet / asterisk. The .changes-pending.md does not.
</issue>
<code>
[start of .github/create-changelog-prs.py]
1 #! /usr/bin/env python
2
3 import subprocess
4 import json
5 import datetime
6 import sys
7 import os
8 import re
9
10 DEVELOPERS = dict()
11
12 def get_pr(repository: str, number: int) -> dict:
13 command = f"""gh pr view \
14 --repo "{repository}" \
15 "{number}" \
16 --json mergeCommit,mergedBy,title,author,baseRefName,number
17 """
18 proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
19 pr_json, errors = proc.communicate()
20 if proc.returncode != 0:
21 print(errors)
22 exit(1)
23 return json.loads(pr_json)
24
25 def get_prs(repository: str, day: datetime.date) -> list:
26 print(f"Fetching PRs for {day}")
27 command = f"""gh search prs \
28 --repo "{repository}" \
29 --merged-at "{day}" \
30 --json number \
31 -- \
32 -label:changelog-pr # ignore changelog prs
33 """
34 proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
35 prs_json, errors = proc.communicate()
36 if proc.returncode != 0:
37 print(errors)
38 exit(1)
39 prs = list()
40 for result in json.loads(prs_json):
41 prs.append(get_pr(repository, result["number"]))
42
43 return prs
44
45 def parse_prs(prs: list) -> dict:
46 pr_map = dict()
47 for pr in prs:
48 merged_by = pr["mergedBy"]["login"]
49 if merged_by not in pr_map:
50 pr_list = list()
51 pr_map[merged_by] = pr_list
52 else:
53 pr_list = pr_map[merged_by]
54 pr_list.append(pr)
55 return pr_map
56
57
58 def create_prs(repository: str, merged_by_prs_map: dict, day: datetime.date):
59 for author in merged_by_prs_map.keys():
60 create_pr(repository, author, merged_by_prs_map[author], day)
61
62 def create_pr(repository: str, merged_by: str, prs: list, day: datetime.date):
63 if len(prs) == 0:
64 return
65 print(f"Creating changelog PR for @{merged_by}")
66
67 sample_pr = prs[0]
68 base_branch = sample_pr["baseRefName"]
69 pr_branch_name = create_pr_branch(day, merged_by, base_branch)
70 pr_body, changelog_lines = generate_content(prs, merged_by)
71 create_commit(changelog_lines)
72 push_pr_branch(pr_branch_name)
73
74 command = f"""gh pr create \
75 --repo "{repository}" \
76 --assignee "{merged_by}" \
77 --base "{base_branch}" \
78 --label "changelog-pr" \
79 --title "chore: changelog updates for {day}, merged by @{merged_by}" \
80 --body-file -
81 """
82
83 proc = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
84 outs, errors = proc.communicate(input=pr_body.encode())
85 if proc.returncode != 0:
86 print(errors)
87 exit(1)
88 print(f"Created PR: {outs.decode()}")
89
90 def create_commit(changelog_lines: str):
91 with open('.changes-pending.md', 'a') as changelog:
92 changelog.write(changelog_lines)
93
94 command = "git commit .changes-pending.md -m 'Add pending changelog entries'"
95 proc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
96 _, errors = proc.communicate()
97 if proc.returncode != 0:
98 print(errors)
99 exit(1)
100
101 def generate_content(prs: list, merged_by: str) -> (str, str):
102 changelog_lines = ""
103 pr_body = f"This PR was auto-generated to update the changelog with the following entries, merged by @{merged_by}:\n```\n"
104 pr_links = ""
105 for pr in prs:
106 pr_number = pr["number"]
107 pr_title = pr["title"]
108 pr_author = get_pr_author_name(pr["author"]["login"])
109 new_line = f"* {pr_title} ({pr_author}) [#{pr_number}]\n"
110 pr_body += new_line
111 pr_links += f"- #{pr_number}\n"
112
113 changelog_lines += new_line
114 pr_body += "```\n\n" + pr_links
115
116 return pr_body, changelog_lines
117
118 def get_pr_author_name(login: str) -> str:
119 if len(DEVELOPERS) == 0:
120 parse_contributors()
121
122 return DEVELOPERS[login] if login in DEVELOPERS else f"@{login}"
123
124 def parse_contributors():
125 regex = re.compile(r'^\s*?-\s*?\[([^]]+)\]\s*?\(http.*/([^/]+)\s*?\)')
126 with open('CONTRIBUTORS.md', 'rt') as handle:
127 line = handle.readline()
128 while not ('##' in line and 'Contributors' in line):
129 match = regex.match(line)
130 if match:
131 DEVELOPERS[match.group(2)] = match.group(1)
132 line = handle.readline()
133
134 def create_pr_branch(day: datetime.date, author: str, base_branch: str) -> str:
135 branch_name = f"changelog-updates-for-{day}-{author} {base_branch}"
136 command = f"git checkout -b {branch_name}"
137 proc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
138 _, errors = proc.communicate()
139 if proc.returncode != 0:
140 print(errors)
141 exit(1)
142
143 return branch_name
144
145 def push_pr_branch(branch_name: str):
146 command = f"git push -u origin {branch_name}"
147 proc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
148 _, errors = proc.communicate()
149 if proc.returncode != 0:
150 print(errors)
151 exit(1)
152
153 def run(source_repository: str, target_repository: str, today: datetime.date):
154 day = today - datetime.timedelta(days=1)
155 prs = get_prs(source_repository, day)
156 prs_length = len(prs)
157 print(f"Found {prs_length} PRs")
158 if prs_length == 0:
159 return
160
161 merged_by_prs_map = parse_prs(prs)
162 create_prs(target_repository, merged_by_prs_map, day)
163
164 if __name__ == "__main__":
165 # disable pager
166 os.environ["GH_PAGER"] = ''
167 # set variables for Git
168 os.environ["GIT_AUTHOR_NAME"] = "changelog-pr-bot"
169 os.environ["GIT_AUTHOR_EMAIL"] = "[email protected]"
170 os.environ["GIT_COMMITTER_NAME"] = "changelog-pr-bot"
171 os.environ["GIT_COMMITTER_EMAIL"] = "[email protected]"
172
173 source_repository = 'coreruleset/coreruleset'
174 target_repository = source_repository
175 # the cron schedule for the workflow uses UTC
176 today = datetime.datetime.now(datetime.timezone.utc).date()
177
178 if len(sys.argv) > 1:
179 source_repository = sys.argv[1]
180 if len(sys.argv) > 2:
181 target_repository = sys.argv[2]
182 if len(sys.argv) > 3:
183 today = datetime.date.fromisoformat(sys.argv[3])
184 run(source_repository, target_repository, today)
185
[end of .github/create-changelog-prs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/.github/create-changelog-prs.py b/.github/create-changelog-prs.py
--- a/.github/create-changelog-prs.py
+++ b/.github/create-changelog-prs.py
@@ -106,7 +106,7 @@
pr_number = pr["number"]
pr_title = pr["title"]
pr_author = get_pr_author_name(pr["author"]["login"])
- new_line = f"* {pr_title} ({pr_author}) [#{pr_number}]\n"
+ new_line = f" * {pr_title} ({pr_author}) [#{pr_number}]\n"
pr_body += new_line
pr_links += f"- #{pr_number}\n"
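For illustration, the pending-changelog entry before and after the one-character change; title, author and PR number are invented:

```python
pr_title, pr_author, pr_number = "some rule fix", "@example-dev", 1234  # invented values

old_entry = f"* {pr_title} ({pr_author}) [#{pr_number}]\n"   # .changes-pending.md before the fix
new_entry = f" * {pr_title} ({pr_author}) [#{pr_number}]\n"  # leading space, matching CHANGES.md

print(repr(old_entry))
print(repr(new_entry))
```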
|
{"golden_diff": "diff --git a/.github/create-changelog-prs.py b/.github/create-changelog-prs.py\n--- a/.github/create-changelog-prs.py\n+++ b/.github/create-changelog-prs.py\n@@ -106,7 +106,7 @@\n \t\tpr_number = pr[\"number\"]\n \t\tpr_title = pr[\"title\"]\n \t\tpr_author = get_pr_author_name(pr[\"author\"][\"login\"])\n-\t\tnew_line = f\"* {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n+\t\tnew_line = f\" * {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n \t\tpr_body += new_line\n \t\tpr_links += f\"- #{pr_number}\\n\"\n", "issue": ".changes-pending.md lacks space before asterisk\nOur CHANGES.md has a leading space before the bullet / asterisk. The .changes-pending.md does not.\n", "before_files": [{"content": "#! /usr/bin/env python\n\nimport subprocess\nimport json\nimport datetime\nimport sys\nimport os\nimport re\n\nDEVELOPERS = dict()\n\ndef get_pr(repository: str, number: int) -> dict:\n\tcommand = f\"\"\"gh pr view \\\n\t\t--repo \"{repository}\" \\\n\t\t\"{number}\" \\\n\t\t--json mergeCommit,mergedBy,title,author,baseRefName,number\n\t\"\"\"\n\tproc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\tpr_json, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\treturn json.loads(pr_json)\n\ndef get_prs(repository: str, day: datetime.date) -> list:\n\tprint(f\"Fetching PRs for {day}\")\n\tcommand = f\"\"\"gh search prs \\\n\t\t--repo \"{repository}\" \\\n\t\t--merged-at \"{day}\" \\\n\t\t--json number \\\n\t\t-- \\\n\t\t-label:changelog-pr # ignore changelog prs\n\t\"\"\"\n\tproc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\tprs_json, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\tprs = list()\n\tfor result in json.loads(prs_json):\n\t\tprs.append(get_pr(repository, result[\"number\"]))\n\n\treturn prs\n\ndef parse_prs(prs: list) -> dict:\n\tpr_map = dict()\n\tfor pr in prs:\n\t\tmerged_by = pr[\"mergedBy\"][\"login\"]\n\t\tif merged_by not in pr_map:\n\t\t\tpr_list = list()\n\t\t\tpr_map[merged_by] = pr_list\n\t\telse:\n\t\t\tpr_list = pr_map[merged_by]\n\t\tpr_list.append(pr)\n\treturn pr_map\n\n\ndef create_prs(repository: str, merged_by_prs_map: dict, day: datetime.date):\n\tfor author in merged_by_prs_map.keys():\n\t\tcreate_pr(repository, author, merged_by_prs_map[author], day)\n\ndef create_pr(repository: str, merged_by: str, prs: list, day: datetime.date):\n\tif len(prs) == 0:\n\t\treturn\n\tprint(f\"Creating changelog PR for @{merged_by}\")\n\n\tsample_pr = prs[0]\n\tbase_branch = sample_pr[\"baseRefName\"]\n\tpr_branch_name = create_pr_branch(day, merged_by, base_branch)\n\tpr_body, changelog_lines = generate_content(prs, merged_by)\n\tcreate_commit(changelog_lines)\n\tpush_pr_branch(pr_branch_name)\n\n\tcommand = f\"\"\"gh pr create \\\n\t\t--repo \"{repository}\" \\\n\t\t--assignee \"{merged_by}\" \\\n\t\t--base \"{base_branch}\" \\\n\t\t--label \"changelog-pr\" \\\n\t\t--title \"chore: changelog updates for {day}, merged by @{merged_by}\" \\\n\t\t--body-file -\n\t\"\"\"\n\n\tproc = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\touts, errors = proc.communicate(input=pr_body.encode())\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\tprint(f\"Created PR: {outs.decode()}\")\n\ndef create_commit(changelog_lines: str):\n\twith open('.changes-pending.md', 'a') as changelog:\n\t\tchangelog.write(changelog_lines)\n\n\tcommand = \"git 
commit .changes-pending.md -m 'Add pending changelog entries'\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\ndef generate_content(prs: list, merged_by: str) -> (str, str):\n\tchangelog_lines = \"\"\n\tpr_body = f\"This PR was auto-generated to update the changelog with the following entries, merged by @{merged_by}:\\n```\\n\"\n\tpr_links = \"\"\n\tfor pr in prs:\n\t\tpr_number = pr[\"number\"]\n\t\tpr_title = pr[\"title\"]\n\t\tpr_author = get_pr_author_name(pr[\"author\"][\"login\"])\n\t\tnew_line = f\"* {pr_title} ({pr_author}) [#{pr_number}]\\n\"\n\t\tpr_body += new_line\n\t\tpr_links += f\"- #{pr_number}\\n\"\n\n\t\tchangelog_lines += new_line\n\tpr_body += \"```\\n\\n\" + pr_links\n\n\treturn pr_body, changelog_lines\n\ndef get_pr_author_name(login: str) -> str:\n\tif len(DEVELOPERS) == 0:\n\t\tparse_contributors()\n\n\treturn DEVELOPERS[login] if login in DEVELOPERS else f\"@{login}\"\n\ndef parse_contributors():\n\tregex = re.compile(r'^\\s*?-\\s*?\\[([^]]+)\\]\\s*?\\(http.*/([^/]+)\\s*?\\)')\n\twith open('CONTRIBUTORS.md', 'rt') as handle:\n\t\tline = handle.readline()\n\t\twhile not ('##' in line and 'Contributors' in line):\n\t\t\tmatch = regex.match(line)\n\t\t\tif match:\n\t\t\t\tDEVELOPERS[match.group(2)] = match.group(1)\n\t\t\tline = handle.readline()\n\ndef create_pr_branch(day: datetime.date, author: str, base_branch: str) -> str:\n\tbranch_name = f\"changelog-updates-for-{day}-{author} {base_branch}\"\n\tcommand = f\"git checkout -b {branch_name}\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\n\treturn branch_name\n\ndef push_pr_branch(branch_name: str):\n\tcommand = f\"git push -u origin {branch_name}\"\n\tproc = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)\n\t_, errors = proc.communicate()\n\tif proc.returncode != 0:\n\t\tprint(errors)\n\t\texit(1)\n\ndef run(source_repository: str, target_repository: str, today: datetime.date):\n\tday = today - datetime.timedelta(days=1)\n\tprs = get_prs(source_repository, day)\n\tprs_length = len(prs)\n\tprint(f\"Found {prs_length} PRs\")\n\tif prs_length == 0:\n\t\treturn\n\n\tmerged_by_prs_map = parse_prs(prs)\n\tcreate_prs(target_repository, merged_by_prs_map, day)\n\nif __name__ == \"__main__\":\n\t# disable pager\n\tos.environ[\"GH_PAGER\"] = ''\n\t# set variables for Git\n\tos.environ[\"GIT_AUTHOR_NAME\"] = \"changelog-pr-bot\"\n\tos.environ[\"GIT_AUTHOR_EMAIL\"] = \"[email protected]\"\n\tos.environ[\"GIT_COMMITTER_NAME\"] = \"changelog-pr-bot\"\n\tos.environ[\"GIT_COMMITTER_EMAIL\"] = \"[email protected]\"\n\n\tsource_repository = 'coreruleset/coreruleset'\n\ttarget_repository = source_repository\n\t# the cron schedule for the workflow uses UTC\n\ttoday = datetime.datetime.now(datetime.timezone.utc).date()\n\n\tif len(sys.argv) > 1:\n\t\tsource_repository = sys.argv[1]\n\tif len(sys.argv) > 2:\n\t\ttarget_repository = sys.argv[2]\n\tif len(sys.argv) > 3:\n\t\ttoday = datetime.date.fromisoformat(sys.argv[3])\n\trun(source_repository, target_repository, today)\n", "path": ".github/create-changelog-prs.py"}]}
| 2,670 | 154 |
gh_patches_debug_2375
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-559
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lutris shortcuts broken
See: https://forums.lutris.net/t/desktop-shortcut-not-work-for-any-game/456
</issue>
<code>
[start of lutris/util/resources.py]
1 import os
2 import re
3 import concurrent.futures
4 from urllib.parse import urlparse, parse_qsl
5
6 from lutris import settings
7 from lutris import api
8 from lutris.util.log import logger
9 from lutris.util.http import Request
10
11 BANNER = "banner"
12 ICON = "icon"
13
14
15 def get_icon_path(game, icon_type):
16 if icon_type == BANNER:
17 return os.path.join(settings.BANNER_PATH, "%s.jpg" % game)
18 if icon_type == ICON:
19 return os.path.join(settings.ICON_PATH, "lutris_%s.png" % game)
20
21
22 def has_icon(game, icon_type):
23 if icon_type == BANNER:
24 icon_path = get_icon_path(game, BANNER)
25 return os.path.exists(icon_path)
26 elif icon_type == ICON:
27 icon_path = get_icon_path(game, ICON)
28 return os.path.exists(icon_path)
29
30
31 def fetch_icons(game_slugs, callback=None):
32 no_banners = [slug for slug in game_slugs if not has_icon(slug, BANNER)]
33 no_icons = [slug for slug in game_slugs if not has_icon(slug, ICON)]
34
35 # Remove duplicate slugs
36 missing_media_slugs = list(set(no_banners) | set(no_icons))
37 if not missing_media_slugs:
38 return
39
40 response = api.get_games(game_slugs=missing_media_slugs)
41 if not response:
42 logger.warning('Unable to get games from API')
43 return
44 results = response['results']
45 while response.get('next'):
46 page_match = re.search(r'page=(\d+)', response['next'])
47 if page_match:
48 page = page_match.group(1)
49 else:
50 logger.error("No page found in %s", response['next'])
51 break
52 response = api.get_games(game_slugs=missing_media_slugs, page=page)
53 if not response:
54 logger.warning("Unable to get response for page %s", page)
55 break
56 else:
57 results += response.get('results', [])
58
59 banner_downloads = []
60 icon_downloads = []
61 updated_slugs = []
62 for game in results:
63 if game['slug'] in no_banners:
64 banner_url = game['banner_url']
65 if banner_url:
66 dest_path = get_icon_path(game['slug'], BANNER)
67 banner_downloads.append((game['banner_url'], dest_path))
68 updated_slugs.append(game['slug'])
69 if game['slug'] in no_icons:
70 icon_url = game['icon_url']
71 if icon_url:
72 dest_path = get_icon_path(game['slug'], ICON)
73 icon_downloads.append((game['icon_url'], dest_path))
74 updated_slugs.append(game['slug'])
75
76 updated_slugs = list(set(updated_slugs)) # Deduplicate slugs
77
78 downloads = banner_downloads + icon_downloads
79 with concurrent.futures.ThreadPoolExecutor(max_workers=16) as executor:
80 for url, dest_path in downloads:
81 executor.submit(download_media, url, dest_path)
82
83 if updated_slugs and callback:
84 callback(updated_slugs)
85
86
87 def download_media(url, dest, overwrite=False):
88 if os.path.exists(dest):
89 if overwrite:
90 os.remove(dest)
91 else:
92 return
93 request = Request(url).get()
94 request.write_to_file(dest)
95
96
97 def parse_installer_url(url):
98 """
99 Parses `lutris:` urls, extracting any info necessary to install or run a game.
100 """
101 try:
102 parsed_url = urlparse(url, scheme="lutris")
103 except:
104 return False
105 if parsed_url.scheme != "lutris":
106 return False
107 game_slug = parsed_url.path
108 if not game_slug:
109 return False
110 revision = None
111 if parsed_url.query:
112 query = dict(parse_qsl(parsed_url.query))
113 revision = query.get('revision')
114 return {
115 'game_slug': game_slug,
116 'revision': revision
117 }
118
[end of lutris/util/resources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/util/resources.py b/lutris/util/resources.py
--- a/lutris/util/resources.py
+++ b/lutris/util/resources.py
@@ -107,6 +107,8 @@
game_slug = parsed_url.path
if not game_slug:
return False
+ if game_slug.startswith('lutris:'):
+ game_slug = game_slug[7:]
revision = None
if parsed_url.query:
query = dict(parse_qsl(parsed_url.query))
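A standalone sketch of why the parsed path can still carry the scheme prefix; the game slug is invented, and the doubled-scheme URL is only one way such input can reach the handler:

```python
from urllib.parse import urlparse

# e.g. a desktop shortcut that feeds an already-prefixed slug back through the URL handler
parsed = urlparse("lutris:lutris:quake-champions", scheme="lutris")
print(parsed.path)  # -> 'lutris:quake-champions'

game_slug = parsed.path
if game_slug.startswith("lutris:"):
    game_slug = game_slug[7:]  # len('lutris:') == 7, as in the patch above
print(game_slug)  # -> 'quake-champions'
```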
|
{"golden_diff": "diff --git a/lutris/util/resources.py b/lutris/util/resources.py\n--- a/lutris/util/resources.py\n+++ b/lutris/util/resources.py\n@@ -107,6 +107,8 @@\n game_slug = parsed_url.path\n if not game_slug:\n return False\n+ if game_slug.startswith('lutris:'):\n+ game_slug = game_slug[7:]\n revision = None\n if parsed_url.query:\n query = dict(parse_qsl(parsed_url.query))\n", "issue": "Lutris shortcuts broken\nSee: https://forums.lutris.net/t/desktop-shortcut-not-work-for-any-game/456\n", "before_files": [{"content": "import os\nimport re\nimport concurrent.futures\nfrom urllib.parse import urlparse, parse_qsl\n\nfrom lutris import settings\nfrom lutris import api\nfrom lutris.util.log import logger\nfrom lutris.util.http import Request\n\nBANNER = \"banner\"\nICON = \"icon\"\n\n\ndef get_icon_path(game, icon_type):\n if icon_type == BANNER:\n return os.path.join(settings.BANNER_PATH, \"%s.jpg\" % game)\n if icon_type == ICON:\n return os.path.join(settings.ICON_PATH, \"lutris_%s.png\" % game)\n\n\ndef has_icon(game, icon_type):\n if icon_type == BANNER:\n icon_path = get_icon_path(game, BANNER)\n return os.path.exists(icon_path)\n elif icon_type == ICON:\n icon_path = get_icon_path(game, ICON)\n return os.path.exists(icon_path)\n\n\ndef fetch_icons(game_slugs, callback=None):\n no_banners = [slug for slug in game_slugs if not has_icon(slug, BANNER)]\n no_icons = [slug for slug in game_slugs if not has_icon(slug, ICON)]\n\n # Remove duplicate slugs\n missing_media_slugs = list(set(no_banners) | set(no_icons))\n if not missing_media_slugs:\n return\n\n response = api.get_games(game_slugs=missing_media_slugs)\n if not response:\n logger.warning('Unable to get games from API')\n return\n results = response['results']\n while response.get('next'):\n page_match = re.search(r'page=(\\d+)', response['next'])\n if page_match:\n page = page_match.group(1)\n else:\n logger.error(\"No page found in %s\", response['next'])\n break\n response = api.get_games(game_slugs=missing_media_slugs, page=page)\n if not response:\n logger.warning(\"Unable to get response for page %s\", page)\n break\n else:\n results += response.get('results', [])\n\n banner_downloads = []\n icon_downloads = []\n updated_slugs = []\n for game in results:\n if game['slug'] in no_banners:\n banner_url = game['banner_url']\n if banner_url:\n dest_path = get_icon_path(game['slug'], BANNER)\n banner_downloads.append((game['banner_url'], dest_path))\n updated_slugs.append(game['slug'])\n if game['slug'] in no_icons:\n icon_url = game['icon_url']\n if icon_url:\n dest_path = get_icon_path(game['slug'], ICON)\n icon_downloads.append((game['icon_url'], dest_path))\n updated_slugs.append(game['slug'])\n\n updated_slugs = list(set(updated_slugs)) # Deduplicate slugs\n\n downloads = banner_downloads + icon_downloads\n with concurrent.futures.ThreadPoolExecutor(max_workers=16) as executor:\n for url, dest_path in downloads:\n executor.submit(download_media, url, dest_path)\n\n if updated_slugs and callback:\n callback(updated_slugs)\n\n\ndef download_media(url, dest, overwrite=False):\n if os.path.exists(dest):\n if overwrite:\n os.remove(dest)\n else:\n return\n request = Request(url).get()\n request.write_to_file(dest)\n\n\ndef parse_installer_url(url):\n \"\"\"\n Parses `lutris:` urls, extracting any info necessary to install or run a game.\n \"\"\"\n try:\n parsed_url = urlparse(url, scheme=\"lutris\")\n except:\n return False\n if parsed_url.scheme != \"lutris\":\n return False\n game_slug = parsed_url.path\n if not 
game_slug:\n return False\n revision = None\n if parsed_url.query:\n query = dict(parse_qsl(parsed_url.query))\n revision = query.get('revision')\n return {\n 'game_slug': game_slug,\n 'revision': revision\n }\n", "path": "lutris/util/resources.py"}]}
| 1,659 | 111 |
gh_patches_debug_22277
|
rasdani/github-patches
|
git_diff
|
localstack__localstack-4755
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feature request: Update Elastic Search to 7.10 for supporting case insensitive term queries
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
As a DEVELOPER for a Bayer project we need LocalStack with Elastic Search version 7.10.
In version 7.10 the `case_insensitive` option was added to term queries. [Term Query Doc](https://www.elastic.co/guide/en/elasticsearch/reference/7.10/query-dsl-term-query.html)
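For reference, the request that motivates the upgrade looks roughly like this (a sketch — the index name, field, and local endpoint below are assumptions, not taken from the project):

```python
# Sketch: a term query using `case_insensitive`, which Elasticsearch accepts
# from 7.10 onwards; the currently bundled 7.7.0 rejects the parameter.
# Index/field names and the endpoint URL are made up for illustration.
import json
from urllib.request import Request, urlopen

query = {
    "query": {
        "term": {
            "status.keyword": {"value": "Active", "case_insensitive": True}
        }
    }
}
req = Request(
    "http://localhost:4571/my-index/_search",  # assumed local ES endpoint
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urlopen(req).read().decode("utf-8"))
```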
## Remarks
- 7.10 is the latest version under free Apache 2.0 license.
- Switching to [Opensearch](https://github.com/opensearch-project/OpenSearch) would be also possible.
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_
</issue>
<code>
[start of localstack/constants.py]
1 import os
2
3 import localstack_client.config
4
5 import localstack
6
7 # LocalStack version
8 VERSION = localstack.__version__
9
10 # constant to represent the "local" region, i.e., local machine
11 REGION_LOCAL = "local"
12
13 # dev environment
14 ENV_DEV = "dev"
15
16 # HTTP headers used to forward proxy request URLs
17 HEADER_LOCALSTACK_EDGE_URL = "x-localstack-edge"
18 HEADER_LOCALSTACK_REQUEST_URL = "x-localstack-request-url"
19 HEADER_LOCALSTACK_TARGET = "x-localstack-target"
20 HEADER_AMZN_ERROR_TYPE = "X-Amzn-Errortype"
21
22 # backend service ports, for services that are behind a proxy (counting down from 4566)
23 DEFAULT_PORT_EDGE = 4566
24
25 # host name for localhost
26 LOCALHOST = "localhost"
27 LOCALHOST_IP = "127.0.0.1"
28 LOCALHOST_HOSTNAME = "localhost.localstack.cloud"
29
30 # version of the Maven dependency with Java utility code
31 LOCALSTACK_MAVEN_VERSION = "0.2.14"
32
33 # map of default service APIs and ports to be spun up (fetch map from localstack_client)
34 DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()
35
36 # host to bind to when starting the services
37 BIND_HOST = "0.0.0.0"
38
39 # AWS user account ID used for tests
40 if "TEST_AWS_ACCOUNT_ID" not in os.environ:
41 os.environ["TEST_AWS_ACCOUNT_ID"] = "000000000000"
42 TEST_AWS_ACCOUNT_ID = os.environ["TEST_AWS_ACCOUNT_ID"]
43
44 # root code folder
45 MODULE_MAIN_PATH = os.path.dirname(os.path.realpath(__file__))
46 # TODO rename to "ROOT_FOLDER"!
47 LOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(MODULE_MAIN_PATH, ".."))
48 INSTALL_DIR_INFRA = os.path.join(MODULE_MAIN_PATH, "infra")
49
50 # virtualenv folder
51 LOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, ".venv")
52 if not os.path.isdir(LOCALSTACK_VENV_FOLDER):
53 # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/
54 LOCALSTACK_VENV_FOLDER = os.path.realpath(
55 os.path.join(LOCALSTACK_ROOT_FOLDER, "..", "..", "..")
56 )
57
58 # API Gateway path to indicate a user request sent to the gateway
59 PATH_USER_REQUEST = "_user_request_"
60
61 # name of LocalStack Docker image
62 DOCKER_IMAGE_NAME = "localstack/localstack"
63 DOCKER_IMAGE_NAME_FULL = "localstack/localstack-full"
64
65 # backdoor API path used to retrieve or update config variables
66 CONFIG_UPDATE_PATH = "/?_config_"
67
68 # environment variable name to tag local test runs
69 ENV_INTERNAL_TEST_RUN = "LOCALSTACK_INTERNAL_TEST_RUN"
70
71 # environment variable that flags whether pro was activated. do not use for security purposes!
72 ENV_PRO_ACTIVATED = "PRO_ACTIVATED"
73
74 # content types
75 HEADER_CONTENT_TYPE = "Content-Type"
76 APPLICATION_AMZ_JSON_1_0 = "application/x-amz-json-1.0"
77 APPLICATION_AMZ_JSON_1_1 = "application/x-amz-json-1.1"
78 APPLICATION_AMZ_CBOR_1_1 = "application/x-amz-cbor-1.1"
79 APPLICATION_CBOR = "application/cbor"
80 APPLICATION_JSON = "application/json"
81 APPLICATION_XML = "application/xml"
82 APPLICATION_OCTET_STREAM = "application/octet-stream"
83 APPLICATION_X_WWW_FORM_URLENCODED = "application/x-www-form-urlencoded"
84
85 # strings to indicate truthy/falsy values
86 TRUE_STRINGS = ("1", "true", "True")
87 FALSE_STRINGS = ("0", "false", "False")
88 # strings with valid log levels for LS_LOG
89 LOG_LEVELS = ("trace-internal", "trace", "debug", "info", "warn", "error", "warning")
90
91 # Lambda defaults
92 LAMBDA_TEST_ROLE = "arn:aws:iam::%s:role/lambda-test-role" % TEST_AWS_ACCOUNT_ID
93
94 # installation constants
95 ELASTICSEARCH_URLS = {
96 "7.7.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-linux-x86_64.tar.gz",
97 "7.4.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.0-linux-x86_64.tar.gz",
98 "7.1.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz",
99 "6.7.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.zip",
100 }
101 ELASTICSEARCH_DEFAULT_VERSION = "7.7.0"
102 # See https://docs.aws.amazon.com/ja_jp/elasticsearch-service/latest/developerguide/aes-supported-plugins.html
103 ELASTICSEARCH_PLUGIN_LIST = [
104 "analysis-icu",
105 "ingest-attachment",
106 "analysis-kuromoji",
107 "mapper-murmur3",
108 "mapper-size",
109 "analysis-phonetic",
110 "analysis-smartcn",
111 "analysis-stempel",
112 "analysis-ukrainian",
113 ]
114 # Default ES modules to exclude (save apprx 66MB in the final image)
115 ELASTICSEARCH_DELETE_MODULES = ["ingest-geoip"]
116 ELASTICMQ_JAR_URL = (
117 "https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-1.1.0.jar"
118 )
119 STS_JAR_URL = "https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar"
120 STEPFUNCTIONS_ZIP_URL = "https://s3.amazonaws.com/stepfunctionslocal/StepFunctionsLocal.zip"
121 KMS_URL_PATTERN = "https://s3-eu-west-2.amazonaws.com/local-kms/localstack/v3/local-kms.<arch>.bin"
122
123 # TODO: Temporarily using a fixed version of DDB in Alpine, as we're hitting a SIGSEGV JVM crash with latest
124 DYNAMODB_JAR_URL_ALPINE = "https://github.com/localstack/localstack-artifacts/raw/master/dynamodb-local-patch/etc/DynamoDBLocal.zip"
125 DYNAMODB_JAR_URL = "https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip"
126
127 # API endpoint for analytics events
128 API_ENDPOINT = os.environ.get("API_ENDPOINT") or "https://api.localstack.cloud/v1"
129 # new analytics API endpoint
130 ANALYTICS_API = os.environ.get("ANALYTICS_API") or "https://analytics.localstack.cloud/v0"
131
132 # environment variable to indicates that this process is running the Web UI
133 LOCALSTACK_WEB_PROCESS = "LOCALSTACK_WEB_PROCESS"
134 LOCALSTACK_INFRA_PROCESS = "LOCALSTACK_INFRA_PROCESS"
135
136 # hardcoded AWS account ID used by moto
137 MOTO_ACCOUNT_ID = TEST_AWS_ACCOUNT_ID
138 # fix moto account ID - note: keep this at the top level here
139 try:
140 from moto import core as moto_core
141 from moto.core import models as moto_core_models
142
143 moto_core.ACCOUNT_ID = moto_core_models.ACCOUNT_ID = MOTO_ACCOUNT_ID
144 except Exception:
145 # ignore import errors
146 pass
147
148 # default AWS region us-east-1
149 AWS_REGION_US_EAST_1 = "us-east-1"
150
151 # default lambda registry
152 DEFAULT_LAMBDA_CONTAINER_REGISTRY = "lambci/lambda"
153
154 # environment variable to override max pool connections
155 try:
156 MAX_POOL_CONNECTIONS = int(os.environ["MAX_POOL_CONNECTIONS"])
157 except Exception:
158 MAX_POOL_CONNECTIONS = 150
159
160 # test credentials used for generating signature for S3 presigned URLs (to be used by external clients)
161 TEST_AWS_ACCESS_KEY_ID = "test"
162 TEST_AWS_SECRET_ACCESS_KEY = "test"
163
164 # credentials being used for internal calls
165 INTERNAL_AWS_ACCESS_KEY_ID = "__internal_call__"
166 INTERNAL_AWS_SECRET_ACCESS_KEY = "__internal_call__"
167
168 # trace log levels (excluding/including internal API calls), configurable via $LS_LOG
169 LS_LOG_TRACE = "trace"
170 LS_LOG_TRACE_INTERNAL = "trace-internal"
171 TRACE_LOG_LEVELS = [LS_LOG_TRACE, LS_LOG_TRACE_INTERNAL]
172
173 # list of official docker images
174 OFFICIAL_IMAGES = [
175 "localstack/localstack",
176 "localstack/localstack-light",
177 "localstack/localstack-full",
178 ]
179
180 # s3 virtual host name
181 S3_VIRTUAL_HOSTNAME = "s3.%s" % LOCALHOST_HOSTNAME
182 S3_STATIC_WEBSITE_HOSTNAME = "s3-website.%s" % LOCALHOST_HOSTNAME
183
184 # port for debug py
185 DEFAULT_DEVELOP_PORT = 5678
186
187 # Default bucket name of the s3 bucket used for local lambda development
188 DEFAULT_BUCKET_MARKER_LOCAL = "__local__"
189
190 # user that starts the elasticsearch process if the current user is root
191 OS_USER_ELASTICSEARCH = "localstack"
192
[end of localstack/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/localstack/constants.py b/localstack/constants.py
--- a/localstack/constants.py
+++ b/localstack/constants.py
@@ -93,12 +93,13 @@
# installation constants
ELASTICSEARCH_URLS = {
+ "7.10.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-linux-x86_64.tar.gz",
"7.7.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-linux-x86_64.tar.gz",
"7.4.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.0-linux-x86_64.tar.gz",
"7.1.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz",
"6.7.0": "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.zip",
}
-ELASTICSEARCH_DEFAULT_VERSION = "7.7.0"
+ELASTICSEARCH_DEFAULT_VERSION = "7.10.0"
# See https://docs.aws.amazon.com/ja_jp/elasticsearch-service/latest/developerguide/aes-supported-plugins.html
ELASTICSEARCH_PLUGIN_LIST = [
"analysis-icu",
|
{"golden_diff": "diff --git a/localstack/constants.py b/localstack/constants.py\n--- a/localstack/constants.py\n+++ b/localstack/constants.py\n@@ -93,12 +93,13 @@\n \n # installation constants\n ELASTICSEARCH_URLS = {\n+ \"7.10.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-linux-x86_64.tar.gz\",\n \"7.7.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-linux-x86_64.tar.gz\",\n \"7.4.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.0-linux-x86_64.tar.gz\",\n \"7.1.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz\",\n \"6.7.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.zip\",\n }\n-ELASTICSEARCH_DEFAULT_VERSION = \"7.7.0\"\n+ELASTICSEARCH_DEFAULT_VERSION = \"7.10.0\"\n # See https://docs.aws.amazon.com/ja_jp/elasticsearch-service/latest/developerguide/aes-supported-plugins.html\n ELASTICSEARCH_PLUGIN_LIST = [\n \"analysis-icu\",\n", "issue": "feature request: Update Elastic Search to 7.10 for supporting case insensitive term queries\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Feature description\n\nAs a DEVELOPER for a Bayer project we need LocalStack with Elastic Search version 7.10.\r\n\r\nIn version 7.10 `case_insensitive` option were be added. [Term Query Doc](https://www.elastic.co/guide/en/elasticsearch/reference/7.10/query-dsl-term-query.html)\r\n\r\n## Remarks\r\n- 7.10 is the latest version under free Apache 2.0 license.\r\n- Switching to [Opensearch](https://github.com/opensearch-project/OpenSearch) would be also possible.\n\n### \ud83e\uddd1\u200d\ud83d\udcbb Implementation\n\n_No response_\n\n### Anything else?\n\n_No response_\n", "before_files": [{"content": "import os\n\nimport localstack_client.config\n\nimport localstack\n\n# LocalStack version\nVERSION = localstack.__version__\n\n# constant to represent the \"local\" region, i.e., local machine\nREGION_LOCAL = \"local\"\n\n# dev environment\nENV_DEV = \"dev\"\n\n# HTTP headers used to forward proxy request URLs\nHEADER_LOCALSTACK_EDGE_URL = \"x-localstack-edge\"\nHEADER_LOCALSTACK_REQUEST_URL = \"x-localstack-request-url\"\nHEADER_LOCALSTACK_TARGET = \"x-localstack-target\"\nHEADER_AMZN_ERROR_TYPE = \"X-Amzn-Errortype\"\n\n# backend service ports, for services that are behind a proxy (counting down from 4566)\nDEFAULT_PORT_EDGE = 4566\n\n# host name for localhost\nLOCALHOST = \"localhost\"\nLOCALHOST_IP = \"127.0.0.1\"\nLOCALHOST_HOSTNAME = \"localhost.localstack.cloud\"\n\n# version of the Maven dependency with Java utility code\nLOCALSTACK_MAVEN_VERSION = \"0.2.14\"\n\n# map of default service APIs and ports to be spun up (fetch map from localstack_client)\nDEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n\n# host to bind to when starting the services\nBIND_HOST = \"0.0.0.0\"\n\n# AWS user account ID used for tests\nif \"TEST_AWS_ACCOUNT_ID\" not in os.environ:\n os.environ[\"TEST_AWS_ACCOUNT_ID\"] = \"000000000000\"\nTEST_AWS_ACCOUNT_ID = os.environ[\"TEST_AWS_ACCOUNT_ID\"]\n\n# root code folder\nMODULE_MAIN_PATH = os.path.dirname(os.path.realpath(__file__))\n# TODO rename to \"ROOT_FOLDER\"!\nLOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(MODULE_MAIN_PATH, \"..\"))\nINSTALL_DIR_INFRA = os.path.join(MODULE_MAIN_PATH, \"infra\")\n\n# virtualenv folder\nLOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, \".venv\")\nif not 
os.path.isdir(LOCALSTACK_VENV_FOLDER):\n # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/\n LOCALSTACK_VENV_FOLDER = os.path.realpath(\n os.path.join(LOCALSTACK_ROOT_FOLDER, \"..\", \"..\", \"..\")\n )\n\n# API Gateway path to indicate a user request sent to the gateway\nPATH_USER_REQUEST = \"_user_request_\"\n\n# name of LocalStack Docker image\nDOCKER_IMAGE_NAME = \"localstack/localstack\"\nDOCKER_IMAGE_NAME_FULL = \"localstack/localstack-full\"\n\n# backdoor API path used to retrieve or update config variables\nCONFIG_UPDATE_PATH = \"/?_config_\"\n\n# environment variable name to tag local test runs\nENV_INTERNAL_TEST_RUN = \"LOCALSTACK_INTERNAL_TEST_RUN\"\n\n# environment variable that flags whether pro was activated. do not use for security purposes!\nENV_PRO_ACTIVATED = \"PRO_ACTIVATED\"\n\n# content types\nHEADER_CONTENT_TYPE = \"Content-Type\"\nAPPLICATION_AMZ_JSON_1_0 = \"application/x-amz-json-1.0\"\nAPPLICATION_AMZ_JSON_1_1 = \"application/x-amz-json-1.1\"\nAPPLICATION_AMZ_CBOR_1_1 = \"application/x-amz-cbor-1.1\"\nAPPLICATION_CBOR = \"application/cbor\"\nAPPLICATION_JSON = \"application/json\"\nAPPLICATION_XML = \"application/xml\"\nAPPLICATION_OCTET_STREAM = \"application/octet-stream\"\nAPPLICATION_X_WWW_FORM_URLENCODED = \"application/x-www-form-urlencoded\"\n\n# strings to indicate truthy/falsy values\nTRUE_STRINGS = (\"1\", \"true\", \"True\")\nFALSE_STRINGS = (\"0\", \"false\", \"False\")\n# strings with valid log levels for LS_LOG\nLOG_LEVELS = (\"trace-internal\", \"trace\", \"debug\", \"info\", \"warn\", \"error\", \"warning\")\n\n# Lambda defaults\nLAMBDA_TEST_ROLE = \"arn:aws:iam::%s:role/lambda-test-role\" % TEST_AWS_ACCOUNT_ID\n\n# installation constants\nELASTICSEARCH_URLS = {\n \"7.7.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.7.0-linux-x86_64.tar.gz\",\n \"7.4.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.0-linux-x86_64.tar.gz\",\n \"7.1.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz\",\n \"6.7.0\": \"https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.zip\",\n}\nELASTICSEARCH_DEFAULT_VERSION = \"7.7.0\"\n# See https://docs.aws.amazon.com/ja_jp/elasticsearch-service/latest/developerguide/aes-supported-plugins.html\nELASTICSEARCH_PLUGIN_LIST = [\n \"analysis-icu\",\n \"ingest-attachment\",\n \"analysis-kuromoji\",\n \"mapper-murmur3\",\n \"mapper-size\",\n \"analysis-phonetic\",\n \"analysis-smartcn\",\n \"analysis-stempel\",\n \"analysis-ukrainian\",\n]\n# Default ES modules to exclude (save apprx 66MB in the final image)\nELASTICSEARCH_DELETE_MODULES = [\"ingest-geoip\"]\nELASTICMQ_JAR_URL = (\n \"https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-1.1.0.jar\"\n)\nSTS_JAR_URL = \"https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar\"\nSTEPFUNCTIONS_ZIP_URL = \"https://s3.amazonaws.com/stepfunctionslocal/StepFunctionsLocal.zip\"\nKMS_URL_PATTERN = \"https://s3-eu-west-2.amazonaws.com/local-kms/localstack/v3/local-kms.<arch>.bin\"\n\n# TODO: Temporarily using a fixed version of DDB in Alpine, as we're hitting a SIGSEGV JVM crash with latest\nDYNAMODB_JAR_URL_ALPINE = \"https://github.com/localstack/localstack-artifacts/raw/master/dynamodb-local-patch/etc/DynamoDBLocal.zip\"\nDYNAMODB_JAR_URL = \"https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip\"\n\n# API endpoint for analytics events\nAPI_ENDPOINT = 
os.environ.get(\"API_ENDPOINT\") or \"https://api.localstack.cloud/v1\"\n# new analytics API endpoint\nANALYTICS_API = os.environ.get(\"ANALYTICS_API\") or \"https://analytics.localstack.cloud/v0\"\n\n# environment variable to indicates that this process is running the Web UI\nLOCALSTACK_WEB_PROCESS = \"LOCALSTACK_WEB_PROCESS\"\nLOCALSTACK_INFRA_PROCESS = \"LOCALSTACK_INFRA_PROCESS\"\n\n# hardcoded AWS account ID used by moto\nMOTO_ACCOUNT_ID = TEST_AWS_ACCOUNT_ID\n# fix moto account ID - note: keep this at the top level here\ntry:\n from moto import core as moto_core\n from moto.core import models as moto_core_models\n\n moto_core.ACCOUNT_ID = moto_core_models.ACCOUNT_ID = MOTO_ACCOUNT_ID\nexcept Exception:\n # ignore import errors\n pass\n\n# default AWS region us-east-1\nAWS_REGION_US_EAST_1 = \"us-east-1\"\n\n# default lambda registry\nDEFAULT_LAMBDA_CONTAINER_REGISTRY = \"lambci/lambda\"\n\n# environment variable to override max pool connections\ntry:\n MAX_POOL_CONNECTIONS = int(os.environ[\"MAX_POOL_CONNECTIONS\"])\nexcept Exception:\n MAX_POOL_CONNECTIONS = 150\n\n# test credentials used for generating signature for S3 presigned URLs (to be used by external clients)\nTEST_AWS_ACCESS_KEY_ID = \"test\"\nTEST_AWS_SECRET_ACCESS_KEY = \"test\"\n\n# credentials being used for internal calls\nINTERNAL_AWS_ACCESS_KEY_ID = \"__internal_call__\"\nINTERNAL_AWS_SECRET_ACCESS_KEY = \"__internal_call__\"\n\n# trace log levels (excluding/including internal API calls), configurable via $LS_LOG\nLS_LOG_TRACE = \"trace\"\nLS_LOG_TRACE_INTERNAL = \"trace-internal\"\nTRACE_LOG_LEVELS = [LS_LOG_TRACE, LS_LOG_TRACE_INTERNAL]\n\n# list of official docker images\nOFFICIAL_IMAGES = [\n \"localstack/localstack\",\n \"localstack/localstack-light\",\n \"localstack/localstack-full\",\n]\n\n# s3 virtual host name\nS3_VIRTUAL_HOSTNAME = \"s3.%s\" % LOCALHOST_HOSTNAME\nS3_STATIC_WEBSITE_HOSTNAME = \"s3-website.%s\" % LOCALHOST_HOSTNAME\n\n# port for debug py\nDEFAULT_DEVELOP_PORT = 5678\n\n# Default bucket name of the s3 bucket used for local lambda development\nDEFAULT_BUCKET_MARKER_LOCAL = \"__local__\"\n\n# user that starts the elasticsearch process if the current user is root\nOS_USER_ELASTICSEARCH = \"localstack\"\n", "path": "localstack/constants.py"}]}
| 3,130 | 308 |
gh_patches_debug_53951
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-985
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cfn-lint is failing because of `pkg_resources.ContextualVersionConflict: (jsonschema 2.6.0)`.
*cfn-lint version: (`0.21.6`)*
*Description of issue.*
cfn-lint (python2) requires jsonschema 2.6.0, but aws-sam-translator, which got released today, requires jsonschema 3.0.
https://pypi.org/project/aws-sam-translator/#history
pkg_resources.ContextualVersionConflict: (jsonschema 2.6.0 (/usr/lib/python2.7/site-packages), Requirement.parse('jsonschema~=3.0'), set(['aws-sam-translator']))
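The same conflict can be reproduced outside the cfn-lint entry point with a short probe (a sketch, assuming both the old jsonschema pin and today's aws-sam-translator release are installed in the environment):

```python
# Sketch: pkg_resources resolves cfn-lint's requirement chain and raises
# ContextualVersionConflict (a VersionConflict subclass) when the installed
# jsonschema 2.6.0 contradicts aws-sam-translator's `jsonschema~=3.0` pin.
import pkg_resources

try:
    pkg_resources.require("cfn-lint")
except pkg_resources.VersionConflict as exc:
    print(exc.report())
```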
</issue>
<code>
[start of setup.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import codecs
18 import re
19 from setuptools import find_packages
20 from setuptools import setup
21
22
23 def get_version(filename):
24 with codecs.open(filename, 'r', 'utf-8') as fp:
25 contents = fp.read()
26 return re.search(r"__version__ = ['\"]([^'\"]+)['\"]", contents).group(1)
27
28
29 version = get_version('src/cfnlint/version.py')
30
31
32 with open('README.md') as f:
33 readme = f.read()
34
35 setup(
36 name='cfn-lint',
37 version=version,
38 description=('checks cloudformation for practices and behaviour \
39 that could potentially be improved'),
40 long_description=readme,
41 long_description_content_type="text/markdown",
42 keywords='aws, lint',
43 author='kddejong',
44 author_email='[email protected]',
45 url='https://github.com/aws-cloudformation/cfn-python-lint',
46 package_dir={'': 'src'},
47 package_data={'cfnlint': [
48 'data/CloudSpecs/*.json',
49 'data/AdditionalSpecs/*.json',
50 'data/Serverless/*.json',
51 'data/ExtendedSpecs/all/*.json',
52 'data/ExtendedSpecs/ap-northeast-1/*.json',
53 'data/ExtendedSpecs/ap-northeast-2/*.json',
54 'data/ExtendedSpecs/ap-northeast-3/*.json',
55 'data/ExtendedSpecs/ap-south-1/*.json',
56 'data/ExtendedSpecs/ap-southeast-1/*.json',
57 'data/ExtendedSpecs/ap-southeast-2/*.json',
58 'data/ExtendedSpecs/ca-central-1/*.json',
59 'data/ExtendedSpecs/eu-central-1/*.json',
60 'data/ExtendedSpecs/eu-north-1/*.json',
61 'data/ExtendedSpecs/eu-west-1/*.json',
62 'data/ExtendedSpecs/eu-west-2/*.json',
63 'data/ExtendedSpecs/eu-west-3/*.json',
64 'data/ExtendedSpecs/sa-east-1/*.json',
65 'data/ExtendedSpecs/us-east-1/*.json',
66 'data/ExtendedSpecs/us-east-2/*.json',
67 'data/ExtendedSpecs/us-gov-east-1/*.json',
68 'data/ExtendedSpecs/us-gov-west-1/*.json',
69 'data/ExtendedSpecs/us-west-1/*.json',
70 'data/ExtendedSpecs/us-west-2/*.json',
71 'data/CfnLintCli/config/schema.json'
72 ]},
73 packages=find_packages('src'),
74 zip_safe=False,
75 install_requires=[
76 'pyyaml',
77 'six~=1.11',
78 'requests>=2.15.0',
79 'aws-sam-translator>=1.10.0',
80 'jsonpatch',
81 'jsonschema~=2.6',
82 'pathlib2>=2.3.0;python_version<"3.4"',
83 'setuptools',
84 ],
85 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
86 entry_points={
87 'console_scripts': [
88 'cfn-lint = cfnlint.__main__:main'
89 ]
90 },
91 license='MIT no attribution',
92 test_suite="unittest",
93 classifiers=[
94 'Development Status :: 5 - Production/Stable',
95 'Intended Audience :: Developers',
96 'License :: OSI Approved :: MIT License',
97 'Natural Language :: English',
98 'Operating System :: OS Independent',
99 'Programming Language :: Python :: 2',
100 'Programming Language :: Python :: 2.7',
101 'Programming Language :: Python :: 3',
102 'Programming Language :: Python :: 3.4',
103 'Programming Language :: Python :: 3.5',
104 'Programming Language :: Python :: 3.6',
105 'Programming Language :: Python :: 3.7',
106 ],
107 )
108
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -78,7 +78,7 @@
'requests>=2.15.0',
'aws-sam-translator>=1.10.0',
'jsonpatch',
- 'jsonschema~=2.6',
+ 'jsonschema~=3.0',
'pathlib2>=2.3.0;python_version<"3.4"',
'setuptools',
],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -78,7 +78,7 @@\n 'requests>=2.15.0',\n 'aws-sam-translator>=1.10.0',\n 'jsonpatch',\n- 'jsonschema~=2.6',\n+ 'jsonschema~=3.0',\n 'pathlib2>=2.3.0;python_version<\"3.4\"',\n 'setuptools',\n ],\n", "issue": "cfn-lint is failing because of `pkg_resources.ContextualVersionConflict: (jsonschema 2.6.0)`. \n*cfn-lint version: (`0.21.6`)*\r\n\r\n*Description of issue.*\r\ncfn-lint(python2) requires jsonschema 2.6.0 but aws-sam-translator which got released today requires jsonschema3.0\r\n\r\nhttps://pypi.org/project/aws-sam-translator/#history\r\npkg_resources.ContextualVersionConflict: (jsonschema 2.6.0 (/usr/lib/python2.7/site-packages), Requirement.parse('jsonschema~=3.0'), set(['aws-sam-translator']))\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport codecs\nimport re\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef get_version(filename):\n with codecs.open(filename, 'r', 'utf-8') as fp:\n contents = fp.read()\n return re.search(r\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", contents).group(1)\n\n\nversion = get_version('src/cfnlint/version.py')\n\n\nwith open('README.md') as f:\n readme = f.read()\n\nsetup(\n name='cfn-lint',\n version=version,\n description=('checks cloudformation for practices and behaviour \\\n that could potentially be improved'),\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n keywords='aws, lint',\n author='kddejong',\n author_email='[email protected]',\n url='https://github.com/aws-cloudformation/cfn-python-lint',\n package_dir={'': 'src'},\n package_data={'cfnlint': [\n 'data/CloudSpecs/*.json',\n 'data/AdditionalSpecs/*.json',\n 'data/Serverless/*.json',\n 'data/ExtendedSpecs/all/*.json',\n 'data/ExtendedSpecs/ap-northeast-1/*.json',\n 'data/ExtendedSpecs/ap-northeast-2/*.json',\n 'data/ExtendedSpecs/ap-northeast-3/*.json',\n 'data/ExtendedSpecs/ap-south-1/*.json',\n 'data/ExtendedSpecs/ap-southeast-1/*.json',\n 'data/ExtendedSpecs/ap-southeast-2/*.json',\n 'data/ExtendedSpecs/ca-central-1/*.json',\n 'data/ExtendedSpecs/eu-central-1/*.json',\n 'data/ExtendedSpecs/eu-north-1/*.json',\n 'data/ExtendedSpecs/eu-west-1/*.json',\n 'data/ExtendedSpecs/eu-west-2/*.json',\n 'data/ExtendedSpecs/eu-west-3/*.json',\n 'data/ExtendedSpecs/sa-east-1/*.json',\n 'data/ExtendedSpecs/us-east-1/*.json',\n 'data/ExtendedSpecs/us-east-2/*.json',\n 'data/ExtendedSpecs/us-gov-east-1/*.json',\n 'data/ExtendedSpecs/us-gov-west-1/*.json',\n 'data/ExtendedSpecs/us-west-1/*.json',\n 
'data/ExtendedSpecs/us-west-2/*.json',\n 'data/CfnLintCli/config/schema.json'\n ]},\n packages=find_packages('src'),\n zip_safe=False,\n install_requires=[\n 'pyyaml',\n 'six~=1.11',\n 'requests>=2.15.0',\n 'aws-sam-translator>=1.10.0',\n 'jsonpatch',\n 'jsonschema~=2.6',\n 'pathlib2>=2.3.0;python_version<\"3.4\"',\n 'setuptools',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n entry_points={\n 'console_scripts': [\n 'cfn-lint = cfnlint.__main__:main'\n ]\n },\n license='MIT no attribution',\n test_suite=\"unittest\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n", "path": "setup.py"}]}
| 1,946 | 109 |
gh_patches_debug_31925
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-2198
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unknown option `no-edit` in git-mpr
This one is mostly for @Carreau: I just tried git mpr again, and this is what I got. Does it actually work for you on linux? This is on a linux 12.04 box with git 1.7.9.5.
```
(master)longs[ipython]> git mpr -m 2179
error: unknown option `no-edit'
usage: git fetch [<options>] [<repository> [<refspec>...]]
or: git fetch [<options>] <group>
or: git fetch --multiple [<options>] [(<repository> | <group>)...]
or: git fetch --all [<options>]
-v, --verbose be more verbose
-q, --quiet be more quiet
--all fetch from all remotes
-a, --append append to .git/FETCH_HEAD instead of overwriting
--upload-pack <path> path to upload pack on remote end
-f, --force force overwrite of local branch
-m, --multiple fetch from multiple remotes
-t, --tags fetch all tags and associated objects
-n do not fetch all tags (--no-tags)
-p, --prune prune remote-tracking branches no longer on remote
--recurse-submodules[=<on-demand>]
control recursive fetching of submodules
--dry-run dry run
-k, --keep keep downloaded pack
-u, --update-head-ok allow updating of HEAD ref
--progress force progress reporting
--depth <depth> deepen history of shallow clone
fatal: There is no merge to abort (MERGE_HEAD missing).
Traceback (most recent call last):
File "/home/fperez/usr/bin//git-mpr", line 117, in <module>
main()
File "/home/fperez/usr/bin//git-mpr", line 107, in main
merge_pr(num)
File "/home/fperez/usr/bin//git-mpr", line 46, in merge_pr
branch=branch,
File "/home/fperez/usr/bin//git-mpr", line 29, in merge_branch
check_call(['git', 'merge', '--abort'])
File "/usr/lib/python2.7/subprocess.py", line 511, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'merge', '--abort']' returned non-zero exit status 128
```
</issue>
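For what it's worth, a more defensive variant of the pull/abort helper could avoid both failure modes in the traceback above — it drops the `--no-edit` flag that this git build does not understand and only aborts when a merge is actually in progress. This is an illustrative sketch under those assumptions, not the project's actual fix:

```python
# Sketch only: redirect stdin from /dev/null so git treats the merge as
# non-interactive (no commit-message editor), and guard the abort on
# .git/MERGE_HEAD (assumes the current working directory is the repo root).
import os
from subprocess import CalledProcessError, check_call


def merge_branch(repo, branch):
    try:
        with open(os.devnull) as devnull:
            check_call(['git', 'pull', repo, branch], stdin=devnull)
    except CalledProcessError:
        if os.path.exists(os.path.join('.git', 'MERGE_HEAD')):
            check_call(['git', 'merge', '--abort'])
        return False
    return True
```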
<code>
[start of tools/git-mpr.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Usage:
5 python git-mpr.py -m 1657
6 """
7 from __future__ import print_function
8
9 import argparse
10 from subprocess import check_call, CalledProcessError
11
12 import gh_api
13
14 ipy_repository = 'git://github.com/ipython/ipython.git'
15 gh_project = "ipython/ipython"
16 not_merged = {}
17
18 def merge_branch(repo, branch ):
19 """try to merge the givent branch into the current one
20
21 If something does not goes smoothly, merge is aborted
22
23 Returns True if merge sucessfull, False otherwise
24 """
25 # Delete the branch first
26 try :
27 check_call(['git', 'pull', '--no-edit', repo, branch])
28 except CalledProcessError :
29 check_call(['git', 'merge', '--abort'])
30 return False
31 return True
32
33
34 def merge_pr(num):
35 """ try to merge the branch of PR `num` into current branch
36 """
37 # Get Github authorisation first, so that the user is prompted straight away
38 # if their login is needed.
39
40 pr = gh_api.get_pull_request(gh_project, num)
41 repo = pr['head']['repo']['clone_url']
42
43
44 branch = pr['head']['ref']
45 mergeable = merge_branch(repo=repo,
46 branch=branch,
47 )
48 if not mergeable :
49 cmd = "git pull "+repo+" "+branch
50 not_merged[str(num)] = cmd
51 print("==============================================================================")
52 print("Something went wrong merging this branch, you can try it manually by runngin :")
53 print(cmd)
54 print("==============================================================================")
55
56
57 def main(*args):
58 parser = argparse.ArgumentParser(
59 description="""
60 Merge (one|many) github pull request by their number.\
61
62 If pull request can't be merge as is, cancel merge,
63 and continue to the next if any.
64 """
65 )
66 parser.add_argument('-v2', '--githubapiv2', action='store_const', const=2)
67
68 grp = parser.add_mutually_exclusive_group()
69 grp.add_argument(
70 '-l',
71 '--list',
72 action='store_const',
73 const=True,
74 help='list PR, their number and their mergeability')
75 grp.add_argument('-a',
76 '--merge-all',
77 action='store_const',
78 const=True ,
79 help='try to merge as many PR as possible, one by one')
80 grp.add_argument('-m',
81 '--merge',
82 type=int,
83 help="The pull request numbers",
84 nargs='*',
85 metavar='pr-number')
86 args = parser.parse_args()
87
88 if(args.list):
89 pr_list = gh_api.get_pulls_list(gh_project)
90 for pr in pr_list :
91 mergeable = gh_api.get_pull_request(gh_project, pr['number'])['mergeable']
92
93 ismgb = u"√" if mergeable else " "
94 print(u"* #{number} [{ismgb}]: {title}".format(
95 number=pr['number'],
96 title=pr['title'],
97 ismgb=ismgb))
98
99 if(args.merge_all):
100 pr_list = gh_api.get_pulls_list(gh_project)
101 for pr in pr_list :
102 merge_pr(pr['number'])
103
104
105 elif args.merge:
106 for num in args.merge :
107 merge_pr(num)
108
109 if not_merged :
110 print('*************************************************************************************')
111 print('the following branch have not been merged automatically, considere doing it by hand :')
112 for num, cmd in not_merged.items() :
113 print( "PR {num}: {cmd}".format(num=num, cmd=cmd))
114 print('*************************************************************************************')
115
116 if __name__ == '__main__':
117 main()
118
[end of tools/git-mpr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/git-mpr.py b/tools/git-mpr.py
--- a/tools/git-mpr.py
+++ b/tools/git-mpr.py
@@ -6,6 +6,7 @@
"""
from __future__ import print_function
+import io, os
import argparse
from subprocess import check_call, CalledProcessError
@@ -24,7 +25,7 @@
"""
# Delete the branch first
try :
- check_call(['git', 'pull', '--no-edit', repo, branch])
+ check_call(['git', 'pull', repo, branch], stdin=io.open(os.devnull))
except CalledProcessError :
check_call(['git', 'merge', '--abort'])
return False
@@ -57,13 +58,11 @@
def main(*args):
parser = argparse.ArgumentParser(
description="""
- Merge (one|many) github pull request by their number.\
-
- If pull request can't be merge as is, cancel merge,
- and continue to the next if any.
+ Merge one or more github pull requests by their number. If any
+ one pull request can't be merged as is, its merge is ignored
+ and the process continues with the next ones (if any).
"""
)
- parser.add_argument('-v2', '--githubapiv2', action='store_const', const=2)
grp = parser.add_mutually_exclusive_group()
grp.add_argument(
@@ -77,8 +76,7 @@
action='store_const',
const=True ,
help='try to merge as many PR as possible, one by one')
- grp.add_argument('-m',
- '--merge',
+ parser.add_argument('integers',
type=int,
help="The pull request numbers",
nargs='*',
|
{"golden_diff": "diff --git a/tools/git-mpr.py b/tools/git-mpr.py\n--- a/tools/git-mpr.py\n+++ b/tools/git-mpr.py\n@@ -6,6 +6,7 @@\n \"\"\"\n from __future__ import print_function\n \n+import io, os\n import argparse\n from subprocess import check_call, CalledProcessError\n \n@@ -24,7 +25,7 @@\n \"\"\"\n # Delete the branch first\n try :\n- check_call(['git', 'pull', '--no-edit', repo, branch])\n+ check_call(['git', 'pull', repo, branch], stdin=io.open(os.devnull))\n except CalledProcessError :\n check_call(['git', 'merge', '--abort'])\n return False\n@@ -57,13 +58,11 @@\n def main(*args):\n parser = argparse.ArgumentParser(\n description=\"\"\"\n- Merge (one|many) github pull request by their number.\\\n- \n- If pull request can't be merge as is, cancel merge,\n- and continue to the next if any.\n+ Merge one or more github pull requests by their number. If any\n+ one pull request can't be merged as is, its merge is ignored\n+ and the process continues with the next ones (if any).\n \"\"\"\n )\n- parser.add_argument('-v2', '--githubapiv2', action='store_const', const=2)\n \n grp = parser.add_mutually_exclusive_group()\n grp.add_argument(\n@@ -77,8 +76,7 @@\n action='store_const',\n const=True ,\n help='try to merge as many PR as possible, one by one')\n- grp.add_argument('-m',\n- '--merge',\n+ parser.add_argument('integers',\n type=int,\n help=\"The pull request numbers\",\n nargs='*',\n", "issue": "Unknown option `no-edit` in git-mpr\nThis one is mostly for @Carreau: I just tried git mpr again, and this is what I got. Does it actually work for you on linux? This is on a linux 12.04 box with git 1.7.9.5.\n\n```\n(master)longs[ipython]> git mpr -m 2179\nerror: unknown option `no-edit'\nusage: git fetch [<options>] [<repository> [<refspec>...]]\n or: git fetch [<options>] <group>\n or: git fetch --multiple [<options>] [(<repository> | <group>)...]\n or: git fetch --all [<options>]\n\n -v, --verbose be more verbose\n -q, --quiet be more quiet\n --all fetch from all remotes\n -a, --append append to .git/FETCH_HEAD instead of overwriting\n --upload-pack <path> path to upload pack on remote end\n -f, --force force overwrite of local branch\n -m, --multiple fetch from multiple remotes\n -t, --tags fetch all tags and associated objects\n -n do not fetch all tags (--no-tags)\n -p, --prune prune remote-tracking branches no longer on remote\n --recurse-submodules[=<on-demand>]\n control recursive fetching of submodules\n --dry-run dry run\n -k, --keep keep downloaded pack\n -u, --update-head-ok allow updating of HEAD ref\n --progress force progress reporting\n --depth <depth> deepen history of shallow clone\n\nfatal: There is no merge to abort (MERGE_HEAD missing).\nTraceback (most recent call last):\n File \"/home/fperez/usr/bin//git-mpr\", line 117, in <module>\n main()\n File \"/home/fperez/usr/bin//git-mpr\", line 107, in main\n merge_pr(num)\n File \"/home/fperez/usr/bin//git-mpr\", line 46, in merge_pr\n branch=branch,\n File \"/home/fperez/usr/bin//git-mpr\", line 29, in merge_branch\n check_call(['git', 'merge', '--abort'])\n File \"/usr/lib/python2.7/subprocess.py\", line 511, in check_call\n raise CalledProcessError(retcode, cmd)\nsubprocess.CalledProcessError: Command '['git', 'merge', '--abort']' returned non-zero exit status 128\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nUsage:\n python git-mpr.py -m 1657\n\"\"\"\nfrom __future__ import print_function\n\nimport argparse\nfrom subprocess import check_call, CalledProcessError\n\nimport 
gh_api\n\nipy_repository = 'git://github.com/ipython/ipython.git'\ngh_project = \"ipython/ipython\"\nnot_merged = {}\n\ndef merge_branch(repo, branch ):\n \"\"\"try to merge the givent branch into the current one\n \n If something does not goes smoothly, merge is aborted\n \n Returns True if merge sucessfull, False otherwise\n \"\"\"\n # Delete the branch first\n try :\n check_call(['git', 'pull', '--no-edit', repo, branch])\n except CalledProcessError :\n check_call(['git', 'merge', '--abort'])\n return False\n return True\n\n\ndef merge_pr(num):\n \"\"\" try to merge the branch of PR `num` into current branch\n \"\"\"\n # Get Github authorisation first, so that the user is prompted straight away\n # if their login is needed.\n \n pr = gh_api.get_pull_request(gh_project, num)\n repo = pr['head']['repo']['clone_url']\n\n\n branch = pr['head']['ref']\n mergeable = merge_branch(repo=repo, \n branch=branch,\n )\n if not mergeable :\n cmd = \"git pull \"+repo+\" \"+branch\n not_merged[str(num)] = cmd\n print(\"==============================================================================\")\n print(\"Something went wrong merging this branch, you can try it manually by runngin :\")\n print(cmd)\n print(\"==============================================================================\")\n \n \ndef main(*args):\n parser = argparse.ArgumentParser(\n description=\"\"\"\n Merge (one|many) github pull request by their number.\\\n \n If pull request can't be merge as is, cancel merge,\n and continue to the next if any.\n \"\"\"\n )\n parser.add_argument('-v2', '--githubapiv2', action='store_const', const=2)\n\n grp = parser.add_mutually_exclusive_group()\n grp.add_argument(\n '-l',\n '--list',\n action='store_const',\n const=True,\n help='list PR, their number and their mergeability')\n grp.add_argument('-a',\n '--merge-all',\n action='store_const',\n const=True ,\n help='try to merge as many PR as possible, one by one')\n grp.add_argument('-m',\n '--merge',\n type=int,\n help=\"The pull request numbers\",\n nargs='*',\n metavar='pr-number')\n args = parser.parse_args()\n\n if(args.list):\n pr_list = gh_api.get_pulls_list(gh_project)\n for pr in pr_list :\n mergeable = gh_api.get_pull_request(gh_project, pr['number'])['mergeable']\n\n ismgb = u\"\u221a\" if mergeable else \" \"\n print(u\"* #{number} [{ismgb}]: {title}\".format(\n number=pr['number'],\n title=pr['title'],\n ismgb=ismgb))\n\n if(args.merge_all):\n pr_list = gh_api.get_pulls_list(gh_project)\n for pr in pr_list :\n merge_pr(pr['number'])\n\n\n elif args.merge:\n for num in args.merge :\n merge_pr(num)\n\n if not_merged :\n print('*************************************************************************************')\n print('the following branch have not been merged automatically, considere doing it by hand :')\n for num, cmd in not_merged.items() :\n print( \"PR {num}: {cmd}\".format(num=num, cmd=cmd))\n print('*************************************************************************************')\n\nif __name__ == '__main__':\n main()\n", "path": "tools/git-mpr.py"}]}
| 2,163 | 397 |
gh_patches_debug_2013
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-8555
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
with setuptools v70.0.0: `ModuleNotFoundError: No module named 'pkg_resources.extern'`
<!--
Welcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller
- For questions and general support, use the discussions forum.
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
-->
<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->
## Description of the issue
I have added some TODO notes below but wanted to submit this sooner than later for any other users running into this issue today to be able to find it.
This morning I noticed an error in my tests that exercise PyInstaller generated Windows .exe's in CI.
https://github.com/Chia-Network/chia-blockchain/actions/runs/9175546125/job/25229722015?pr=16898
```
Traceback (most recent call last):
File "Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_pkgres.py", line 158, in <module>
File "Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_pkgres.py", line 36, in _pyi_rthook
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "pkg_resources\__init__.py", line [7](https://github.com/Chia-Network/chia-blockchain/actions/runs/9175546125/job/25229722015?pr=16898#step:6:8)7, in <module>
ModuleNotFoundError: No module named 'pkg_resources.extern'
[2148] Failed to execute script 'pyi_rth_pkgres' due to unhandled exception!
```
First I correlated this with [the release of setuptools v70.0.0](https://pypi.org/project/setuptools/70.0.0/#history) a few hours earlier (and not a new PyInstaller release `:]`). After looking here and finding no issues reported I checked over at setuptools and found https://github.com/pypa/setuptools/issues/4374. In that discussion I noted that the issue appears with https://github.com/pypa/setuptools/commit/e9995828311c5e0c843622ca2be85e7f09f1ff0d and not its parent commit. That commit does indeed change how some of the `pkg_resources.extern` imports are handled inside `pkg_resources`. Another developer provided an example, though that example has not yet resulted in recreation of the issue.
### Context information (for bug reports)
* Output of `pyinstaller --version`: ```(paste here)``` (TODO: add this to my CI run)
* Version of Python: 3.10
* Platform: Windows (GitHub Actions runner)
* How you installed Python: in-house action https://github.com/chia-network/actions/setup-python that should, in this case, pass through to upstream https://github.com/actions/setup-python
* Did you also try this on another platform? Does it work there?
~Similar Linux (Rocky and Ubuntu) builds as well as macOS (Intel and ARM) builds and tests seem to continue to work fine. I could afford to review these runs in more detail for other relevant changes, but have not quite yet.~
When forcing the setuptools to be installed, this does happen on all of the platforms we build executables for (Rocky, Ubuntu (Intel and ARM), macOS (Intel and ARM), Windows)
* try the latest development version, using the following command:
https://github.com/Chia-Network/chia-blockchain/actions/runs/9179289212/job/25241848658?pr=18051 shows the failure using `develop`, specifically 676584885f2dfa1f885ab6155a5eda9150892c03.
* follow *all* the instructions in our "If Things Go Wrong" Guide
(https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and
### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)
* [x] start with clean installation
* [ ] use the latest development version
* [x] Run your frozen program **from a command window (shell)** — instead of double-clicking on it
* [ ] Package your program in **--onedir mode**
* [ ] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file
* [ ] Repackage you application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.
### A minimal example program which shows the error
TODO: I will try to create this
```
(paste text here)
“Minimal“ means: remove everything from your code which is not relevant for this bug,
esp. don't use external programs, remote requests, etc.
A very good example is https://gist.github.com/ronen/024cdae9ff2d50488438. This one helped
us reproducing and fixing a quite complex problem within approx 1 hour.
```
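In the absence of that minimal program, a small probe along these lines (my sketch, not the reporter's) would exercise the failing import once frozen and run as an executable:

```python
# Sketch: any script that imports pkg_resources causes PyInstaller to bundle
# it and run the pyi_rth_pkgres runtime hook at startup. Built with
# setuptools 70.0.0 the frozen executable should reproduce the
# ModuleNotFoundError above before reaching this code; with an older
# setuptools both names should import and print "ok".
import importlib

for name in ("pkg_resources", "pkg_resources.extern"):
    try:
        importlib.import_module(name)
        print(f"ok: {name}")
    except ModuleNotFoundError as exc:
        print(f"missing: {name} ({exc})")
```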
### Stacktrace / full error message
```
(paste text here)
```
Please also see <https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs>
for more about what would use to solve the issue.
</issue>
<code>
[start of PyInstaller/hooks/hook-pkg_resources.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2023, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License (version 2
5 # or later) with exception for distributing the bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)
10 #-----------------------------------------------------------------------------
11
12 from PyInstaller.utils.hooks import collect_submodules, check_requirement, can_import_module
13
14 # pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose
15 # them as pkg_resources.extern.*
16
17 # The `railroad` package is an optional requirement for `pyparsing`. `pyparsing.diagrams` depends on `railroad`, so
18 # filter it out when `railroad` is not available.
19 if can_import_module('railroad'):
20 hiddenimports = collect_submodules('pkg_resources._vendor')
21 else:
22 hiddenimports = collect_submodules(
23 'pkg_resources._vendor', filter=lambda name: 'pkg_resources._vendor.pyparsing.diagram' not in name
24 )
25
26 # pkg_resources v45.0 dropped support for Python 2 and added this module printing a warning. We could save some bytes if
27 # we would replace this by a fake module.
28 if check_requirement('setuptools >= 45.0.0, < 49.1.1'):
29 hiddenimports.append('pkg_resources.py2_warn')
30
31 excludedimports = ['__main__']
32
33 # Some more hidden imports. See:
34 # https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/15#issuecomment-663699288 `packaging` can either be
35 # its own package, or embedded in `pkg_resources._vendor.packaging`, or both.
36 hiddenimports += collect_submodules('packaging')
37
38 # As of v60.7, setuptools vendored jaraco and has pkg_resources use it. Currently, the pkg_resources._vendor.jaraco
39 # namespace package cannot be automatically scanned due to limited support for pure namespace packages in our hook
40 # utilities.
41 #
42 # In setuptools 60.7.0, the vendored jaraco.text package included "Lorem Ipsum.txt" data file, which also has to be
43 # collected. However, the presence of the data file (and the resulting directory hierarchy) confuses the importer's
44 # redirection logic; instead of trying to work-around that, tell user to upgrade or downgrade their setuptools.
45 if check_requirement("setuptools == 60.7.0"):
46 raise SystemExit(
47 "ERROR: Setuptools 60.7.0 is incompatible with PyInstaller. "
48 "Downgrade to an earlier version or upgrade to a later version."
49 )
50 # In setuptools 60.7.1, the "Lorem Ipsum.txt" data file was dropped from the vendored jaraco.text package, so we can
51 # accommodate it with couple of hidden imports.
52 elif check_requirement("setuptools >= 60.7.1"):
53 hiddenimports += [
54 'pkg_resources._vendor.jaraco.functools',
55 'pkg_resources._vendor.jaraco.context',
56 'pkg_resources._vendor.jaraco.text',
57 ]
58
[end of PyInstaller/hooks/hook-pkg_resources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PyInstaller/hooks/hook-pkg_resources.py b/PyInstaller/hooks/hook-pkg_resources.py
--- a/PyInstaller/hooks/hook-pkg_resources.py
+++ b/PyInstaller/hooks/hook-pkg_resources.py
@@ -55,3 +55,9 @@
'pkg_resources._vendor.jaraco.context',
'pkg_resources._vendor.jaraco.text',
]
+
+# As of setuptools 70.0.0, we need pkg_resources.extern added to hidden imports.
+if check_requirement("setuptools >= 70.0.0"):
+ hiddenimports += [
+ 'pkg_resources.extern',
+ ]
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-pkg_resources.py b/PyInstaller/hooks/hook-pkg_resources.py\n--- a/PyInstaller/hooks/hook-pkg_resources.py\n+++ b/PyInstaller/hooks/hook-pkg_resources.py\n@@ -55,3 +55,9 @@\n 'pkg_resources._vendor.jaraco.context',\n 'pkg_resources._vendor.jaraco.text',\n ]\n+\n+# As of setuptools 70.0.0, we need pkg_resources.extern added to hidden imports.\n+if check_requirement(\"setuptools >= 70.0.0\"):\n+ hiddenimports += [\n+ 'pkg_resources.extern',\n+ ]\n", "issue": "with setuptools v70.0.0: `ModuleNotFoundError: No module named 'pkg_resources.extern'`\n<!--\r\nWelcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:\r\n\r\n1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller\r\n - For questions and general support, use the discussions forum.\r\n2. Use the search function before creating a new issue. Duplicates will be closed and directed to\r\n the original discussion.\r\n3. When making a bug report, make sure you provide all required information. The easier it is for\r\n maintainers to reproduce, the faster it'll be fixed.\r\n-->\r\n\r\n<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->\r\n\r\n## Description of the issue\r\n\r\nI have added some TODO notes below but wanted to submit this sooner than later for any other users running into this issue today to be able to find it.\r\n\r\nThis morning I noticed an error in my tests that exercise PyInstaller generated Windows .exe's in CI.\r\n\r\nhttps://github.com/Chia-Network/chia-blockchain/actions/runs/9175546125/job/25229722015?pr=16898\r\n```\r\nTraceback (most recent call last):\r\n File \"Lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgres.py\", line 158, in <module>\r\n File \"Lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_pkgres.py\", line 36, in _pyi_rthook\r\n File \"PyInstaller\\loader\\pyimod02_importers.py\", line 419, in exec_module\r\n File \"pkg_resources\\__init__.py\", line [7](https://github.com/Chia-Network/chia-blockchain/actions/runs/9175546125/job/25229722015?pr=16898#step:6:8)7, in <module>\r\nModuleNotFoundError: No module named 'pkg_resources.extern'\r\n[2148] Failed to execute script 'pyi_rth_pkgres' due to unhandled exception!\r\n```\r\n\r\nFirst I correlated this with [the release of setuptools v70.0.0](https://pypi.org/project/setuptools/70.0.0/#history) a few hours earlier (and not a new PyInstaller release `:]`). After looking here and finding no issues reported I checked over at setuptools and found https://github.com/pypa/setuptools/issues/4374. In that discussion I noted that the issue appears with https://github.com/pypa/setuptools/commit/e9995828311c5e0c843622ca2be85e7f09f1ff0d and not its parent commit. That commit does indeed change how some of the `pkg_resources.extern` imports are handled inside `pkg_resources`. Another developer provided an example, though that example has not yet resulted in recreation of the issue.\r\n\r\n### Context information (for bug reports)\r\n\r\n* Output of `pyinstaller --version`: ```(paste here)``` (TODO: add this to my CI run)\r\n* Version of Python: 3.10\r\n* Platform: Windows (GitHub Actions runner)\r\n* How you installed Python: in-house action https://github.com/chia-network/actions/setup-python that should, in this case, pass through to upstream https://github.com/actions/setup-python\r\n* Did you also try this on another platform? 
Does it work there?\r\n\r\n~Similar Linux (Rocky and Ubuntu) builds as well as macOS (Intel and ARM) builds and tests seem to continue to work fine. I could afford to review these runs in more detail for other relevant changes, but have not quite yet.~\r\n\r\nWhen forcing the setuptools to be installed, this does happen on all of the platforms we build executables for (Rocky, Ubuntu (Intel and ARM), macOS (Intel and ARM), Windows)\r\n\r\n* try the latest development version, using the following command:\r\n\r\nhttps://github.com/Chia-Network/chia-blockchain/actions/runs/9179289212/job/25241848658?pr=18051 shows the failure using `develop`, specifically 676584885f2dfa1f885ab6155a5eda9150892c03.\r\n\r\n* follow *all* the instructions in our \"If Things Go Wrong\" Guide\r\n (https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and\r\n\r\n### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)\r\n\r\n * [x] start with clean installation\r\n * [ ] use the latest development version\r\n * [x] Run your frozen program **from a command window (shell)** \u2014 instead of double-clicking on it\r\n * [ ] Package your program in **--onedir mode**\r\n * [ ] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file\r\n * [ ] Repackage you application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.\r\n\r\n\r\n### A minimal example program which shows the error\r\n\r\nTODO: I will try to create this\r\n\r\n```\r\n(paste text here)\r\n\u201cMinimal\u201c means: remove everything from your code which is not relevant for this bug,\r\nesp. don't use external programs, remote requests, etc.\r\nA very good example is https://gist.github.com/ronen/024cdae9ff2d50488438. This one helped\r\nus reproducing and fixing a quite complex problem within approx 1 hour.\r\n```\r\n\r\n### Stacktrace / full error message\r\n\r\n\r\n```\r\n(paste text here)\r\n```\r\n\r\nPlease also see <https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs>\r\nfor more about what would use to solve the issue.\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2023, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License (version 2\n# or later) with exception for distributing the bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: (GPL-2.0-or-later WITH Bootloader-exception)\n#-----------------------------------------------------------------------------\n\nfrom PyInstaller.utils.hooks import collect_submodules, check_requirement, can_import_module\n\n# pkg_resources keeps vendored modules in its _vendor subpackage, and does sys.meta_path based import magic to expose\n# them as pkg_resources.extern.*\n\n# The `railroad` package is an optional requirement for `pyparsing`. 
`pyparsing.diagrams` depends on `railroad`, so\n# filter it out when `railroad` is not available.\nif can_import_module('railroad'):\n hiddenimports = collect_submodules('pkg_resources._vendor')\nelse:\n hiddenimports = collect_submodules(\n 'pkg_resources._vendor', filter=lambda name: 'pkg_resources._vendor.pyparsing.diagram' not in name\n )\n\n# pkg_resources v45.0 dropped support for Python 2 and added this module printing a warning. We could save some bytes if\n# we would replace this by a fake module.\nif check_requirement('setuptools >= 45.0.0, < 49.1.1'):\n hiddenimports.append('pkg_resources.py2_warn')\n\nexcludedimports = ['__main__']\n\n# Some more hidden imports. See:\n# https://github.com/pyinstaller/pyinstaller-hooks-contrib/issues/15#issuecomment-663699288 `packaging` can either be\n# its own package, or embedded in `pkg_resources._vendor.packaging`, or both.\nhiddenimports += collect_submodules('packaging')\n\n# As of v60.7, setuptools vendored jaraco and has pkg_resources use it. Currently, the pkg_resources._vendor.jaraco\n# namespace package cannot be automatically scanned due to limited support for pure namespace packages in our hook\n# utilities.\n#\n# In setuptools 60.7.0, the vendored jaraco.text package included \"Lorem Ipsum.txt\" data file, which also has to be\n# collected. However, the presence of the data file (and the resulting directory hierarchy) confuses the importer's\n# redirection logic; instead of trying to work-around that, tell user to upgrade or downgrade their setuptools.\nif check_requirement(\"setuptools == 60.7.0\"):\n raise SystemExit(\n \"ERROR: Setuptools 60.7.0 is incompatible with PyInstaller. \"\n \"Downgrade to an earlier version or upgrade to a later version.\"\n )\n# In setuptools 60.7.1, the \"Lorem Ipsum.txt\" data file was dropped from the vendored jaraco.text package, so we can\n# accommodate it with couple of hidden imports.\nelif check_requirement(\"setuptools >= 60.7.1\"):\n hiddenimports += [\n 'pkg_resources._vendor.jaraco.functools',\n 'pkg_resources._vendor.jaraco.context',\n 'pkg_resources._vendor.jaraco.text',\n ]\n", "path": "PyInstaller/hooks/hook-pkg_resources.py"}]}
| 2,713 | 142 |
gh_patches_debug_18201
|
rasdani/github-patches
|
git_diff
|
microsoft__torchgeo-539
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs failing due to pytorch-lightning deprecation
Docs are failing due to the recent deprecation of `core/lightning.py` in favor of `core/module.py` in pytorch-lightning. Refer PyTorchLightning/pytorch-lightning#12740.
</issue>
<code>
[start of docs/conf.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 # Configuration file for the Sphinx documentation builder.
5 #
6 # This file only contains a selection of the most common options. For a full
7 # list see the documentation:
8 # https://www.sphinx-doc.org/en/master/usage/configuration.html
9
10 # -- Path setup --------------------------------------------------------------
11
12 import os
13 import sys
14
15 import pytorch_sphinx_theme
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20 sys.path.insert(0, os.path.abspath(".."))
21
22 import torchgeo # noqa: E402
23
24 # -- Project information -----------------------------------------------------
25
26 project = "torchgeo"
27 copyright = "2021, Microsoft Corporation"
28 author = torchgeo.__author__
29 version = ".".join(torchgeo.__version__.split(".")[:2])
30 release = torchgeo.__version__
31
32
33 # -- General configuration ---------------------------------------------------
34
35 # Add any Sphinx extension module names here, as strings. They can be
36 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
37 # ones.
38 extensions = [
39 "sphinx.ext.autodoc",
40 "sphinx.ext.intersphinx",
41 "sphinx.ext.mathjax",
42 "sphinx.ext.napoleon",
43 "sphinx.ext.todo",
44 "sphinx.ext.viewcode",
45 "nbsphinx",
46 ]
47
48 # List of patterns, relative to source directory, that match files and
49 # directories to ignore when looking for source files.
50 # This pattern also affects html_static_path and html_extra_path.
51 exclude_patterns = ["_build"]
52
53 # Sphinx 3.0+ required for:
54 # autodoc_typehints_description_target = "documented"
55 needs_sphinx = "4.0"
56
57 nitpicky = True
58 nitpick_ignore = [
59 # https://github.com/sphinx-doc/sphinx/issues/8127
60 ("py:class", ".."),
61 # TODO: can't figure out why this isn't found
62 ("py:class", "LightningDataModule"),
63 # Undocumented class
64 ("py:class", "torchvision.models.resnet.ResNet"),
65 ("py:class", "segmentation_models_pytorch.base.model.SegmentationModel"),
66 ]
67
68
69 # -- Options for HTML output -------------------------------------------------
70
71 # The theme to use for HTML and HTML Help pages. See the documentation for
72 # a list of builtin themes.
73 html_theme = "pytorch_sphinx_theme"
74 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
75
76 # Theme options are theme-specific and customize the look and feel of a theme
77 # further. For a list of options available for each theme, see the
78 # documentation.
79 html_theme_options = {
80 "collapse_navigation": False,
81 "display_version": True,
82 "logo_only": True,
83 "pytorch_project": "docs",
84 "navigation_with_keys": True,
85 "analytics_id": "UA-209075005-1",
86 }
87
88 html_favicon = os.path.join("..", "logo", "favicon.ico")
89
90 html_static_path = ["_static"]
91 html_css_files = ["workaround.css"]
92
93 # -- Extension configuration -------------------------------------------------
94
95 # sphinx.ext.autodoc
96 autodoc_default_options = {
97 "members": True,
98 "special-members": True,
99 "show-inheritance": True,
100 }
101 autodoc_member_order = "bysource"
102 autodoc_typehints = "description"
103 autodoc_typehints_description_target = "documented"
104
105 # sphinx.ext.intersphinx
106 intersphinx_mapping = {
107 "matplotlib": ("https://matplotlib.org/stable/", None),
108 "numpy": ("https://numpy.org/doc/stable/", None),
109 "python": ("https://docs.python.org/3", None),
110 "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/latest/", None),
111 "rasterio": ("https://rasterio.readthedocs.io/en/latest/", None),
112 "rtree": ("https://rtree.readthedocs.io/en/latest/", None),
113 "segmentation_models_pytorch": ("https://smp.readthedocs.io/en/latest/", None),
114 "torch": ("https://pytorch.org/docs/stable", None),
115 "torchvision": ("https://pytorch.org/vision/stable", None),
116 }
117
118 # nbsphinx
119 nbsphinx_execute = "never"
120 # TODO: branch/tag should change depending on which version of docs you look at
121 # TODO: width option of image directive is broken, see:
122 # https://github.com/pytorch/pytorch_sphinx_theme/issues/140
123 nbsphinx_prolog = """
124 {% set host = "https://colab.research.google.com" %}
125 {% set repo = "microsoft/torchgeo" %}
126 {% set urlpath = "docs/" ~ env.docname ~ ".ipynb" %}
127 {% if "dev" in env.config.release %}
128 {% set branch = "main" %}
129 {% else %}
130 {% set branch = "releases/v" ~ env.config.version %}
131 {% endif %}
132
133 .. image:: {{ host }}/assets/colab-badge.svg
134 :class: colabbadge
135 :alt: Open in Colab
136 :target: {{ host }}/github/{{ repo }}/blob/{{ branch }}/{{ urlpath }}
137
138 {% set host = "https://pccompute.westeurope.cloudapp.azure.com" %}
139 {% set host = host ~ "/compute/hub/user-redirect/git-pull" %}
140 {% set repo = "https%3A%2F%2Fgithub.com%2Fmicrosoft%2Ftorchgeo" %}
141 {% set urlpath = "tree%2Ftorchgeo%2Fdocs%2F" %}
142 {% set urlpath = urlpath ~ env.docname | replace("/", "%2F") ~ ".ipynb" %}
143 {% if "dev" in env.config.release %}
144 {% set branch = "main" %}
145 {% else %}
146 {% set branch = "releases%2Fv" ~ env.config.version %}
147 {% endif %}
148
149 .. image:: https://img.shields.io/badge/-Open%20on%20Planetary%20Computer-blue
150 :class: colabbadge
151 :alt: Open on Planetary Computer
152 :target: {{ host }}?repo={{ repo }}&urlpath={{ urlpath }}&branch={{ branch }}
153 """
154
155 # Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme
156 # See more information here https://github.com/spatialaudio/nbsphinx/issues/599
157 # NOTE: This will likely break nbsphinx widgets
158 nbsphinx_requirejs_path = ""
159
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -107,10 +107,10 @@
"matplotlib": ("https://matplotlib.org/stable/", None),
"numpy": ("https://numpy.org/doc/stable/", None),
"python": ("https://docs.python.org/3", None),
- "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/latest/", None),
- "rasterio": ("https://rasterio.readthedocs.io/en/latest/", None),
- "rtree": ("https://rtree.readthedocs.io/en/latest/", None),
- "segmentation_models_pytorch": ("https://smp.readthedocs.io/en/latest/", None),
+ "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/stable/", None),
+ "rasterio": ("https://rasterio.readthedocs.io/en/stable/", None),
+ "rtree": ("https://rtree.readthedocs.io/en/stable/", None),
+ "segmentation_models_pytorch": ("https://smp.readthedocs.io/en/stable/", None),
"torch": ("https://pytorch.org/docs/stable", None),
"torchvision": ("https://pytorch.org/vision/stable", None),
}
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -107,10 +107,10 @@\n \"matplotlib\": (\"https://matplotlib.org/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"python\": (\"https://docs.python.org/3\", None),\n- \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n- \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n- \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n- \"segmentation_models_pytorch\": (\"https://smp.readthedocs.io/en/latest/\", None),\n+ \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/stable/\", None),\n+ \"rasterio\": (\"https://rasterio.readthedocs.io/en/stable/\", None),\n+ \"rtree\": (\"https://rtree.readthedocs.io/en/stable/\", None),\n+ \"segmentation_models_pytorch\": (\"https://smp.readthedocs.io/en/stable/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n \"torchvision\": (\"https://pytorch.org/vision/stable\", None),\n }\n", "issue": "Docs failing due to pytorch-lightning deprecation\nDocs are failing due to the recent deprecation of `core/lightning.py` in favor of `core/module.py` in pytorch-lightning. Refer PyTorchLightning/pytorch-lightning#12740. \r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = torchgeo.__author__\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"nbsphinx\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\n# Sphinx 3.0+ required for:\n# autodoc_typehints_description_target = \"documented\"\nneeds_sphinx = \"4.0\"\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n # Undocumented class\n (\"py:class\", \"torchvision.models.resnet.ResNet\"),\n (\"py:class\", \"segmentation_models_pytorch.base.model.SegmentationModel\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-209075005-1\",\n}\n\nhtml_favicon = os.path.join(\"..\", \"logo\", \"favicon.ico\")\n\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"workaround.css\"]\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\nautodoc_typehints_description_target = \"documented\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"matplotlib\": (\"https://matplotlib.org/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"segmentation_models_pytorch\": (\"https://smp.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n \"torchvision\": (\"https://pytorch.org/vision/stable\", None),\n}\n\n# nbsphinx\nnbsphinx_execute = \"never\"\n# TODO: branch/tag should change depending on which version of docs you look at\n# TODO: width option of image directive is broken, see:\n# https://github.com/pytorch/pytorch_sphinx_theme/issues/140\nnbsphinx_prolog = \"\"\"\n{% set host = \"https://colab.research.google.com\" %}\n{% set repo = \"microsoft/torchgeo\" %}\n{% set urlpath = \"docs/\" ~ env.docname ~ \".ipynb\" %}\n{% if \"dev\" in env.config.release %}\n {% set branch = \"main\" %}\n{% else %}\n {% set branch = \"releases/v\" ~ env.config.version %}\n{% endif %}\n\n.. 
image:: {{ host }}/assets/colab-badge.svg\n :class: colabbadge\n :alt: Open in Colab\n :target: {{ host }}/github/{{ repo }}/blob/{{ branch }}/{{ urlpath }}\n\n{% set host = \"https://pccompute.westeurope.cloudapp.azure.com\" %}\n{% set host = host ~ \"/compute/hub/user-redirect/git-pull\" %}\n{% set repo = \"https%3A%2F%2Fgithub.com%2Fmicrosoft%2Ftorchgeo\" %}\n{% set urlpath = \"tree%2Ftorchgeo%2Fdocs%2F\" %}\n{% set urlpath = urlpath ~ env.docname | replace(\"/\", \"%2F\") ~ \".ipynb\" %}\n{% if \"dev\" in env.config.release %}\n {% set branch = \"main\" %}\n{% else %}\n {% set branch = \"releases%2Fv\" ~ env.config.version %}\n{% endif %}\n\n.. image:: https://img.shields.io/badge/-Open%20on%20Planetary%20Computer-blue\n :class: colabbadge\n :alt: Open on Planetary Computer\n :target: {{ host }}?repo={{ repo }}&urlpath={{ urlpath }}&branch={{ branch }}\n\"\"\"\n\n# Disables requirejs in nbsphinx to enable compatibility with the pytorch_sphinx_theme\n# See more information here https://github.com/spatialaudio/nbsphinx/issues/599\n# NOTE: This will likely break nbsphinx widgets\nnbsphinx_requirejs_path = \"\"\n", "path": "docs/conf.py"}]}
| 2,415 | 301 |
gh_patches_debug_9578
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-3623
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
0.5.9 Gamescope should default to fullscreen
Currently it defaults to a Gamescope window when the game is launched.
The Gamescope documentation states that the argument `-f` needs to be passed for the window to start in fullscreen
</issue>
<code>
[start of lutris/runner_interpreter.py]
1 """Transform runner parameters to data usable for runtime execution"""
2 import os
3 import shlex
4 import stat
5
6 from lutris.util import system
7 from lutris.util.linux import LINUX_SYSTEM
8 from lutris.util.log import logger
9
10
11 def get_mangohud_conf(system_config):
12 """Return correct launch arguments and environment variables for Mangohud."""
13 env = {"MANGOHUD": "1"}
14 mango_args = []
15 mangohud = system_config.get("mangohud") or ""
16 if mangohud and system.find_executable("mangohud"):
17 if mangohud == "gl64":
18 mango_args = ["mangohud"]
19 env["MANGOHUD_DLSYM"] = "1"
20 elif mangohud == "gl32":
21 mango_args = ["mangohud.x86"]
22 env["MANGOHUD_DLSYM"] = "1"
23 else:
24 mango_args = ["mangohud"]
25 return mango_args, env
26
27
28 def get_launch_parameters(runner, gameplay_info):
29 system_config = runner.system_config
30 launch_arguments = gameplay_info["command"]
31 env = {
32 "DISABLE_LAYER_AMD_SWITCHABLE_GRAPHICS_1": "1"
33 }
34
35 # Steam compatibility
36 if os.environ.get("SteamAppId"):
37 logger.info("Game launched from steam (AppId: %s)", os.environ["SteamAppId"])
38 env["LC_ALL"] = ""
39
40 # Optimus
41 optimus = system_config.get("optimus")
42 if optimus == "primusrun" and system.find_executable("primusrun"):
43 launch_arguments.insert(0, "primusrun")
44 elif optimus == "optirun" and system.find_executable("optirun"):
45 launch_arguments.insert(0, "virtualgl")
46 launch_arguments.insert(0, "-b")
47 launch_arguments.insert(0, "optirun")
48 elif optimus == "pvkrun" and system.find_executable("pvkrun"):
49 launch_arguments.insert(0, "pvkrun")
50
51 mango_args, mango_env = get_mangohud_conf(system_config)
52 if mango_args:
53 launch_arguments = mango_args + launch_arguments
54 env.update(mango_env)
55
56 # Libstrangle
57 fps_limit = system_config.get("fps_limit") or ""
58 if fps_limit:
59 strangle_cmd = system.find_executable("strangle")
60 if strangle_cmd:
61 launch_arguments = [strangle_cmd, fps_limit] + launch_arguments
62 else:
63 logger.warning("libstrangle is not available on this system, FPS limiter disabled")
64
65 prefix_command = system_config.get("prefix_command") or ""
66 if prefix_command:
67 launch_arguments = (shlex.split(os.path.expandvars(prefix_command)) + launch_arguments)
68
69 single_cpu = system_config.get("single_cpu") or False
70 if single_cpu:
71 logger.info("The game will run on a single CPU core")
72 launch_arguments.insert(0, "0")
73 launch_arguments.insert(0, "-c")
74 launch_arguments.insert(0, "taskset")
75
76 env.update(runner.get_env())
77
78 env.update(gameplay_info.get("env") or {})
79
80 # Set environment variables dependent on gameplay info
81
82 # LD_PRELOAD
83 ld_preload = gameplay_info.get("ld_preload")
84 if ld_preload:
85 env["LD_PRELOAD"] = ld_preload
86
87 # LD_LIBRARY_PATH
88 game_ld_libary_path = gameplay_info.get("ld_library_path")
89 if game_ld_libary_path:
90 ld_library_path = env.get("LD_LIBRARY_PATH")
91 if not ld_library_path:
92 ld_library_path = "$LD_LIBRARY_PATH"
93 env["LD_LIBRARY_PATH"] = ":".join([game_ld_libary_path, ld_library_path])
94
95 # Feral gamemode
96 gamemode = system_config.get("gamemode") and LINUX_SYSTEM.gamemode_available()
97 if gamemode:
98 launch_arguments.insert(0, "gamemoderun")
99
100 # Gamescope
101 gamescope = system_config.get("gamescope") and system.find_executable("gamescope")
102 if gamescope:
103 launch_arguments = get_gamescope_args(launch_arguments, system_config)
104
105 return launch_arguments, env
106
107
108 def get_gamescope_args(launch_arguments, system_config):
109 """Insert gamescope at the start of the launch arguments"""
110 launch_arguments.insert(0, "--")
111 if system_config.get("gamescope_output_res"):
112 output_width, output_height = system_config["gamescope_output_res"].lower().split("x")
113 launch_arguments.insert(0, output_height)
114 launch_arguments.insert(0, "-H")
115 launch_arguments.insert(0, output_width)
116 launch_arguments.insert(0, "-W")
117 if system_config.get("gamescope_game_res"):
118 game_width, game_height = system_config["gamescope_game_res"].lower().split("x")
119 launch_arguments.insert(0, game_height)
120 launch_arguments.insert(0, "-h")
121 launch_arguments.insert(0, game_width)
122 launch_arguments.insert(0, "-w")
123 launch_arguments.insert(0, "gamescope")
124 return launch_arguments
125
126
127 def export_bash_script(runner, gameplay_info, script_path):
128 """Convert runner configuration into a bash script"""
129 command, env = get_launch_parameters(runner, gameplay_info)
130 # Override TERM otherwise the script might not run
131 env["TERM"] = "xterm"
132 script_content = "#!/bin/bash\n\n\n"
133 script_content += "# Environment variables\n"
134 for env_var in env:
135 script_content += "export %s=\"%s\"\n" % (env_var, env[env_var])
136 script_content += "\n# Command\n"
137 script_content += " ".join([shlex.quote(c) for c in command])
138 with open(script_path, "w") as script_file:
139 script_file.write(script_content)
140
141 os.chmod(script_path, os.stat(script_path).st_mode | stat.S_IEXEC)
142
[end of lutris/runner_interpreter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/runner_interpreter.py b/lutris/runner_interpreter.py
--- a/lutris/runner_interpreter.py
+++ b/lutris/runner_interpreter.py
@@ -107,7 +107,7 @@
def get_gamescope_args(launch_arguments, system_config):
"""Insert gamescope at the start of the launch arguments"""
- launch_arguments.insert(0, "--")
+ launch_arguments.insert(0, "-f --")
if system_config.get("gamescope_output_res"):
output_width, output_height = system_config["gamescope_output_res"].lower().split("x")
launch_arguments.insert(0, output_height)
|
{"golden_diff": "diff --git a/lutris/runner_interpreter.py b/lutris/runner_interpreter.py\n--- a/lutris/runner_interpreter.py\n+++ b/lutris/runner_interpreter.py\n@@ -107,7 +107,7 @@\n \n def get_gamescope_args(launch_arguments, system_config):\n \"\"\"Insert gamescope at the start of the launch arguments\"\"\"\n- launch_arguments.insert(0, \"--\")\n+ launch_arguments.insert(0, \"-f --\")\n if system_config.get(\"gamescope_output_res\"):\n output_width, output_height = system_config[\"gamescope_output_res\"].lower().split(\"x\")\n launch_arguments.insert(0, output_height)\n", "issue": "0.5.9 Gamescope should default to fullscreen\nCurrently it defaults to a Gamescope window when the game is launched.\r\n\r\nThe Gamescope documentation states that the argument `-f` needs to be passed for the window to start in fullscreen\n", "before_files": [{"content": "\"\"\"Transform runner parameters to data usable for runtime execution\"\"\"\nimport os\nimport shlex\nimport stat\n\nfrom lutris.util import system\nfrom lutris.util.linux import LINUX_SYSTEM\nfrom lutris.util.log import logger\n\n\ndef get_mangohud_conf(system_config):\n \"\"\"Return correct launch arguments and environment variables for Mangohud.\"\"\"\n env = {\"MANGOHUD\": \"1\"}\n mango_args = []\n mangohud = system_config.get(\"mangohud\") or \"\"\n if mangohud and system.find_executable(\"mangohud\"):\n if mangohud == \"gl64\":\n mango_args = [\"mangohud\"]\n env[\"MANGOHUD_DLSYM\"] = \"1\"\n elif mangohud == \"gl32\":\n mango_args = [\"mangohud.x86\"]\n env[\"MANGOHUD_DLSYM\"] = \"1\"\n else:\n mango_args = [\"mangohud\"]\n return mango_args, env\n\n\ndef get_launch_parameters(runner, gameplay_info):\n system_config = runner.system_config\n launch_arguments = gameplay_info[\"command\"]\n env = {\n \"DISABLE_LAYER_AMD_SWITCHABLE_GRAPHICS_1\": \"1\"\n }\n\n # Steam compatibility\n if os.environ.get(\"SteamAppId\"):\n logger.info(\"Game launched from steam (AppId: %s)\", os.environ[\"SteamAppId\"])\n env[\"LC_ALL\"] = \"\"\n\n # Optimus\n optimus = system_config.get(\"optimus\")\n if optimus == \"primusrun\" and system.find_executable(\"primusrun\"):\n launch_arguments.insert(0, \"primusrun\")\n elif optimus == \"optirun\" and system.find_executable(\"optirun\"):\n launch_arguments.insert(0, \"virtualgl\")\n launch_arguments.insert(0, \"-b\")\n launch_arguments.insert(0, \"optirun\")\n elif optimus == \"pvkrun\" and system.find_executable(\"pvkrun\"):\n launch_arguments.insert(0, \"pvkrun\")\n\n mango_args, mango_env = get_mangohud_conf(system_config)\n if mango_args:\n launch_arguments = mango_args + launch_arguments\n env.update(mango_env)\n\n # Libstrangle\n fps_limit = system_config.get(\"fps_limit\") or \"\"\n if fps_limit:\n strangle_cmd = system.find_executable(\"strangle\")\n if strangle_cmd:\n launch_arguments = [strangle_cmd, fps_limit] + launch_arguments\n else:\n logger.warning(\"libstrangle is not available on this system, FPS limiter disabled\")\n\n prefix_command = system_config.get(\"prefix_command\") or \"\"\n if prefix_command:\n launch_arguments = (shlex.split(os.path.expandvars(prefix_command)) + launch_arguments)\n\n single_cpu = system_config.get(\"single_cpu\") or False\n if single_cpu:\n logger.info(\"The game will run on a single CPU core\")\n launch_arguments.insert(0, \"0\")\n launch_arguments.insert(0, \"-c\")\n launch_arguments.insert(0, \"taskset\")\n\n env.update(runner.get_env())\n\n env.update(gameplay_info.get(\"env\") or {})\n\n # Set environment variables dependent on gameplay info\n\n # 
LD_PRELOAD\n ld_preload = gameplay_info.get(\"ld_preload\")\n if ld_preload:\n env[\"LD_PRELOAD\"] = ld_preload\n\n # LD_LIBRARY_PATH\n game_ld_libary_path = gameplay_info.get(\"ld_library_path\")\n if game_ld_libary_path:\n ld_library_path = env.get(\"LD_LIBRARY_PATH\")\n if not ld_library_path:\n ld_library_path = \"$LD_LIBRARY_PATH\"\n env[\"LD_LIBRARY_PATH\"] = \":\".join([game_ld_libary_path, ld_library_path])\n\n # Feral gamemode\n gamemode = system_config.get(\"gamemode\") and LINUX_SYSTEM.gamemode_available()\n if gamemode:\n launch_arguments.insert(0, \"gamemoderun\")\n\n # Gamescope\n gamescope = system_config.get(\"gamescope\") and system.find_executable(\"gamescope\")\n if gamescope:\n launch_arguments = get_gamescope_args(launch_arguments, system_config)\n\n return launch_arguments, env\n\n\ndef get_gamescope_args(launch_arguments, system_config):\n \"\"\"Insert gamescope at the start of the launch arguments\"\"\"\n launch_arguments.insert(0, \"--\")\n if system_config.get(\"gamescope_output_res\"):\n output_width, output_height = system_config[\"gamescope_output_res\"].lower().split(\"x\")\n launch_arguments.insert(0, output_height)\n launch_arguments.insert(0, \"-H\")\n launch_arguments.insert(0, output_width)\n launch_arguments.insert(0, \"-W\")\n if system_config.get(\"gamescope_game_res\"):\n game_width, game_height = system_config[\"gamescope_game_res\"].lower().split(\"x\")\n launch_arguments.insert(0, game_height)\n launch_arguments.insert(0, \"-h\")\n launch_arguments.insert(0, game_width)\n launch_arguments.insert(0, \"-w\")\n launch_arguments.insert(0, \"gamescope\")\n return launch_arguments\n\n\ndef export_bash_script(runner, gameplay_info, script_path):\n \"\"\"Convert runner configuration into a bash script\"\"\"\n command, env = get_launch_parameters(runner, gameplay_info)\n # Override TERM otherwise the script might not run\n env[\"TERM\"] = \"xterm\"\n script_content = \"#!/bin/bash\\n\\n\\n\"\n script_content += \"# Environment variables\\n\"\n for env_var in env:\n script_content += \"export %s=\\\"%s\\\"\\n\" % (env_var, env[env_var])\n script_content += \"\\n# Command\\n\"\n script_content += \" \".join([shlex.quote(c) for c in command])\n with open(script_path, \"w\") as script_file:\n script_file.write(script_content)\n\n os.chmod(script_path, os.stat(script_path).st_mode | stat.S_IEXEC)\n", "path": "lutris/runner_interpreter.py"}]}
| 2,202 | 148 |
gh_patches_debug_10736
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-612
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BaseWeightsScalarHandler has side effects on random number generation
### Problem
BaseWeightsScalarHandler uses `torch.rand` to test the reduction function that is passed as an argument.
This is very confusing, because experiments are not reproducible when tensorboard logging is switched on or off, e.g. in debug sessions.
### Solution
Replace the random tensor with a non random tensor.
</issue>
<code>
[start of ignite/contrib/handlers/base_logger.py]
1 from abc import ABCMeta, abstractmethod
2 import numbers
3 import warnings
4
5 import torch
6
7 from ignite.engine import State, Engine
8 from ignite._six import with_metaclass
9
10
11 class BaseLogger(object):
12 """
13 Base logger handler. See implementations: TensorboardLogger, VisdomLogger, PolyaxonLogger
14
15 """
16 def attach(self, engine, log_handler, event_name):
17 """Attach the logger to the engine and execute `log_handler` function at `event_name` events.
18
19 Args:
20 engine (Engine): engine object.
21 log_handler (callable): a logging handler to execute
22 event_name: event to attach the logging handler to. Valid events are from :class:`~ignite.engine.Events`
23 or any `event_name` added by :meth:`~ignite.engine.Engine.register_events`.
24
25 """
26 if event_name not in State.event_to_attr:
27 raise RuntimeError("Unknown event name '{}'".format(event_name))
28
29 engine.add_event_handler(event_name, log_handler, self, event_name)
30
31 def __enter__(self):
32 return self
33
34 def __exit__(self, type, value, traceback):
35 self.close()
36
37 def close(self):
38 pass
39
40
41 class BaseHandler(with_metaclass(ABCMeta, object)):
42
43 @abstractmethod
44 def __call__(self, *args, **kwargs):
45 pass
46
47
48 class BaseOptimizerParamsHandler(BaseHandler):
49 """
50 Base handler for logging optimizer parameters
51 """
52
53 def __init__(self, optimizer, param_name="lr", tag=None):
54 if not isinstance(optimizer, torch.optim.Optimizer):
55 raise TypeError("Argument optimizer should be of type torch.optim.Optimizer, "
56 "but given {}".format(type(optimizer)))
57
58 self.optimizer = optimizer
59 self.param_name = param_name
60 self.tag = tag
61
62
63 class BaseOutputHandler(BaseHandler):
64 """
65 Helper handler to log engine's output and/or metrics
66 """
67
68 def __init__(self, tag, metric_names=None, output_transform=None, another_engine=None, global_step_transform=None):
69
70 if metric_names is not None:
71 if not (isinstance(metric_names, list) or (isinstance(metric_names, str) and metric_names == "all")):
72 raise TypeError("metric_names should be either a list or equal 'all', "
73 "got {} instead.".format(type(metric_names)))
74
75 if output_transform is not None and not callable(output_transform):
76 raise TypeError("output_transform should be a function, got {} instead."
77 .format(type(output_transform)))
78
79 if output_transform is None and metric_names is None:
80 raise ValueError("Either metric_names or output_transform should be defined")
81
82 if another_engine is not None:
83 if not isinstance(another_engine, Engine):
84 raise TypeError("Argument another_engine should be of type Engine, "
85 "but given {}".format(type(another_engine)))
86 warnings.warn("Use of another_engine is deprecated and will be removed in 0.2.1. "
87 "Please use global_step_transform instead.", DeprecationWarning)
88
89 if global_step_transform is not None and not callable(global_step_transform):
90 raise TypeError("global_step_transform should be a function, got {} instead."
91 .format(type(global_step_transform)))
92
93 if global_step_transform is None:
94 def global_step_transform(engine, event_name):
95 return engine.state.get_event_attrib_value(event_name)
96
97 self.tag = tag
98 self.metric_names = metric_names
99 self.output_transform = output_transform
100 self.another_engine = another_engine
101 self.global_step_transform = global_step_transform
102
103 def _setup_output_metrics(self, engine):
104 """Helper method to setup metrics to log
105 """
106 metrics = {}
107 if self.metric_names is not None:
108 if isinstance(self.metric_names, str) and self.metric_names == "all":
109 metrics = engine.state.metrics
110 else:
111 for name in self.metric_names:
112 if name not in engine.state.metrics:
113 warnings.warn("Provided metric name '{}' is missing "
114 "in engine's state metrics: {}".format(name, list(engine.state.metrics.keys())))
115 continue
116 metrics[name] = engine.state.metrics[name]
117
118 if self.output_transform is not None:
119 output_dict = self.output_transform(engine.state.output)
120
121 if not isinstance(output_dict, dict):
122 output_dict = {"output": output_dict}
123
124 metrics.update({name: value for name, value in output_dict.items()})
125 return metrics
126
127
128 class BaseWeightsScalarHandler(BaseHandler):
129 """
130 Helper handler to log model's weights as scalars.
131 """
132
133 def __init__(self, model, reduction=torch.norm, tag=None):
134 if not isinstance(model, torch.nn.Module):
135 raise TypeError("Argument model should be of type torch.nn.Module, "
136 "but given {}".format(type(model)))
137
138 if not callable(reduction):
139 raise TypeError("Argument reduction should be callable, "
140 "but given {}".format(type(reduction)))
141
142 def _is_0D_tensor(t):
143 return isinstance(t, torch.Tensor) and t.ndimension() == 0
144
145 # Test reduction function on a random tensor
146 o = reduction(torch.rand(4, 2))
147 if not (isinstance(o, numbers.Number) or _is_0D_tensor(o)):
148 raise ValueError("Output of the reduction function should be a scalar, but got {}".format(type(o)))
149
150 self.model = model
151 self.reduction = reduction
152 self.tag = tag
153
154
155 class BaseWeightsHistHandler(BaseHandler):
156 """
157 Helper handler to log model's weights as histograms.
158 """
159
160 def __init__(self, model, tag=None):
161 if not isinstance(model, torch.nn.Module):
162 raise TypeError("Argument model should be of type torch.nn.Module, "
163 "but given {}".format(type(model)))
164
165 self.model = model
166 self.tag = tag
167
[end of ignite/contrib/handlers/base_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/contrib/handlers/base_logger.py b/ignite/contrib/handlers/base_logger.py
--- a/ignite/contrib/handlers/base_logger.py
+++ b/ignite/contrib/handlers/base_logger.py
@@ -142,8 +142,8 @@
def _is_0D_tensor(t):
return isinstance(t, torch.Tensor) and t.ndimension() == 0
- # Test reduction function on a random tensor
- o = reduction(torch.rand(4, 2))
+ # Test reduction function on a tensor
+ o = reduction(torch.ones(4, 2))
if not (isinstance(o, numbers.Number) or _is_0D_tensor(o)):
raise ValueError("Output of the reduction function should be a scalar, but got {}".format(type(o)))
|
{"golden_diff": "diff --git a/ignite/contrib/handlers/base_logger.py b/ignite/contrib/handlers/base_logger.py\n--- a/ignite/contrib/handlers/base_logger.py\n+++ b/ignite/contrib/handlers/base_logger.py\n@@ -142,8 +142,8 @@\n def _is_0D_tensor(t):\n return isinstance(t, torch.Tensor) and t.ndimension() == 0\n \n- # Test reduction function on a random tensor\n- o = reduction(torch.rand(4, 2))\n+ # Test reduction function on a tensor\n+ o = reduction(torch.ones(4, 2))\n if not (isinstance(o, numbers.Number) or _is_0D_tensor(o)):\n raise ValueError(\"Output of the reduction function should be a scalar, but got {}\".format(type(o)))\n", "issue": "BaseWeightsScalarHandler has side effects on random number generation\n### Problem\r\nBaseWeightsScalarHandler uses `torch.rand` to test the reduction function that is passed as an argument. \r\nThis is very confusing, because experiments are not reproducible when tensorboard logging is switched on or off, e.g. in debug sessions. \r\n\r\n### Solution\r\nReplace the random tensor with a non random tensor. \r\n\r\n\n", "before_files": [{"content": "from abc import ABCMeta, abstractmethod\nimport numbers\nimport warnings\n\nimport torch\n\nfrom ignite.engine import State, Engine\nfrom ignite._six import with_metaclass\n\n\nclass BaseLogger(object):\n \"\"\"\n Base logger handler. See implementations: TensorboardLogger, VisdomLogger, PolyaxonLogger\n\n \"\"\"\n def attach(self, engine, log_handler, event_name):\n \"\"\"Attach the logger to the engine and execute `log_handler` function at `event_name` events.\n\n Args:\n engine (Engine): engine object.\n log_handler (callable): a logging handler to execute\n event_name: event to attach the logging handler to. Valid events are from :class:`~ignite.engine.Events`\n or any `event_name` added by :meth:`~ignite.engine.Engine.register_events`.\n\n \"\"\"\n if event_name not in State.event_to_attr:\n raise RuntimeError(\"Unknown event name '{}'\".format(event_name))\n\n engine.add_event_handler(event_name, log_handler, self, event_name)\n\n def __enter__(self):\n return self\n\n def __exit__(self, type, value, traceback):\n self.close()\n\n def close(self):\n pass\n\n\nclass BaseHandler(with_metaclass(ABCMeta, object)):\n\n @abstractmethod\n def __call__(self, *args, **kwargs):\n pass\n\n\nclass BaseOptimizerParamsHandler(BaseHandler):\n \"\"\"\n Base handler for logging optimizer parameters\n \"\"\"\n\n def __init__(self, optimizer, param_name=\"lr\", tag=None):\n if not isinstance(optimizer, torch.optim.Optimizer):\n raise TypeError(\"Argument optimizer should be of type torch.optim.Optimizer, \"\n \"but given {}\".format(type(optimizer)))\n\n self.optimizer = optimizer\n self.param_name = param_name\n self.tag = tag\n\n\nclass BaseOutputHandler(BaseHandler):\n \"\"\"\n Helper handler to log engine's output and/or metrics\n \"\"\"\n\n def __init__(self, tag, metric_names=None, output_transform=None, another_engine=None, global_step_transform=None):\n\n if metric_names is not None:\n if not (isinstance(metric_names, list) or (isinstance(metric_names, str) and metric_names == \"all\")):\n raise TypeError(\"metric_names should be either a list or equal 'all', \"\n \"got {} instead.\".format(type(metric_names)))\n\n if output_transform is not None and not callable(output_transform):\n raise TypeError(\"output_transform should be a function, got {} instead.\"\n .format(type(output_transform)))\n\n if output_transform is None and metric_names is None:\n raise ValueError(\"Either metric_names or output_transform should 
be defined\")\n\n if another_engine is not None:\n if not isinstance(another_engine, Engine):\n raise TypeError(\"Argument another_engine should be of type Engine, \"\n \"but given {}\".format(type(another_engine)))\n warnings.warn(\"Use of another_engine is deprecated and will be removed in 0.2.1. \"\n \"Please use global_step_transform instead.\", DeprecationWarning)\n\n if global_step_transform is not None and not callable(global_step_transform):\n raise TypeError(\"global_step_transform should be a function, got {} instead.\"\n .format(type(global_step_transform)))\n\n if global_step_transform is None:\n def global_step_transform(engine, event_name):\n return engine.state.get_event_attrib_value(event_name)\n\n self.tag = tag\n self.metric_names = metric_names\n self.output_transform = output_transform\n self.another_engine = another_engine\n self.global_step_transform = global_step_transform\n\n def _setup_output_metrics(self, engine):\n \"\"\"Helper method to setup metrics to log\n \"\"\"\n metrics = {}\n if self.metric_names is not None:\n if isinstance(self.metric_names, str) and self.metric_names == \"all\":\n metrics = engine.state.metrics\n else:\n for name in self.metric_names:\n if name not in engine.state.metrics:\n warnings.warn(\"Provided metric name '{}' is missing \"\n \"in engine's state metrics: {}\".format(name, list(engine.state.metrics.keys())))\n continue\n metrics[name] = engine.state.metrics[name]\n\n if self.output_transform is not None:\n output_dict = self.output_transform(engine.state.output)\n\n if not isinstance(output_dict, dict):\n output_dict = {\"output\": output_dict}\n\n metrics.update({name: value for name, value in output_dict.items()})\n return metrics\n\n\nclass BaseWeightsScalarHandler(BaseHandler):\n \"\"\"\n Helper handler to log model's weights as scalars.\n \"\"\"\n\n def __init__(self, model, reduction=torch.norm, tag=None):\n if not isinstance(model, torch.nn.Module):\n raise TypeError(\"Argument model should be of type torch.nn.Module, \"\n \"but given {}\".format(type(model)))\n\n if not callable(reduction):\n raise TypeError(\"Argument reduction should be callable, \"\n \"but given {}\".format(type(reduction)))\n\n def _is_0D_tensor(t):\n return isinstance(t, torch.Tensor) and t.ndimension() == 0\n\n # Test reduction function on a random tensor\n o = reduction(torch.rand(4, 2))\n if not (isinstance(o, numbers.Number) or _is_0D_tensor(o)):\n raise ValueError(\"Output of the reduction function should be a scalar, but got {}\".format(type(o)))\n\n self.model = model\n self.reduction = reduction\n self.tag = tag\n\n\nclass BaseWeightsHistHandler(BaseHandler):\n \"\"\"\n Helper handler to log model's weights as histograms.\n \"\"\"\n\n def __init__(self, model, tag=None):\n if not isinstance(model, torch.nn.Module):\n raise TypeError(\"Argument model should be of type torch.nn.Module, \"\n \"but given {}\".format(type(model)))\n\n self.model = model\n self.tag = tag\n", "path": "ignite/contrib/handlers/base_logger.py"}]}
| 2,266 | 181 |
gh_patches_debug_13447
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-419
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[2.9.0] UnicodeEncodeError raised on create_filtered_path
Hi,
so we still use Python2, and got an UnicodeErrorException in create_filtered_path, ligne 53:
```python
return path + "?" + urlencode(filtered_params)
```
filtered_params value:
```python
filtered_params = [[u'search', u'Clos du Chêne, Montévrain, France'],]
```
query_params has the same value
As it is the output from Sentry, I do not know whether the strings were really Unicode as implied by the "u".
</issue>
<code>
[start of src/scout_apm/core/web_requests.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import time
5
6 from scout_apm.compat import datetime_to_timestamp, urlencode
7 from scout_apm.core.config import scout_config
8
9 # Originally derived from:
10 # 1. Rails:
11 # https://github.com/rails/rails/blob/0196551e6039ca864d1eee1e01819fcae12c1dc9/railties/lib/rails/generators/rails/app/templates/config/initializers/filter_parameter_logging.rb.tt # noqa
12 # 2. Sentry server side scrubbing:
13 # https://docs.sentry.io/data-management/sensitive-data/#server-side-scrubbing
14 FILTER_PARAMETERS = frozenset(
15 [
16 "access",
17 "access_token",
18 "api_key",
19 "apikey",
20 "auth",
21 "auth_token",
22 "card[number]",
23 "certificate",
24 "credentials",
25 "crypt",
26 "key",
27 "mysql_pwd",
28 "otp",
29 "passwd",
30 "password",
31 "private",
32 "protected",
33 "salt",
34 "secret",
35 "ssn",
36 "stripetoken",
37 "token",
38 ]
39 )
40
41
42 def create_filtered_path(path, query_params):
43 if scout_config.value("uri_reporting") == "path":
44 return path
45 filtered_params = sorted(
46 (
47 (k, "[FILTERED]" if k.lower() in FILTER_PARAMETERS else v)
48 for k, v in query_params
49 )
50 )
51 if not filtered_params:
52 return path
53 return path + "?" + urlencode(filtered_params)
54
55
56 def ignore_path(path):
57 ignored_paths = scout_config.value("ignore")
58 for ignored in ignored_paths:
59 if path.startswith(ignored):
60 return True
61 return False
62
63
64 def track_request_queue_time(header_value, tracked_request):
65 if header_value.startswith("t="):
66 header_value = header_value[2:]
67
68 try:
69 first_char = header_value[0]
70 except IndexError:
71 return False
72
73 if not first_char.isdigit(): # filter out negatives, nan, inf, etc.
74 return False
75
76 try:
77 ambiguous_start_timestamp = float(header_value)
78 except ValueError:
79 return False
80
81 start_timestamp_ns = convert_ambiguous_timestamp_to_ns(ambiguous_start_timestamp)
82 if start_timestamp_ns == 0.0:
83 return False
84
85 tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9
86
87 # Ignore if in the future
88 if start_timestamp_ns > tr_start_timestamp_ns:
89 return False
90
91 queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)
92 tracked_request.tag("scout.queue_time_ns", queue_time_ns)
93 return True
94
95
96 def track_amazon_request_queue_time(header_value, tracked_request):
97 items = header_value.split(";")
98 found_item = None
99 for item in items:
100 if found_item is None and item.startswith("Root="):
101 found_item = item
102 elif item.startswith("Self="):
103 found_item = item
104
105 if found_item is None:
106 return False
107
108 pieces = found_item.split("-")
109 if len(pieces) != 3:
110 return False
111
112 timestamp_str = pieces[1]
113
114 try:
115 first_char = timestamp_str[0]
116 except IndexError:
117 return False
118
119 if not first_char.isdigit():
120 return False
121
122 try:
123 start_timestamp_ns = int(timestamp_str) * 1000000000.0
124 except ValueError:
125 return False
126
127 if start_timestamp_ns == 0:
128 return False
129
130 tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9
131
132 # Ignore if in the futuren
133 if start_timestamp_ns > tr_start_timestamp_ns:
134 return False
135
136 queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)
137 tracked_request.tag("scout.queue_time_ns", queue_time_ns)
138 return True
139
140
141 # Cutoff epoch is used for determining ambiguous timestamp boundaries, and is
142 # just over 10 years ago at time of writing
143 CUTOFF_EPOCH_S = time.mktime((2009, 6, 1, 0, 0, 0, 0, 0, 0))
144 CUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0
145 CUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0
146 CUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0
147
148
149 def convert_ambiguous_timestamp_to_ns(timestamp):
150 """
151 Convert an ambiguous float timestamp that could be in nanoseconds,
152 microseconds, milliseconds, or seconds to nanoseconds. Return 0.0 for
153 values in the more than 10 years ago.
154 """
155 if timestamp > CUTOFF_EPOCH_NS:
156 converted_timestamp = timestamp
157 elif timestamp > CUTOFF_EPOCH_US:
158 converted_timestamp = timestamp * 1000.0
159 elif timestamp > CUTOFF_EPOCH_MS:
160 converted_timestamp = timestamp * 1000000.0
161 elif timestamp > CUTOFF_EPOCH_S:
162 converted_timestamp = timestamp * 1000000000.0
163 else:
164 return 0.0
165 return converted_timestamp
166
167
168 def werkzeug_track_request_data(werkzeug_request, tracked_request):
169 """
170 Several integrations use Werkzeug requests, so share the code for
171 extracting common data here.
172 """
173 path = werkzeug_request.path
174 tracked_request.tag(
175 "path", create_filtered_path(path, werkzeug_request.args.items(multi=True))
176 )
177 if ignore_path(path):
178 tracked_request.tag("ignore_transaction", True)
179
180 # Determine a remote IP to associate with the request. The value is
181 # spoofable by the requester so this is not suitable to use in any
182 # security sensitive context.
183 user_ip = (
184 werkzeug_request.headers.get("x-forwarded-for", default="").split(",")[0]
185 or werkzeug_request.headers.get("client-ip", default="").split(",")[0]
186 or werkzeug_request.remote_addr
187 )
188 tracked_request.tag("user_ip", user_ip)
189
190 queue_time = werkzeug_request.headers.get(
191 "x-queue-start", default=""
192 ) or werkzeug_request.headers.get("x-request-start", default="")
193 tracked_queue_time = track_request_queue_time(queue_time, tracked_request)
194 if not tracked_queue_time:
195 amazon_queue_time = werkzeug_request.headers.get("x-amzn-trace-id", default="")
196 track_amazon_request_queue_time(amazon_queue_time, tracked_request)
197
[end of src/scout_apm/core/web_requests.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/core/web_requests.py b/src/scout_apm/core/web_requests.py
--- a/src/scout_apm/core/web_requests.py
+++ b/src/scout_apm/core/web_requests.py
@@ -42,10 +42,18 @@
def create_filtered_path(path, query_params):
if scout_config.value("uri_reporting") == "path":
return path
+ # Python 2 unicode compatibility: force all keys and values to bytes
filtered_params = sorted(
(
- (k, "[FILTERED]" if k.lower() in FILTER_PARAMETERS else v)
- for k, v in query_params
+ (
+ key.encode("utf-8"),
+ (
+ b"[FILTERED]"
+ if key.lower() in FILTER_PARAMETERS
+ else value.encode("utf-8")
+ ),
+ )
+ for key, value in query_params
)
)
if not filtered_params:
|
{"golden_diff": "diff --git a/src/scout_apm/core/web_requests.py b/src/scout_apm/core/web_requests.py\n--- a/src/scout_apm/core/web_requests.py\n+++ b/src/scout_apm/core/web_requests.py\n@@ -42,10 +42,18 @@\n def create_filtered_path(path, query_params):\n if scout_config.value(\"uri_reporting\") == \"path\":\n return path\n+ # Python 2 unicode compatibility: force all keys and values to bytes\n filtered_params = sorted(\n (\n- (k, \"[FILTERED]\" if k.lower() in FILTER_PARAMETERS else v)\n- for k, v in query_params\n+ (\n+ key.encode(\"utf-8\"),\n+ (\n+ b\"[FILTERED]\"\n+ if key.lower() in FILTER_PARAMETERS\n+ else value.encode(\"utf-8\")\n+ ),\n+ )\n+ for key, value in query_params\n )\n )\n if not filtered_params:\n", "issue": "[2.9.0] UnicodeEncodeError raised on create_filtered_path\nHi,\r\n\r\nso we still use Python2, and got an UnicodeErrorException in create_filtered_path, ligne 53: \r\n\r\n```python\r\nreturn path + \"?\" + urlencode(filtered_params)\r\n```\r\n\r\nfiltered_params value:\r\n```python\r\nfiltered_params = [[u'search', u'Clos du Ch\u00eane, Mont\u00e9vrain, France'],]\r\n```\r\nquery_params has the same value\r\n\r\nAs it is the output from Sentry, I do not know whether the strings were really Unicode as implied by the \"u\".\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport time\n\nfrom scout_apm.compat import datetime_to_timestamp, urlencode\nfrom scout_apm.core.config import scout_config\n\n# Originally derived from:\n# 1. Rails:\n# https://github.com/rails/rails/blob/0196551e6039ca864d1eee1e01819fcae12c1dc9/railties/lib/rails/generators/rails/app/templates/config/initializers/filter_parameter_logging.rb.tt # noqa\n# 2. Sentry server side scrubbing:\n# https://docs.sentry.io/data-management/sensitive-data/#server-side-scrubbing\nFILTER_PARAMETERS = frozenset(\n [\n \"access\",\n \"access_token\",\n \"api_key\",\n \"apikey\",\n \"auth\",\n \"auth_token\",\n \"card[number]\",\n \"certificate\",\n \"credentials\",\n \"crypt\",\n \"key\",\n \"mysql_pwd\",\n \"otp\",\n \"passwd\",\n \"password\",\n \"private\",\n \"protected\",\n \"salt\",\n \"secret\",\n \"ssn\",\n \"stripetoken\",\n \"token\",\n ]\n)\n\n\ndef create_filtered_path(path, query_params):\n if scout_config.value(\"uri_reporting\") == \"path\":\n return path\n filtered_params = sorted(\n (\n (k, \"[FILTERED]\" if k.lower() in FILTER_PARAMETERS else v)\n for k, v in query_params\n )\n )\n if not filtered_params:\n return path\n return path + \"?\" + urlencode(filtered_params)\n\n\ndef ignore_path(path):\n ignored_paths = scout_config.value(\"ignore\")\n for ignored in ignored_paths:\n if path.startswith(ignored):\n return True\n return False\n\n\ndef track_request_queue_time(header_value, tracked_request):\n if header_value.startswith(\"t=\"):\n header_value = header_value[2:]\n\n try:\n first_char = header_value[0]\n except IndexError:\n return False\n\n if not first_char.isdigit(): # filter out negatives, nan, inf, etc.\n return False\n\n try:\n ambiguous_start_timestamp = float(header_value)\n except ValueError:\n return False\n\n start_timestamp_ns = convert_ambiguous_timestamp_to_ns(ambiguous_start_timestamp)\n if start_timestamp_ns == 0.0:\n return False\n\n tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9\n\n # Ignore if in the future\n if start_timestamp_ns > tr_start_timestamp_ns:\n return False\n\n queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)\n 
tracked_request.tag(\"scout.queue_time_ns\", queue_time_ns)\n return True\n\n\ndef track_amazon_request_queue_time(header_value, tracked_request):\n items = header_value.split(\";\")\n found_item = None\n for item in items:\n if found_item is None and item.startswith(\"Root=\"):\n found_item = item\n elif item.startswith(\"Self=\"):\n found_item = item\n\n if found_item is None:\n return False\n\n pieces = found_item.split(\"-\")\n if len(pieces) != 3:\n return False\n\n timestamp_str = pieces[1]\n\n try:\n first_char = timestamp_str[0]\n except IndexError:\n return False\n\n if not first_char.isdigit():\n return False\n\n try:\n start_timestamp_ns = int(timestamp_str) * 1000000000.0\n except ValueError:\n return False\n\n if start_timestamp_ns == 0:\n return False\n\n tr_start_timestamp_ns = datetime_to_timestamp(tracked_request.start_time) * 1e9\n\n # Ignore if in the futuren\n if start_timestamp_ns > tr_start_timestamp_ns:\n return False\n\n queue_time_ns = int(tr_start_timestamp_ns - start_timestamp_ns)\n tracked_request.tag(\"scout.queue_time_ns\", queue_time_ns)\n return True\n\n\n# Cutoff epoch is used for determining ambiguous timestamp boundaries, and is\n# just over 10 years ago at time of writing\nCUTOFF_EPOCH_S = time.mktime((2009, 6, 1, 0, 0, 0, 0, 0, 0))\nCUTOFF_EPOCH_MS = CUTOFF_EPOCH_S * 1000.0\nCUTOFF_EPOCH_US = CUTOFF_EPOCH_S * 1000000.0\nCUTOFF_EPOCH_NS = CUTOFF_EPOCH_S * 1000000000.0\n\n\ndef convert_ambiguous_timestamp_to_ns(timestamp):\n \"\"\"\n Convert an ambiguous float timestamp that could be in nanoseconds,\n microseconds, milliseconds, or seconds to nanoseconds. Return 0.0 for\n values in the more than 10 years ago.\n \"\"\"\n if timestamp > CUTOFF_EPOCH_NS:\n converted_timestamp = timestamp\n elif timestamp > CUTOFF_EPOCH_US:\n converted_timestamp = timestamp * 1000.0\n elif timestamp > CUTOFF_EPOCH_MS:\n converted_timestamp = timestamp * 1000000.0\n elif timestamp > CUTOFF_EPOCH_S:\n converted_timestamp = timestamp * 1000000000.0\n else:\n return 0.0\n return converted_timestamp\n\n\ndef werkzeug_track_request_data(werkzeug_request, tracked_request):\n \"\"\"\n Several integrations use Werkzeug requests, so share the code for\n extracting common data here.\n \"\"\"\n path = werkzeug_request.path\n tracked_request.tag(\n \"path\", create_filtered_path(path, werkzeug_request.args.items(multi=True))\n )\n if ignore_path(path):\n tracked_request.tag(\"ignore_transaction\", True)\n\n # Determine a remote IP to associate with the request. The value is\n # spoofable by the requester so this is not suitable to use in any\n # security sensitive context.\n user_ip = (\n werkzeug_request.headers.get(\"x-forwarded-for\", default=\"\").split(\",\")[0]\n or werkzeug_request.headers.get(\"client-ip\", default=\"\").split(\",\")[0]\n or werkzeug_request.remote_addr\n )\n tracked_request.tag(\"user_ip\", user_ip)\n\n queue_time = werkzeug_request.headers.get(\n \"x-queue-start\", default=\"\"\n ) or werkzeug_request.headers.get(\"x-request-start\", default=\"\")\n tracked_queue_time = track_request_queue_time(queue_time, tracked_request)\n if not tracked_queue_time:\n amazon_queue_time = werkzeug_request.headers.get(\"x-amzn-trace-id\", default=\"\")\n track_amazon_request_queue_time(amazon_queue_time, tracked_request)\n", "path": "src/scout_apm/core/web_requests.py"}]}
| 2,637 | 210 |
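A minimal, self-contained sketch of the change captured in the golden diff above: every query-string key and value is forced to UTF-8 bytes before `urlencode()`, so Python 2 no longer raises `UnicodeEncodeError` on non-ASCII parameters such as `u'Clos du Chêne, Montévrain, France'`. The abridged `FILTER_PARAMETERS` set and the standalone imports are assumptions for illustration; the real function lives in `src/scout_apm/core/web_requests.py`.

```python
# Sketch of the fixed create_filtered_path(); FILTER_PARAMETERS is an abridged stand-in.
try:
    from urllib import urlencode  # Python 2
except ImportError:
    from urllib.parse import urlencode  # Python 3

FILTER_PARAMETERS = frozenset(["password", "secret", "token"])  # stand-in subset


def create_filtered_path(path, query_params):
    # Force all keys and values to bytes for Python 2 unicode compatibility,
    # still masking sensitive parameter values.
    filtered_params = sorted(
        (
            key.encode("utf-8"),
            b"[FILTERED]" if key.lower() in FILTER_PARAMETERS else value.encode("utf-8"),
        )
        for key, value in query_params
    )
    if not filtered_params:
        return path
    return path + "?" + urlencode(filtered_params)


# A non-ASCII search term no longer crashes under Python 2:
print(create_filtered_path(u"/places", [(u"search", u"Clos du Chêne, Montévrain")]))
```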
gh_patches_debug_25217
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-2023
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Linter does not catch MalformedPolicyDocument syntax error
*cfn-lint version: 0.25.0*
*Description of issue.*
`cfn-lint` fails to catch the following syntax error pre-deployment, which is caught by CloudFormation during deployment:
```
Syntax errors in policy. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: ccd05e19-aa60-4b26-bfcc-8fbafca40c61; Proxy: null)
```
The policy in question has the following template:
```yaml
APILambdaExecutionPolicy:
Type: AWS::IAM::ManagedPolicy
Properties:
Description: Allows API Gateway triggered Lambdas access to service resources
Path: /
ManagedPolicyName:
Fn::Sub: ${Project}-APILambdaExecutionPolicy-${AWS::Region}
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- dynamodb:*Get*
- dynamodb:*Item*
- dynamodb:Describe*
- dynamodb:Query
Resource:
- Fn::GetAtt:
- DynamoDB
- Outputs.TaskDBTableARN
- Fn::Sub: ${DynamoDB.Outputs.TaskDBTableARN}/*
- Effect: Allow
Action:
- cloudwatch:PutMetricAlarm
- cloudwatch:PutMetricData
Resource:
- '*'
- Effect: Allow
Action:
- xray:PutTraceSegments
- xray:PutTelemetryRecords
Resource:
- '*'
- Effect: Allow
Action:
- sqs:GetQueueUrl
- sqs:SendMessage
Resource:
- Fn::GetAtt:
- SQS
- Outputs.QueueARN
- Effect: Allow
Action:
- sns:Publish
Resource:
- arn:aws:sns:*:*:*
```
</issue>
<code>
[start of src/cfnlint/rules/resources/iam/Policy.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import json
6 from datetime import date
7 import six
8 from cfnlint.helpers import convert_dict
9 from cfnlint.rules import CloudFormationLintRule
10 from cfnlint.rules import RuleMatch
11
12
13 class Policy(CloudFormationLintRule):
14 """Check if IAM Policy JSON is correct"""
15 id = 'E2507'
16 shortdesc = 'Check if IAM Policies are properly configured'
17 description = 'See if there elements inside an IAM policy are correct'
18 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html'
19 tags = ['properties', 'iam']
20
21 def __init__(self):
22 """Init"""
23 super(Policy, self).__init__()
24 self.resource_exceptions = {
25 'AWS::ECR::Repository': 'RepositoryPolicyText',
26 }
27 self.resources_and_keys = {
28 'AWS::ECR::Repository': 'RepositoryPolicyText',
29 'AWS::Elasticsearch::Domain': 'AccessPolicies',
30 'AWS::KMS::Key': 'KeyPolicy',
31 'AWS::S3::BucketPolicy': 'PolicyDocument',
32 'AWS::SNS::TopicPolicy': 'PolicyDocument',
33 'AWS::SQS::QueuePolicy': 'PolicyDocument',
34 }
35 self.idp_and_keys = {
36 'AWS::IAM::Group': 'Policies',
37 'AWS::IAM::ManagedPolicy': 'PolicyDocument',
38 'AWS::IAM::Policy': 'PolicyDocument',
39 'AWS::IAM::Role': 'Policies',
40 'AWS::IAM::User': 'Policies',
41 'AWS::SSO::PermissionSet': 'InlinePolicy',
42 }
43 for resource_type in self.resources_and_keys:
44 self.resource_property_types.append(resource_type)
45 for resource_type in self.idp_and_keys:
46 self.resource_property_types.append(resource_type)
47
48 def check_policy_document(self, value, path, is_identity_policy, resource_exceptions, start_mark, end_mark):
49 """Check policy document"""
50 matches = []
51
52 valid_keys = [
53 'Version',
54 'Id',
55 'Statement',
56 ]
57 valid_versions = ['2012-10-17', '2008-10-17', date(2012, 10, 17), date(2008, 10, 17)]
58
59 if isinstance(value, six.string_types):
60 try:
61 value = convert_dict(json.loads(value), start_mark, end_mark)
62 except Exception as ex: # pylint: disable=W0703,W0612
63 message = 'IAM Policy Documents need to be JSON'
64 matches.append(RuleMatch(path[:], message))
65 return matches
66
67 if not isinstance(value, dict):
68 message = 'IAM Policy Documents needs to be JSON'
69 matches.append(
70 RuleMatch(path[:], message))
71 return matches
72
73 for p_vs, p_p in value.items_safe(path[:], (dict)):
74 for parent_key, parent_value in p_vs.items():
75 if parent_key not in valid_keys:
76 message = 'IAM Policy key %s doesn\'t exist.' % (parent_key)
77 matches.append(
78 RuleMatch(path[:] + p_p + [parent_key], message))
79 if parent_key == 'Version':
80 if parent_value not in valid_versions:
81 message = 'IAM Policy Version needs to be one of (%s).' % (
82 ', '.join(map(str, ['2012-10-17', '2008-10-17'])))
83 matches.append(
84 RuleMatch(p_p + [parent_key], message))
85 if parent_key == 'Statement':
86 if isinstance(parent_value, list):
87 for i_s_v, i_s_p in parent_value.items_safe(p_p + ['Statement'], (dict)):
88 matches.extend(
89 self._check_policy_statement(
90 i_s_p, i_s_v, is_identity_policy, resource_exceptions
91 )
92 )
93 elif isinstance(parent_value, dict):
94 for i_s_v, i_s_p in parent_value.items_safe(p_p + ['Statement']):
95 matches.extend(
96 self._check_policy_statement(
97 i_s_p, i_s_v, is_identity_policy, resource_exceptions
98 )
99 )
100 else:
101 message = 'IAM Policy statement should be of list.'
102 matches.append(
103 RuleMatch(p_p + [parent_key], message))
104 return matches
105
106 def _check_policy_statement(self, branch, statement, is_identity_policy, resource_exceptions):
107 """Check statements"""
108 matches = []
109 statement_valid_keys = [
110 'Action',
111 'Condition',
112 'Effect',
113 'NotAction',
114 'NotPrincipal',
115 'NotResource',
116 'Principal',
117 'Resource',
118 'Sid',
119 ]
120
121 for key, _ in statement.items():
122 if key not in statement_valid_keys:
123 message = 'IAM Policy statement key %s isn\'t valid' % (key)
124 matches.append(
125 RuleMatch(branch[:] + [key], message))
126 if 'Effect' not in statement:
127 message = 'IAM Policy statement missing Effect'
128 matches.append(
129 RuleMatch(branch[:], message))
130 else:
131 for effect, effect_path in statement.get_safe('Effect'):
132 if isinstance(effect, six.string_types):
133 if effect not in ['Allow', 'Deny']:
134 message = 'IAM Policy Effect should be Allow or Deny'
135 matches.append(
136 RuleMatch(branch[:] + effect_path, message))
137 if 'Action' not in statement and 'NotAction' not in statement:
138 message = 'IAM Policy statement missing Action or NotAction'
139 matches.append(
140 RuleMatch(branch[:], message))
141 if is_identity_policy:
142 if 'Principal' in statement or 'NotPrincipal' in statement:
143 message = 'IAM Resource Policy statement shouldn\'t have Principal or NotPrincipal'
144 matches.append(
145 RuleMatch(branch[:], message))
146 else:
147 if 'Principal' not in statement and 'NotPrincipal' not in statement:
148 message = 'IAM Resource Policy statement should have Principal or NotPrincipal'
149 matches.append(
150 RuleMatch(branch[:] + ['Principal'], message))
151 if not resource_exceptions:
152 if 'Resource' not in statement and 'NotResource' not in statement:
153 message = 'IAM Policy statement missing Resource or NotResource'
154 matches.append(
155 RuleMatch(branch[:], message))
156
157 return(matches)
158
159 def match_resource_properties(self, properties, resourcetype, path, cfn):
160 """Check CloudFormation Properties"""
161 matches = []
162
163 is_identity_policy = True
164 if resourcetype in self.resources_and_keys:
165 is_identity_policy = False
166
167 key = None
168 if resourcetype in self.resources_and_keys:
169 key = self.resources_and_keys.get(resourcetype)
170 else:
171 key = self.idp_and_keys.get(resourcetype)
172
173 if not key:
174 # Key isn't defined return nothing
175 return matches
176
177 resource_exceptions = False
178 if key == self.resource_exceptions.get(resourcetype):
179 resource_exceptions = True
180
181 other_keys = []
182 for key, value in self.resources_and_keys.items():
183 if value != 'Policies':
184 other_keys.append(key)
185 for key, value in self.idp_and_keys.items():
186 if value != 'Policies':
187 other_keys.append(key)
188
189 for key, value in properties.items():
190 if key == 'Policies' and isinstance(value, list):
191 for index, policy in enumerate(properties.get(key, [])):
192 matches.extend(
193 cfn.check_value(
194 obj=policy, key='PolicyDocument',
195 path=path[:] + ['Policies', index],
196 check_value=self.check_policy_document,
197 is_identity_policy=is_identity_policy,
198 resource_exceptions=resource_exceptions,
199 start_mark=key.start_mark, end_mark=key.end_mark,
200 ))
201 elif key in ['KeyPolicy', 'PolicyDocument', 'RepositoryPolicyText', 'AccessPolicies', 'InlinePolicy']:
202 matches.extend(
203 cfn.check_value(
204 obj=properties, key=key,
205 path=path[:],
206 check_value=self.check_policy_document,
207 is_identity_policy=is_identity_policy,
208 resource_exceptions=resource_exceptions,
209 start_mark=key.start_mark, end_mark=key.end_mark,
210 ))
211
212 return matches
213
[end of src/cfnlint/rules/resources/iam/Policy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/rules/resources/iam/Policy.py b/src/cfnlint/rules/resources/iam/Policy.py
--- a/src/cfnlint/rules/resources/iam/Policy.py
+++ b/src/cfnlint/rules/resources/iam/Policy.py
@@ -5,7 +5,7 @@
import json
from datetime import date
import six
-from cfnlint.helpers import convert_dict
+from cfnlint.helpers import convert_dict, FUNCTIONS_SINGLE
from cfnlint.rules import CloudFormationLintRule
from cfnlint.rules import RuleMatch
@@ -154,6 +154,23 @@
matches.append(
RuleMatch(branch[:], message))
+ resources = statement.get('Resource', [])
+ if isinstance(resources, six.string_types):
+ resources = [resources]
+
+ for index, resource in enumerate(resources):
+ if isinstance(resource, dict):
+ if len(resource) == 1:
+ for k in resource.keys():
+ if k not in FUNCTIONS_SINGLE:
+ message = 'IAM Policy statement Resource incorrectly formatted'
+ matches.append(
+ RuleMatch(branch[:] + ['Resource', index], message))
+ else:
+ message = 'IAM Policy statement Resource incorrectly formatted'
+ matches.append(
+ RuleMatch(branch[:] + ['Resource', index], message))
+
return(matches)
def match_resource_properties(self, properties, resourcetype, path, cfn):
|
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/iam/Policy.py b/src/cfnlint/rules/resources/iam/Policy.py\n--- a/src/cfnlint/rules/resources/iam/Policy.py\n+++ b/src/cfnlint/rules/resources/iam/Policy.py\n@@ -5,7 +5,7 @@\n import json\n from datetime import date\n import six\n-from cfnlint.helpers import convert_dict\n+from cfnlint.helpers import convert_dict, FUNCTIONS_SINGLE\n from cfnlint.rules import CloudFormationLintRule\n from cfnlint.rules import RuleMatch\n \n@@ -154,6 +154,23 @@\n matches.append(\n RuleMatch(branch[:], message))\n \n+ resources = statement.get('Resource', [])\n+ if isinstance(resources, six.string_types):\n+ resources = [resources]\n+\n+ for index, resource in enumerate(resources):\n+ if isinstance(resource, dict):\n+ if len(resource) == 1:\n+ for k in resource.keys():\n+ if k not in FUNCTIONS_SINGLE:\n+ message = 'IAM Policy statement Resource incorrectly formatted'\n+ matches.append(\n+ RuleMatch(branch[:] + ['Resource', index], message))\n+ else:\n+ message = 'IAM Policy statement Resource incorrectly formatted'\n+ matches.append(\n+ RuleMatch(branch[:] + ['Resource', index], message))\n+\n return(matches)\n \n def match_resource_properties(self, properties, resourcetype, path, cfn):\n", "issue": "Linter does not catch MalformedPolicyDocument syntax error\n*cfn-lint version: 0.25.0*\r\n\r\n*Description of issue.*\r\n`cfn-lint` fails to catch the following syntax error pre-deployment, which is caught by CloudFormation during deployment:\r\n```\r\nSyntax errors in policy. (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: ccd05e19-aa60-4b26-bfcc-8fbafca40c61; Proxy: null)\r\n```\r\n\r\nThe policy in question has the following template:\r\n```yaml\r\nAPILambdaExecutionPolicy:\r\n Type: AWS::IAM::ManagedPolicy\r\n Properties:\r\n Description: Allows API Gateway triggered Lambdas access to service resources\r\n Path: /\r\n ManagedPolicyName:\r\n Fn::Sub: ${Project}-APILambdaExecutionPolicy-${AWS::Region}\r\n PolicyDocument:\r\n Version: '2012-10-17'\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - dynamodb:*Get*\r\n - dynamodb:*Item*\r\n - dynamodb:Describe*\r\n - dynamodb:Query\r\n Resource:\r\n - Fn::GetAtt:\r\n - DynamoDB\r\n - Outputs.TaskDBTableARN\r\n - Fn::Sub: ${DynamoDB.Outputs.TaskDBTableARN}/*\r\n - Effect: Allow\r\n Action:\r\n - cloudwatch:PutMetricAlarm\r\n - cloudwatch:PutMetricData\r\n Resource:\r\n - '*'\r\n - Effect: Allow\r\n Action:\r\n - xray:PutTraceSegments\r\n - xray:PutTelemetryRecords\r\n Resource:\r\n - '*'\r\n - Effect: Allow\r\n Action:\r\n - sqs:GetQueueUrl\r\n - sqs:SendMessage\r\n Resource:\r\n - Fn::GetAtt:\r\n - SQS\r\n - Outputs.QueueARN\r\n - Effect: Allow\r\n Action:\r\n - sns:Publish\r\n Resource:\r\n - arn:aws:sns:*:*:* \r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport json\nfrom datetime import date\nimport six\nfrom cfnlint.helpers import convert_dict\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass Policy(CloudFormationLintRule):\n \"\"\"Check if IAM Policy JSON is correct\"\"\"\n id = 'E2507'\n shortdesc = 'Check if IAM Policies are properly configured'\n description = 'See if there elements inside an IAM policy are correct'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-iam-policy.html'\n tags = ['properties', 'iam']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(Policy, self).__init__()\n self.resource_exceptions = {\n 'AWS::ECR::Repository': 'RepositoryPolicyText',\n }\n self.resources_and_keys = {\n 'AWS::ECR::Repository': 'RepositoryPolicyText',\n 'AWS::Elasticsearch::Domain': 'AccessPolicies',\n 'AWS::KMS::Key': 'KeyPolicy',\n 'AWS::S3::BucketPolicy': 'PolicyDocument',\n 'AWS::SNS::TopicPolicy': 'PolicyDocument',\n 'AWS::SQS::QueuePolicy': 'PolicyDocument',\n }\n self.idp_and_keys = {\n 'AWS::IAM::Group': 'Policies',\n 'AWS::IAM::ManagedPolicy': 'PolicyDocument',\n 'AWS::IAM::Policy': 'PolicyDocument',\n 'AWS::IAM::Role': 'Policies',\n 'AWS::IAM::User': 'Policies',\n 'AWS::SSO::PermissionSet': 'InlinePolicy',\n }\n for resource_type in self.resources_and_keys:\n self.resource_property_types.append(resource_type)\n for resource_type in self.idp_and_keys:\n self.resource_property_types.append(resource_type)\n\n def check_policy_document(self, value, path, is_identity_policy, resource_exceptions, start_mark, end_mark):\n \"\"\"Check policy document\"\"\"\n matches = []\n\n valid_keys = [\n 'Version',\n 'Id',\n 'Statement',\n ]\n valid_versions = ['2012-10-17', '2008-10-17', date(2012, 10, 17), date(2008, 10, 17)]\n\n if isinstance(value, six.string_types):\n try:\n value = convert_dict(json.loads(value), start_mark, end_mark)\n except Exception as ex: # pylint: disable=W0703,W0612\n message = 'IAM Policy Documents need to be JSON'\n matches.append(RuleMatch(path[:], message))\n return matches\n\n if not isinstance(value, dict):\n message = 'IAM Policy Documents needs to be JSON'\n matches.append(\n RuleMatch(path[:], message))\n return matches\n\n for p_vs, p_p in value.items_safe(path[:], (dict)):\n for parent_key, parent_value in p_vs.items():\n if parent_key not in valid_keys:\n message = 'IAM Policy key %s doesn\\'t exist.' % (parent_key)\n matches.append(\n RuleMatch(path[:] + p_p + [parent_key], message))\n if parent_key == 'Version':\n if parent_value not in valid_versions:\n message = 'IAM Policy Version needs to be one of (%s).' 
% (\n ', '.join(map(str, ['2012-10-17', '2008-10-17'])))\n matches.append(\n RuleMatch(p_p + [parent_key], message))\n if parent_key == 'Statement':\n if isinstance(parent_value, list):\n for i_s_v, i_s_p in parent_value.items_safe(p_p + ['Statement'], (dict)):\n matches.extend(\n self._check_policy_statement(\n i_s_p, i_s_v, is_identity_policy, resource_exceptions\n )\n )\n elif isinstance(parent_value, dict):\n for i_s_v, i_s_p in parent_value.items_safe(p_p + ['Statement']):\n matches.extend(\n self._check_policy_statement(\n i_s_p, i_s_v, is_identity_policy, resource_exceptions\n )\n )\n else:\n message = 'IAM Policy statement should be of list.'\n matches.append(\n RuleMatch(p_p + [parent_key], message))\n return matches\n\n def _check_policy_statement(self, branch, statement, is_identity_policy, resource_exceptions):\n \"\"\"Check statements\"\"\"\n matches = []\n statement_valid_keys = [\n 'Action',\n 'Condition',\n 'Effect',\n 'NotAction',\n 'NotPrincipal',\n 'NotResource',\n 'Principal',\n 'Resource',\n 'Sid',\n ]\n\n for key, _ in statement.items():\n if key not in statement_valid_keys:\n message = 'IAM Policy statement key %s isn\\'t valid' % (key)\n matches.append(\n RuleMatch(branch[:] + [key], message))\n if 'Effect' not in statement:\n message = 'IAM Policy statement missing Effect'\n matches.append(\n RuleMatch(branch[:], message))\n else:\n for effect, effect_path in statement.get_safe('Effect'):\n if isinstance(effect, six.string_types):\n if effect not in ['Allow', 'Deny']:\n message = 'IAM Policy Effect should be Allow or Deny'\n matches.append(\n RuleMatch(branch[:] + effect_path, message))\n if 'Action' not in statement and 'NotAction' not in statement:\n message = 'IAM Policy statement missing Action or NotAction'\n matches.append(\n RuleMatch(branch[:], message))\n if is_identity_policy:\n if 'Principal' in statement or 'NotPrincipal' in statement:\n message = 'IAM Resource Policy statement shouldn\\'t have Principal or NotPrincipal'\n matches.append(\n RuleMatch(branch[:], message))\n else:\n if 'Principal' not in statement and 'NotPrincipal' not in statement:\n message = 'IAM Resource Policy statement should have Principal or NotPrincipal'\n matches.append(\n RuleMatch(branch[:] + ['Principal'], message))\n if not resource_exceptions:\n if 'Resource' not in statement and 'NotResource' not in statement:\n message = 'IAM Policy statement missing Resource or NotResource'\n matches.append(\n RuleMatch(branch[:], message))\n\n return(matches)\n\n def match_resource_properties(self, properties, resourcetype, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n is_identity_policy = True\n if resourcetype in self.resources_and_keys:\n is_identity_policy = False\n\n key = None\n if resourcetype in self.resources_and_keys:\n key = self.resources_and_keys.get(resourcetype)\n else:\n key = self.idp_and_keys.get(resourcetype)\n\n if not key:\n # Key isn't defined return nothing\n return matches\n\n resource_exceptions = False\n if key == self.resource_exceptions.get(resourcetype):\n resource_exceptions = True\n\n other_keys = []\n for key, value in self.resources_and_keys.items():\n if value != 'Policies':\n other_keys.append(key)\n for key, value in self.idp_and_keys.items():\n if value != 'Policies':\n other_keys.append(key)\n\n for key, value in properties.items():\n if key == 'Policies' and isinstance(value, list):\n for index, policy in enumerate(properties.get(key, [])):\n matches.extend(\n cfn.check_value(\n obj=policy, key='PolicyDocument',\n 
path=path[:] + ['Policies', index],\n check_value=self.check_policy_document,\n is_identity_policy=is_identity_policy,\n resource_exceptions=resource_exceptions,\n start_mark=key.start_mark, end_mark=key.end_mark,\n ))\n elif key in ['KeyPolicy', 'PolicyDocument', 'RepositoryPolicyText', 'AccessPolicies', 'InlinePolicy']:\n matches.extend(\n cfn.check_value(\n obj=properties, key=key,\n path=path[:],\n check_value=self.check_policy_document,\n is_identity_policy=is_identity_policy,\n resource_exceptions=resource_exceptions,\n start_mark=key.start_mark, end_mark=key.end_mark,\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/iam/Policy.py"}]}
| 3,321 | 311 |
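Roughly, the rule change in the golden diff above makes E2507 reject `Resource` entries that are neither plain strings nor single-key intrinsic-function objects — the shape a mis-indented template such as the one in the issue can collapse into. The sketch below restates that check outside cfn-lint; `ALLOWED_FUNCTIONS` is a made-up stand-in for `cfnlint.helpers.FUNCTIONS_SINGLE`, and plain strings stand in for the `RuleMatch` objects the real rule emits.

```python
# Self-contained approximation of the Resource check added to rule E2507.
# ALLOWED_FUNCTIONS is a stand-in; the real rule uses cfnlint.helpers.FUNCTIONS_SINGLE.
ALLOWED_FUNCTIONS = frozenset(["Ref", "Fn::Sub", "Fn::GetAtt", "Fn::ImportValue"])


def check_statement_resources(statement):
    """Flag Resource entries that are neither strings nor single-key intrinsic dicts."""
    findings = []
    resources = statement.get("Resource", [])
    if isinstance(resources, str):
        resources = [resources]
    for index, resource in enumerate(resources):
        if isinstance(resource, dict):
            if len(resource) != 1 or any(key not in ALLOWED_FUNCTIONS for key in resource):
                findings.append(
                    "IAM Policy statement Resource incorrectly formatted at index %d" % index
                )
    return findings


# Two intrinsics merged into one mapping is one way a template can go wrong:
statement = {
    "Effect": "Allow",
    "Action": ["dynamodb:Query"],
    "Resource": [
        "arn:aws:dynamodb:*:*:table/Tasks",
        {
            "Fn::GetAtt": ["DynamoDB", "Outputs.TaskDBTableARN"],
            "Fn::Sub": "${DynamoDB.Outputs.TaskDBTableARN}/*",
        },
    ],
}
print(check_statement_resources(statement))  # flags the malformed entry at index 1
```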
gh_patches_debug_29440
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1493
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
/api/v1/auth broke with django-oauth-toolkit 0.9.0
https://github.com/evonove/django-oauth-toolkit/compare/0.8.1...0.9.0
</issue>
<code>
[start of apps/sso/views.py]
1 # -*- encoding: utf-8 -*-
2
3 import logging
4
5 from braces.views import CsrfExemptMixin, LoginRequiredMixin
6 from django.contrib.auth.decorators import login_required
7 from django.http import HttpResponse
8 from django.shortcuts import render
9 from django.utils import timezone
10 from django.utils.decorators import method_decorator
11 from django.views.decorators.debug import sensitive_post_parameters
12 from django.views.generic import FormView, View
13 from oauth2_provider.backends import OAuth2Backend
14 from oauth2_provider.exceptions import OAuthToolkitError
15 from oauth2_provider.forms import AllowForm
16 from oauth2_provider.http import HttpResponseUriRedirect
17 from oauth2_provider.models import get_application_model
18 from oauth2_provider.oauth2_validators import OAuth2Validator
19 from oauth2_provider.settings import oauth2_settings
20 from oauth2_provider.views.mixins import OAuthLibMixin
21 from oauthlib.oauth2 import Server
22
23 from apps.sso.models import Client
24
25 _log = logging.getLogger('SSO')
26
27
28 @login_required
29 def index(request):
30 """
31 This is the main SSO view
32 """
33
34 context = {}
35
36 return render(request, 'sso/index.html', context)
37
38
39 class BaseAuthorizationView(LoginRequiredMixin, OAuthLibMixin, View):
40 """
41 Implements a generic endpoint to handle *Authorization Requests* as in :rfc:`4.1.1`. The view
42 does not implement any strategy to determine *authorize/do not authorize* logic.
43 The endpoint is used in the following flows:
44 * Authorization code
45 * Implicit grant
46 """
47
48 def dispatch(self, request, *args, **kwargs):
49 self.oauth2_data = {}
50 return super(BaseAuthorizationView, self).dispatch(request, *args, **kwargs)
51
52 def error_response(self, error, **kwargs):
53 """
54 Handle errors either by redirecting to redirect_uri with a json in the body containing
55 error details or providing an error response
56 """
57 redirect, error_response = super(BaseAuthorizationView, self).error_response(error, **kwargs)
58
59 if redirect:
60 return HttpResponseUriRedirect(error_response['url'])
61
62 status = error_response['error'].status_code
63 return self.render_to_response(error_response, status=status)
64
65
66 class AuthorizationView(BaseAuthorizationView, FormView):
67 """
68 Implements and endpoint to handle *Authorization Requests* as in :rfc:`4.1.1` and prompting the
69 user with a form to determine if she authorizes the client application to access her data.
70 This endpoint is reached two times during the authorization process:
71 * first receive a ``GET`` request from user asking authorization for a certain client
72 application, a form is served possibly showing some useful info and prompting for
73 *authorize/do not authorize*.
74 * then receive a ``POST`` request possibly after user authorized the access
75 Some informations contained in the ``GET`` request and needed to create a Grant token during
76 the ``POST`` request would be lost between the two steps above, so they are temporary stored in
77 hidden fields on the form.
78 A possible alternative could be keeping such informations in the session.
79 The endpoint is used in the followin flows:
80 * Authorization code
81 * Implicit grant
82 """
83 template_name = 'sso/authorize.html'
84 form_class = AllowForm
85
86 server_class = Server
87 validator_class = OAuth2Validator
88 oauthlib_backend_class = OAuth2Backend
89
90 skip_authorization_completely = False
91
92 def get_initial(self):
93 scopes = self.oauth2_data.get('scope', self.oauth2_data.get('scopes', []))
94 initial_data = {
95 'redirect_uri': self.oauth2_data.get('redirect_uri', None),
96 'scope': ' '.join(scopes),
97 'client_id': self.oauth2_data.get('client_id', None),
98 'state': self.oauth2_data.get('state', None),
99 'response_type': self.oauth2_data.get('response_type', None),
100 }
101 return initial_data
102
103 def form_valid(self, form):
104 try:
105 credentials = {
106 'client_id': form.cleaned_data.get('client_id'),
107 'redirect_uri': form.cleaned_data.get('redirect_uri'),
108 'response_type': form.cleaned_data.get('response_type', None),
109 'state': form.cleaned_data.get('state', None),
110 }
111
112 scopes = Client.objects.get(client_id=credentials['client_id']).scopes
113 if not scopes:
114 scopes = 'null'
115 allow = form.cleaned_data.get('allow')
116 uri, headers, body, status = self.create_authorization_response(
117 request=self.request, scopes=scopes, credentials=credentials, allow=allow)
118 self.success_url = uri
119 _log.debug("Success url for the request: {0}".format(self.success_url))
120 return HttpResponseUriRedirect(self.success_url)
121
122 except OAuthToolkitError as error:
123 return self.error_response(error)
124
125 def get(self, request, *args, **kwargs):
126 try:
127 scopes, credentials = self.validate_authorization_request(request)
128 scopes = Client.objects.get(client_id=credentials['client_id']).get_scopes()
129 if not scopes:
130 scopes = ['null']
131 kwargs['scopes_descriptions'] = [oauth2_settings.SCOPES[scope] for scope in scopes]
132 kwargs['scopes'] = scopes
133 # at this point we know an Application instance with such client_id exists in the database
134 application = get_application_model().objects.get(client_id=credentials['client_id']) # TODO: cache it!
135 kwargs['application'] = application
136 kwargs.update(credentials)
137 self.oauth2_data = kwargs
138 # following two loc are here only because of https://code.djangoproject.com/ticket/17795
139 form = self.get_form(self.get_form_class())
140 kwargs['form'] = form
141
142 # Check to see if the user has already granted access and return
143 # a successful response depending on 'approval_prompt' url parameter
144 require_approval = request.GET.get('approval_prompt', oauth2_settings.REQUEST_APPROVAL_PROMPT)
145
146 # If skip_authorization field is True, skip the authorization screen even
147 # if this is the first use of the application and there was no previous authorization.
148 # This is useful for in-house applications-> assume an in-house applications
149 # are already approved.
150 if application.skip_authorization:
151 uri, headers, body, status = self.create_authorization_response(
152 request=self.request, scopes=" ".join(scopes),
153 credentials=credentials, allow=True)
154 return HttpResponseUriRedirect(uri)
155
156 elif require_approval == 'auto':
157 tokens = request.user.accesstoken_set.filter(application=kwargs['application'],
158 expires__gt=timezone.now()).all()
159 # check past authorizations regarded the same scopes as the current one
160 for token in tokens:
161 if token.allow_scopes(scopes):
162 uri, headers, body, status = self.create_authorization_response(
163 request=self.request, scopes=" ".join(scopes),
164 credentials=credentials, allow=True)
165 return HttpResponseUriRedirect(uri)
166
167 return self.render_to_response(self.get_context_data(**kwargs))
168
169 except OAuthToolkitError as error:
170 return self.error_response(error)
171
172
173 class TokenView(CsrfExemptMixin, OAuthLibMixin, View):
174 """
175 Implements an endpoint to provide access tokens
176 The endpoint is used in the following flows:
177 * Authorization code
178 * Password
179 * Client credentials
180 """
181 server_class = Server
182 validator_class = OAuth2Validator
183 oauthlib_backend_class = OAuth2Backend
184
185 @method_decorator(sensitive_post_parameters('password'))
186 def post(self, request, *args, **kwargs):
187 url, headers, body, status = self.create_token_response(request)
188 response = HttpResponse(content=body, status=status)
189
190 for k, v in headers.items():
191 response[k] = v
192 return response
193
194
195 class RevokeTokenView(CsrfExemptMixin, OAuthLibMixin, View):
196 """
197 Implements an endpoint to revoke access or refresh tokens
198 """
199 server_class = Server
200 validator_class = OAuth2Validator
201 oauthlib_backend_class = OAuth2Backend
202
203 def post(self, request, *args, **kwargs):
204 url, headers, body, status = self.create_revocation_response(request)
205 response = HttpResponse(content=body or '', status=status)
206
207 for k, v in headers.items():
208 response[k] = v
209 return response
210
[end of apps/sso/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/sso/views.py b/apps/sso/views.py
--- a/apps/sso/views.py
+++ b/apps/sso/views.py
@@ -10,7 +10,6 @@
from django.utils.decorators import method_decorator
from django.views.decorators.debug import sensitive_post_parameters
from django.views.generic import FormView, View
-from oauth2_provider.backends import OAuth2Backend
from oauth2_provider.exceptions import OAuthToolkitError
from oauth2_provider.forms import AllowForm
from oauth2_provider.http import HttpResponseUriRedirect
@@ -85,7 +84,6 @@
server_class = Server
validator_class = OAuth2Validator
- oauthlib_backend_class = OAuth2Backend
skip_authorization_completely = False
@@ -180,7 +178,6 @@
"""
server_class = Server
validator_class = OAuth2Validator
- oauthlib_backend_class = OAuth2Backend
@method_decorator(sensitive_post_parameters('password'))
def post(self, request, *args, **kwargs):
@@ -198,7 +195,6 @@
"""
server_class = Server
validator_class = OAuth2Validator
- oauthlib_backend_class = OAuth2Backend
def post(self, request, *args, **kwargs):
url, headers, body, status = self.create_revocation_response(request)
|
{"golden_diff": "diff --git a/apps/sso/views.py b/apps/sso/views.py\n--- a/apps/sso/views.py\n+++ b/apps/sso/views.py\n@@ -10,7 +10,6 @@\n from django.utils.decorators import method_decorator\n from django.views.decorators.debug import sensitive_post_parameters\n from django.views.generic import FormView, View\n-from oauth2_provider.backends import OAuth2Backend\n from oauth2_provider.exceptions import OAuthToolkitError\n from oauth2_provider.forms import AllowForm\n from oauth2_provider.http import HttpResponseUriRedirect\n@@ -85,7 +84,6 @@\n \n server_class = Server\n validator_class = OAuth2Validator\n- oauthlib_backend_class = OAuth2Backend\n \n skip_authorization_completely = False\n \n@@ -180,7 +178,6 @@\n \"\"\"\n server_class = Server\n validator_class = OAuth2Validator\n- oauthlib_backend_class = OAuth2Backend\n \n @method_decorator(sensitive_post_parameters('password'))\n def post(self, request, *args, **kwargs):\n@@ -198,7 +195,6 @@\n \"\"\"\n server_class = Server\n validator_class = OAuth2Validator\n- oauthlib_backend_class = OAuth2Backend\n \n def post(self, request, *args, **kwargs):\n url, headers, body, status = self.create_revocation_response(request)\n", "issue": "/api/v1/auth broke with django-oauth-toolkit 0.9.0\nhttps://github.com/evonove/django-oauth-toolkit/compare/0.8.1...0.9.0\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport logging\n\nfrom braces.views import CsrfExemptMixin, LoginRequiredMixin\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponse\nfrom django.shortcuts import render\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.generic import FormView, View\nfrom oauth2_provider.backends import OAuth2Backend\nfrom oauth2_provider.exceptions import OAuthToolkitError\nfrom oauth2_provider.forms import AllowForm\nfrom oauth2_provider.http import HttpResponseUriRedirect\nfrom oauth2_provider.models import get_application_model\nfrom oauth2_provider.oauth2_validators import OAuth2Validator\nfrom oauth2_provider.settings import oauth2_settings\nfrom oauth2_provider.views.mixins import OAuthLibMixin\nfrom oauthlib.oauth2 import Server\n\nfrom apps.sso.models import Client\n\n_log = logging.getLogger('SSO')\n\n\n@login_required\ndef index(request):\n \"\"\"\n This is the main SSO view\n \"\"\"\n\n context = {}\n\n return render(request, 'sso/index.html', context)\n\n\nclass BaseAuthorizationView(LoginRequiredMixin, OAuthLibMixin, View):\n \"\"\"\n Implements a generic endpoint to handle *Authorization Requests* as in :rfc:`4.1.1`. 
The view\n does not implement any strategy to determine *authorize/do not authorize* logic.\n The endpoint is used in the following flows:\n * Authorization code\n * Implicit grant\n \"\"\"\n\n def dispatch(self, request, *args, **kwargs):\n self.oauth2_data = {}\n return super(BaseAuthorizationView, self).dispatch(request, *args, **kwargs)\n\n def error_response(self, error, **kwargs):\n \"\"\"\n Handle errors either by redirecting to redirect_uri with a json in the body containing\n error details or providing an error response\n \"\"\"\n redirect, error_response = super(BaseAuthorizationView, self).error_response(error, **kwargs)\n\n if redirect:\n return HttpResponseUriRedirect(error_response['url'])\n\n status = error_response['error'].status_code\n return self.render_to_response(error_response, status=status)\n\n\nclass AuthorizationView(BaseAuthorizationView, FormView):\n \"\"\"\n Implements and endpoint to handle *Authorization Requests* as in :rfc:`4.1.1` and prompting the\n user with a form to determine if she authorizes the client application to access her data.\n This endpoint is reached two times during the authorization process:\n * first receive a ``GET`` request from user asking authorization for a certain client\n application, a form is served possibly showing some useful info and prompting for\n *authorize/do not authorize*.\n * then receive a ``POST`` request possibly after user authorized the access\n Some informations contained in the ``GET`` request and needed to create a Grant token during\n the ``POST`` request would be lost between the two steps above, so they are temporary stored in\n hidden fields on the form.\n A possible alternative could be keeping such informations in the session.\n The endpoint is used in the followin flows:\n * Authorization code\n * Implicit grant\n \"\"\"\n template_name = 'sso/authorize.html'\n form_class = AllowForm\n\n server_class = Server\n validator_class = OAuth2Validator\n oauthlib_backend_class = OAuth2Backend\n\n skip_authorization_completely = False\n\n def get_initial(self):\n scopes = self.oauth2_data.get('scope', self.oauth2_data.get('scopes', []))\n initial_data = {\n 'redirect_uri': self.oauth2_data.get('redirect_uri', None),\n 'scope': ' '.join(scopes),\n 'client_id': self.oauth2_data.get('client_id', None),\n 'state': self.oauth2_data.get('state', None),\n 'response_type': self.oauth2_data.get('response_type', None),\n }\n return initial_data\n\n def form_valid(self, form):\n try:\n credentials = {\n 'client_id': form.cleaned_data.get('client_id'),\n 'redirect_uri': form.cleaned_data.get('redirect_uri'),\n 'response_type': form.cleaned_data.get('response_type', None),\n 'state': form.cleaned_data.get('state', None),\n }\n\n scopes = Client.objects.get(client_id=credentials['client_id']).scopes\n if not scopes:\n scopes = 'null'\n allow = form.cleaned_data.get('allow')\n uri, headers, body, status = self.create_authorization_response(\n request=self.request, scopes=scopes, credentials=credentials, allow=allow)\n self.success_url = uri\n _log.debug(\"Success url for the request: {0}\".format(self.success_url))\n return HttpResponseUriRedirect(self.success_url)\n\n except OAuthToolkitError as error:\n return self.error_response(error)\n\n def get(self, request, *args, **kwargs):\n try:\n scopes, credentials = self.validate_authorization_request(request)\n scopes = Client.objects.get(client_id=credentials['client_id']).get_scopes()\n if not scopes:\n scopes = ['null']\n kwargs['scopes_descriptions'] = 
[oauth2_settings.SCOPES[scope] for scope in scopes]\n kwargs['scopes'] = scopes\n # at this point we know an Application instance with such client_id exists in the database\n application = get_application_model().objects.get(client_id=credentials['client_id']) # TODO: cache it!\n kwargs['application'] = application\n kwargs.update(credentials)\n self.oauth2_data = kwargs\n # following two loc are here only because of https://code.djangoproject.com/ticket/17795\n form = self.get_form(self.get_form_class())\n kwargs['form'] = form\n\n # Check to see if the user has already granted access and return\n # a successful response depending on 'approval_prompt' url parameter\n require_approval = request.GET.get('approval_prompt', oauth2_settings.REQUEST_APPROVAL_PROMPT)\n\n # If skip_authorization field is True, skip the authorization screen even\n # if this is the first use of the application and there was no previous authorization.\n # This is useful for in-house applications-> assume an in-house applications\n # are already approved.\n if application.skip_authorization:\n uri, headers, body, status = self.create_authorization_response(\n request=self.request, scopes=\" \".join(scopes),\n credentials=credentials, allow=True)\n return HttpResponseUriRedirect(uri)\n\n elif require_approval == 'auto':\n tokens = request.user.accesstoken_set.filter(application=kwargs['application'],\n expires__gt=timezone.now()).all()\n # check past authorizations regarded the same scopes as the current one\n for token in tokens:\n if token.allow_scopes(scopes):\n uri, headers, body, status = self.create_authorization_response(\n request=self.request, scopes=\" \".join(scopes),\n credentials=credentials, allow=True)\n return HttpResponseUriRedirect(uri)\n\n return self.render_to_response(self.get_context_data(**kwargs))\n\n except OAuthToolkitError as error:\n return self.error_response(error)\n\n\nclass TokenView(CsrfExemptMixin, OAuthLibMixin, View):\n \"\"\"\n Implements an endpoint to provide access tokens\n The endpoint is used in the following flows:\n * Authorization code\n * Password\n * Client credentials\n \"\"\"\n server_class = Server\n validator_class = OAuth2Validator\n oauthlib_backend_class = OAuth2Backend\n\n @method_decorator(sensitive_post_parameters('password'))\n def post(self, request, *args, **kwargs):\n url, headers, body, status = self.create_token_response(request)\n response = HttpResponse(content=body, status=status)\n\n for k, v in headers.items():\n response[k] = v\n return response\n\n\nclass RevokeTokenView(CsrfExemptMixin, OAuthLibMixin, View):\n \"\"\"\n Implements an endpoint to revoke access or refresh tokens\n \"\"\"\n server_class = Server\n validator_class = OAuth2Validator\n oauthlib_backend_class = OAuth2Backend\n\n def post(self, request, *args, **kwargs):\n url, headers, body, status = self.create_revocation_response(request)\n response = HttpResponse(content=body or '', status=status)\n\n for k, v in headers.items():\n response[k] = v\n return response\n", "path": "apps/sso/views.py"}]}
| 2,892 | 293 |
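The golden diff above works because django-oauth-toolkit 0.9.0 no longer provides the `oauth2_provider.backends.OAuth2Backend` import the old code relied on; dropping the explicit `oauthlib_backend_class` lets `OAuthLibMixin` fall back to the backend the library configures itself (in later releases this is the `OAUTH2_BACKEND_CLASS` setting). A trimmed sketch of the resulting token endpoint, assuming the rest of `apps/sso/views.py` stays as shown in the record:

```python
# Trimmed sketch of TokenView after the fix: no oauthlib_backend_class attribute,
# so OAuthLibMixin supplies its own default backend instead of importing the
# OAuth2Backend class that 0.9.0 no longer exposes.
from braces.views import CsrfExemptMixin
from django.http import HttpResponse
from django.utils.decorators import method_decorator
from django.views.decorators.debug import sensitive_post_parameters
from django.views.generic import View
from oauth2_provider.oauth2_validators import OAuth2Validator
from oauth2_provider.views.mixins import OAuthLibMixin
from oauthlib.oauth2 import Server


class TokenView(CsrfExemptMixin, OAuthLibMixin, View):
    server_class = Server
    validator_class = OAuth2Validator

    @method_decorator(sensitive_post_parameters('password'))
    def post(self, request, *args, **kwargs):
        url, headers, body, status = self.create_token_response(request)
        response = HttpResponse(content=body, status=status)
        for key, value in headers.items():
            response[key] = value
        return response
```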
gh_patches_debug_26401
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor Profile to be used with locale: how
I got this idea but i'm not sure it would be the simplest: the actual profile.py becomes something like "internal_profile.py", its methods are renamed "internal_simple_profile()" and "internal_profile()", and is removed from the list of standard providers. Then we will have a standard profile.py that simply calls self.generator.internal_profile(). For each locale instead, we will be able to add more logic, for example to customize field names and eventually values.
Do you think there would be a simpler way to do it?
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # coding=utf-8
3
4 import os
5 import io
6
7 from setuptools import setup, find_packages
8
9 here = os.path.abspath(os.path.dirname(__file__))
10 README = io.open(os.path.join(here, 'README.rst'), encoding="utf8").read()
11
12
13 version = '0.7.18'
14
15 # this module can be zip-safe if the zipimporter implements iter_modules or if
16 # pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.
17 try:
18 import pkgutil
19 import zipimport
20 zip_safe = hasattr(zipimport.zipimporter, "iter_modules") or \
21 zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()
22 except (ImportError, AttributeError):
23 zip_safe = False
24
25 setup(
26 name='Faker',
27 version=version,
28 description="Faker is a Python package that generates fake data for you.",
29 long_description=README,
30 entry_points={
31 'console_scripts': ['faker=faker.cli:execute_from_command_line'],
32 },
33 classifiers=[
34 # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
35 'Development Status :: 3 - Alpha',
36 'Environment :: Console',
37 'Intended Audience :: Developers',
38 'Programming Language :: Python',
39 'Programming Language :: Python :: 2',
40 'Programming Language :: Python :: 2.7',
41 'Programming Language :: Python :: 3',
42 'Programming Language :: Python :: 3.3',
43 'Programming Language :: Python :: 3.4',
44 'Programming Language :: Python :: 3.5',
45 'Programming Language :: Python :: 3.6',
46 'Programming Language :: Python :: Implementation :: CPython',
47 'Programming Language :: Python :: Implementation :: PyPy',
48 'Topic :: Software Development :: Libraries :: Python Modules',
49 'Topic :: Software Development :: Testing',
50 'Topic :: Utilities',
51 'License :: OSI Approved :: MIT License'
52 ],
53 keywords='faker fixtures data test mock generator',
54 author='joke2k',
55 author_email='[email protected]',
56 url='https://github.com/joke2k/faker',
57 license='MIT License',
58 packages=find_packages(exclude=["docs", "tests", "tests.*"]),
59 platforms=["any"],
60 test_suite='tests',
61 zip_safe=zip_safe,
62 install_requires=[
63 "python-dateutil>=2.4",
64 "six",
65 ],
66 extras_require={
67 ':python_version=="2.7"': [
68 'ipaddress',
69 ],
70 ':python_version=="3.0"': [
71 'importlib',
72 ],
73 ':python_version=="3.2"': [
74 'ipaddress',
75 ],
76 }
77 )
78
[end of setup.py]
[start of faker/generator.py]
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4
5 import re
6 import random
7
8
9 _re_token = re.compile(r'\{\{(\s?)(\w+)(\s?)\}\}')
10 random = random.Random()
11
12
13 class Generator(object):
14
15 __config = {}
16
17 def __init__(self, **config):
18 self.providers = []
19 self.__config = dict(
20 list(self.__config.items()) + list(config.items()))
21
22 def add_provider(self, provider):
23
24 if type(provider) is type:
25 provider = provider(self)
26
27 self.providers.insert(0, provider)
28
29 for method_name in dir(provider):
30 # skip 'private' method
31 if method_name.startswith('_'):
32 continue
33
34 faker_function = getattr(provider, method_name)
35
36 if hasattr(faker_function, '__call__') or \
37 isinstance(faker_function, (classmethod, staticmethod)):
38 # add all faker method to generator
39 self.set_formatter(method_name, faker_function)
40
41 def provider(self, name):
42 try:
43 lst = [p for p in self.get_providers()
44 if p.__provider__ == name.lower()]
45 return lst[0]
46 except IndexError:
47 return None
48
49 def get_providers(self):
50 """Returns added providers."""
51 return self.providers
52
53 @property
54 def random(self):
55 return random
56
57 def seed(self, seed=None):
58 """Calls random.seed"""
59 random.seed(seed)
60
61 def format(self, formatter, *args, **kwargs):
62 """
63 This is a secure way to make a fake from another Provider.
64 """
65 # TODO: data export?
66 return self.get_formatter(formatter)(*args, **kwargs)
67
68 def get_formatter(self, formatter):
69 try:
70 return getattr(self, formatter)
71 except AttributeError:
72 raise AttributeError('Unknown formatter "{0}"'.format(formatter))
73
74 def set_formatter(self, name, method):
75 """
76 This method adds a provider method to generator.
77 Override this method to add some decoration or logging stuff.
78 """
79 setattr(self, name, method)
80
81 def parse(self, text):
82 """
83 Replaces tokens (like '{{ tokenName }}' or '{{tokenName}}')
84 with the result from the token method call.
85 """
86 return _re_token.sub(self.__format_token, text)
87
88 def __format_token(self, matches):
89 formatter = list(matches.groups())
90 formatter[1] = self.format(formatter[1])
91 return ''.join(formatter)
92
[end of faker/generator.py]
[start of faker/factory.py]
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4 from __future__ import absolute_import
5
6 from importlib import import_module
7 import locale as pylocale
8
9 from faker import Generator
10 from faker.config import DEFAULT_LOCALE, PROVIDERS, AVAILABLE_LOCALES
11 from faker.utils.loading import list_module
12
13
14 class Factory(object):
15
16 @classmethod
17 def create(cls, locale=None, providers=None, generator=None, includes=None, **config):
18 if includes is None:
19 includes = []
20
21 # fix locale to package name
22 locale = locale.replace('-', '_') if locale else DEFAULT_LOCALE
23 locale = pylocale.normalize(locale).split('.')[0]
24 if locale not in AVAILABLE_LOCALES:
25 msg = 'Invalid configuration for faker locale "{0}"'.format(locale)
26 raise AttributeError(msg)
27
28 providers = providers or PROVIDERS
29
30 providers += includes
31
32 faker = generator or Generator(**config)
33
34 for prov_name in providers:
35 if prov_name == 'faker.providers':
36 continue
37
38 prov_cls, lang_found = cls._get_provider_class(prov_name, locale)
39 provider = prov_cls(faker)
40 provider.__provider__ = prov_name
41 provider.__lang__ = lang_found
42 faker.add_provider(provider)
43
44 return faker
45
46 @classmethod
47 def _get_provider_class(cls, provider, locale=''):
48
49 provider_class = cls._find_provider_class(provider, locale)
50
51 if provider_class:
52 return provider_class, locale
53
54 if locale and locale != DEFAULT_LOCALE:
55 # fallback to default locale
56 provider_class = cls._find_provider_class(provider, DEFAULT_LOCALE)
57 if provider_class:
58 return provider_class, DEFAULT_LOCALE
59
60 # fallback to no locale
61 provider_class = cls._find_provider_class(provider)
62 if provider_class:
63 return provider_class, None
64
65 msg = 'Unable to find provider "{0}" with locale "{1}"'.format(
66 provider, locale)
67 raise ValueError(msg)
68
69 @classmethod
70 def _find_provider_class(cls, provider_path, locale=None):
71 provider_module = import_module(provider_path)
72
73 if getattr(provider_module, 'localized', False):
74 available_locales = list_module(provider_module)
75 if not locale or locale not in available_locales:
76 locale = getattr(provider_module, 'default_locale', DEFAULT_LOCALE)
77
78 path = "{provider_path}.{locale}".format(
79 provider_path=provider_path,
80 locale=locale,
81 )
82 provider_module = import_module(path)
83 else:
84 if locale is not None:
85 provider_module = import_module(provider_path)
86
87 return provider_module.Provider
88
[end of faker/factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/factory.py b/faker/factory.py
--- a/faker/factory.py
+++ b/faker/factory.py
@@ -25,6 +25,7 @@
msg = 'Invalid configuration for faker locale "{0}"'.format(locale)
raise AttributeError(msg)
+ config['locale'] = locale
providers = providers or PROVIDERS
providers += includes
diff --git a/faker/generator.py b/faker/generator.py
--- a/faker/generator.py
+++ b/faker/generator.py
@@ -69,7 +69,15 @@
try:
return getattr(self, formatter)
except AttributeError:
- raise AttributeError('Unknown formatter "{0}"'.format(formatter))
+ if 'locale' in self.__config:
+ msg = 'Unknown formatter "{0}" with locale "{1}"'.format(
+ formatter, self.__config['locale']
+ )
+ else:
+ raise AttributeError('Unknown formatter "{0}"'.format(
+ formatter
+ ))
+ raise AttributeError(msg)
def set_formatter(self, name, method):
"""
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,8 +60,10 @@
test_suite='tests',
zip_safe=zip_safe,
install_requires=[
+ "email_validator>=1.0.2",
"python-dateutil>=2.4",
"six",
+ "ukpostcodeparser>=1.1.1",
],
extras_require={
':python_version=="2.7"': [
|
{"golden_diff": "diff --git a/faker/factory.py b/faker/factory.py\n--- a/faker/factory.py\n+++ b/faker/factory.py\n@@ -25,6 +25,7 @@\n msg = 'Invalid configuration for faker locale \"{0}\"'.format(locale)\n raise AttributeError(msg)\n \n+ config['locale'] = locale\n providers = providers or PROVIDERS\n \n providers += includes\ndiff --git a/faker/generator.py b/faker/generator.py\n--- a/faker/generator.py\n+++ b/faker/generator.py\n@@ -69,7 +69,15 @@\n try:\n return getattr(self, formatter)\n except AttributeError:\n- raise AttributeError('Unknown formatter \"{0}\"'.format(formatter))\n+ if 'locale' in self.__config:\n+ msg = 'Unknown formatter \"{0}\" with locale \"{1}\"'.format(\n+ formatter, self.__config['locale']\n+ )\n+ else:\n+ raise AttributeError('Unknown formatter \"{0}\"'.format(\n+ formatter\n+ ))\n+ raise AttributeError(msg)\n \n def set_formatter(self, name, method):\n \"\"\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,8 +60,10 @@\n test_suite='tests',\n zip_safe=zip_safe,\n install_requires=[\n+ \"email_validator>=1.0.2\",\n \"python-dateutil>=2.4\",\n \"six\",\n+ \"ukpostcodeparser>=1.1.1\",\n ],\n extras_require={\n ':python_version==\"2.7\"': [\n", "issue": "Refactor Profile to be used with locale: how\nI got this idea but i'm not sure it would be the simplest: the actual profile.py becomes something like \"internal_profile.py\", its methods are renamed \"internal_simple_profile()\" and \"internal_profile()\", and is removed from the list of standard providers. Then we will have a standard profile.py that simply calls self.generator.internal_profile(). For each locale instead, we will be able to add more logic, for example to customize field names and eventually values. \n\nDo you think there would be a simpler way to do it?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# coding=utf-8\n\nimport os\nimport io\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = io.open(os.path.join(here, 'README.rst'), encoding=\"utf8\").read()\n\n\nversion = '0.7.18'\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n zip_safe = hasattr(zipimport.zipimporter, \"iter_modules\") or \\\n zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\nexcept (ImportError, AttributeError):\n zip_safe = False\n\nsetup(\n name='Faker',\n version=version,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n 'console_scripts': ['faker=faker.cli:execute_from_command_line'],\n },\n classifiers=[\n # See https://pypi.python.org/pypi?%3Aaction=list_classifiers\n 'Development Status :: 3 - Alpha',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Testing',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: 
MIT License'\n ],\n keywords='faker fixtures data test mock generator',\n author='joke2k',\n author_email='[email protected]',\n url='https://github.com/joke2k/faker',\n license='MIT License',\n packages=find_packages(exclude=[\"docs\", \"tests\", \"tests.*\"]),\n platforms=[\"any\"],\n test_suite='tests',\n zip_safe=zip_safe,\n install_requires=[\n \"python-dateutil>=2.4\",\n \"six\",\n ],\n extras_require={\n ':python_version==\"2.7\"': [\n 'ipaddress',\n ],\n ':python_version==\"3.0\"': [\n 'importlib',\n ],\n ':python_version==\"3.2\"': [\n 'ipaddress',\n ],\n }\n)\n", "path": "setup.py"}, {"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nimport re\nimport random\n\n\n_re_token = re.compile(r'\\{\\{(\\s?)(\\w+)(\\s?)\\}\\}')\nrandom = random.Random()\n\n\nclass Generator(object):\n\n __config = {}\n\n def __init__(self, **config):\n self.providers = []\n self.__config = dict(\n list(self.__config.items()) + list(config.items()))\n\n def add_provider(self, provider):\n\n if type(provider) is type:\n provider = provider(self)\n\n self.providers.insert(0, provider)\n\n for method_name in dir(provider):\n # skip 'private' method\n if method_name.startswith('_'):\n continue\n\n faker_function = getattr(provider, method_name)\n\n if hasattr(faker_function, '__call__') or \\\n isinstance(faker_function, (classmethod, staticmethod)):\n # add all faker method to generator\n self.set_formatter(method_name, faker_function)\n\n def provider(self, name):\n try:\n lst = [p for p in self.get_providers()\n if p.__provider__ == name.lower()]\n return lst[0]\n except IndexError:\n return None\n\n def get_providers(self):\n \"\"\"Returns added providers.\"\"\"\n return self.providers\n\n @property\n def random(self):\n return random\n\n def seed(self, seed=None):\n \"\"\"Calls random.seed\"\"\"\n random.seed(seed)\n\n def format(self, formatter, *args, **kwargs):\n \"\"\"\n This is a secure way to make a fake from another Provider.\n \"\"\"\n # TODO: data export?\n return self.get_formatter(formatter)(*args, **kwargs)\n\n def get_formatter(self, formatter):\n try:\n return getattr(self, formatter)\n except AttributeError:\n raise AttributeError('Unknown formatter \"{0}\"'.format(formatter))\n\n def set_formatter(self, name, method):\n \"\"\"\n This method adds a provider method to generator.\n Override this method to add some decoration or logging stuff.\n \"\"\"\n setattr(self, name, method)\n\n def parse(self, text):\n \"\"\"\n Replaces tokens (like '{{ tokenName }}' or '{{tokenName}}')\n with the result from the token method call.\n \"\"\"\n return _re_token.sub(self.__format_token, text)\n\n def __format_token(self, matches):\n formatter = list(matches.groups())\n formatter[1] = self.format(formatter[1])\n return ''.join(formatter)\n", "path": "faker/generator.py"}, {"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nfrom __future__ import absolute_import\n\nfrom importlib import import_module\nimport locale as pylocale\n\nfrom faker import Generator\nfrom faker.config import DEFAULT_LOCALE, PROVIDERS, AVAILABLE_LOCALES\nfrom faker.utils.loading import list_module\n\n\nclass Factory(object):\n\n @classmethod\n def create(cls, locale=None, providers=None, generator=None, includes=None, **config):\n if includes is None:\n includes = []\n\n # fix locale to package name\n locale = locale.replace('-', '_') if locale else DEFAULT_LOCALE\n locale = pylocale.normalize(locale).split('.')[0]\n if locale not in AVAILABLE_LOCALES:\n msg = 'Invalid configuration for 
faker locale \"{0}\"'.format(locale)\n raise AttributeError(msg)\n\n providers = providers or PROVIDERS\n\n providers += includes\n\n faker = generator or Generator(**config)\n\n for prov_name in providers:\n if prov_name == 'faker.providers':\n continue\n\n prov_cls, lang_found = cls._get_provider_class(prov_name, locale)\n provider = prov_cls(faker)\n provider.__provider__ = prov_name\n provider.__lang__ = lang_found\n faker.add_provider(provider)\n\n return faker\n\n @classmethod\n def _get_provider_class(cls, provider, locale=''):\n\n provider_class = cls._find_provider_class(provider, locale)\n\n if provider_class:\n return provider_class, locale\n\n if locale and locale != DEFAULT_LOCALE:\n # fallback to default locale\n provider_class = cls._find_provider_class(provider, DEFAULT_LOCALE)\n if provider_class:\n return provider_class, DEFAULT_LOCALE\n\n # fallback to no locale\n provider_class = cls._find_provider_class(provider)\n if provider_class:\n return provider_class, None\n\n msg = 'Unable to find provider \"{0}\" with locale \"{1}\"'.format(\n provider, locale)\n raise ValueError(msg)\n\n @classmethod\n def _find_provider_class(cls, provider_path, locale=None):\n provider_module = import_module(provider_path)\n\n if getattr(provider_module, 'localized', False):\n available_locales = list_module(provider_module)\n if not locale or locale not in available_locales:\n locale = getattr(provider_module, 'default_locale', DEFAULT_LOCALE)\n\n path = \"{provider_path}.{locale}\".format(\n provider_path=provider_path,\n locale=locale,\n )\n provider_module = import_module(path)\n else:\n if locale is not None:\n provider_module = import_module(provider_path)\n\n return provider_module.Provider\n", "path": "faker/factory.py"}]}
| 2,854 | 352 |
gh_patches_debug_29599
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-293
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Relative links in pages below the top-level document folder point to the wrong path
Relative image links in pages below the top-level document folder seem to point to the wrong path.
## Reproduction
I've created a minimal reproduction case:
https://github.com/jdhoek/mkdocs-test-relative-image-links
The reproduction case contains this file structure:
```
├── docs
│ ├── index.md
│ ├── subdir
│ │ ├── sub.md
│ │ └── sub.png
│ └── toplevel.png
└── mkdocs.yml
```
`mkdocs serve` and `mkdocs build` correctly process the Markdown files in the top-level folder `docs_dir`, but with the Markdown file in the sub-folder the relative image links are broken.
</issue>
<code>
[start of mkdocs/utils.py]
1 # coding: utf-8
2
3 """
4 Standalone file utils.
5
6 Nothing in this module should have an knowledge of config or the layout
7 and structure of the site and pages in the site.
8 """
9
10 import os
11 import shutil
12
13 from mkdocs.compat import urlparse
14
15
16 def copy_file(source_path, output_path):
17 """
18 Copy source_path to output_path, making sure any parent directories exist.
19 """
20 output_dir = os.path.dirname(output_path)
21 if not os.path.exists(output_dir):
22 os.makedirs(output_dir)
23 shutil.copy(source_path, output_path)
24
25
26 def write_file(content, output_path):
27 """
28 Write content to output_path, making sure any parent directories exist.
29 """
30 output_dir = os.path.dirname(output_path)
31 if not os.path.exists(output_dir):
32 os.makedirs(output_dir)
33 open(output_path, 'wb').write(content)
34
35
36 def clean_directory(directory):
37 """
38 Remove the content of a directory recursively but not the directory itself.
39 """
40 if os.path.exists(directory):
41 for entry in os.listdir(directory):
42 path = os.path.join(directory, entry)
43 if os.path.isdir(path):
44 shutil.rmtree(path, True)
45 else:
46 os.unlink(path)
47
48
49 def copy_media_files(from_dir, to_dir):
50 """
51 Recursively copy all files except markdown and HTML into another directory.
52 """
53 for (source_dir, dirnames, filenames) in os.walk(from_dir):
54 relative_path = os.path.relpath(source_dir, from_dir)
55 output_dir = os.path.normpath(os.path.join(to_dir, relative_path))
56
57 # Filter filenames starting with a '.'
58 filenames = [f for f in filenames if not f.startswith('.')]
59
60 # Filter the dirnames that start with a '.' and update the list in
61 # place to prevent us walking these.
62 dirnames[:] = [d for d in dirnames if not d.startswith('.')]
63
64 for filename in filenames:
65 if not is_markdown_file(filename) and not is_html_file(filename):
66 source_path = os.path.join(source_dir, filename)
67 output_path = os.path.join(output_dir, filename)
68 copy_file(source_path, output_path)
69
70
71 def get_html_path(path):
72 """
73 Map a source file path to an output html path.
74
75 Paths like 'index.md' will be converted to 'index.html'
76 Paths like 'about.md' will be converted to 'about/index.html'
77 Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'
78 """
79 path = os.path.splitext(path)[0]
80 if os.path.basename(path) == 'index':
81 return path + '.html'
82 return "/".join((path, 'index.html'))
83
84
85 def get_url_path(path, use_directory_urls=True):
86 """
87 Map a source file path to an output html path.
88
89 Paths like 'index.md' will be converted to '/'
90 Paths like 'about.md' will be converted to '/about/'
91 Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'
92
93 If `use_directory_urls` is `False`, returned URLs will include the a trailing
94 `index.html` rather than just returning the directory path.
95 """
96 path = get_html_path(path)
97 url = '/' + path.replace(os.path.sep, '/')
98 if use_directory_urls:
99 return url[:-len('index.html')]
100 return url
101
102
103 def is_homepage(path):
104 return os.path.splitext(path)[0] == 'index'
105
106
107 def is_markdown_file(path):
108 """
109 Return True if the given file path is a Markdown file.
110
111 http://superuser.com/questions/249436/file-extension-for-markdown-files
112 """
113 ext = os.path.splitext(path)[1].lower()
114 return ext in [
115 '.markdown',
116 '.mdown',
117 '.mkdn',
118 '.mkd',
119 '.md',
120 ]
121
122
123 def is_css_file(path):
124 """
125 Return True if the given file path is a CSS file.
126 """
127 ext = os.path.splitext(path)[1].lower()
128 return ext in [
129 '.css',
130 ]
131
132
133 def is_javascript_file(path):
134 """
135 Return True if the given file path is a Javascript file.
136 """
137 ext = os.path.splitext(path)[1].lower()
138 return ext in [
139 '.js',
140 '.javascript'
141 ]
142
143
144 def is_html_file(path):
145 """
146 Return True if the given file path is an HTML file.
147 """
148 ext = os.path.splitext(path)[1].lower()
149 return ext in [
150 '.html',
151 '.htm',
152 ]
153
154
155 def create_media_urls(nav, url_list):
156 """
157 Return a list of URLs that have been processed correctly for inclusion in a page.
158 """
159 final_urls = []
160 for url in url_list:
161 # Allow links to fully qualified URL's
162 parsed = urlparse(url)
163 if parsed.netloc:
164 final_urls.append(url)
165 else:
166 relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)
167 final_urls.append(relative_url)
168 return final_urls
169
[end of mkdocs/utils.py]
[start of mkdocs/relative_path_ext.py]
1 """
2 # Relative Path Markdown Extension
3
4 During the MkDocs build we rewrite URLs that link to local
5 Markdown or media files. Using the following pages configuration
6 we can look at how the output is changed.
7
8 pages:
9 - ['index.md']
10 - ['tutorial/install.md']
11 - ['tutorial/intro.md']
12
13 ## Markdown URLs
14
15 When linking from `install.md` to `intro.md` the link would
16 simply be `[intro](intro.md)`. However, when we build
17 `install.md` we place it in a directory to create nicer URLs.
18 This means that the path to `intro.md` becomes `../intro/`
19
20 ## Media URLs
21
22 To make it easier to work with media files and store them all
23 under one directory we re-write those to all be based on the
24 root. So, with the following markdown to add an image.
25
26 
27
28 The output would depend on the location of the Markdown file it
29 was added too.
30
31 Source file | Generated Path | Image Path |
32 ------------------- | ----------------- | ---------------------------- |
33 index.md | / | ./img/initial-layout.png |
34 tutorial/install.md | tutorial/install/ | ../img/initial-layout.png |
35 tutorial/intro.md | tutorial/intro/ | ../../img/initial-layout.png |
36
37 """
38 from markdown.extensions import Extension
39 from markdown.treeprocessors import Treeprocessor
40
41 from mkdocs import utils
42 from mkdocs.compat import urlparse, urlunparse
43
44
45 def _iter(node):
46 # TODO: Remove when dropping Python 2.6. Replace this
47 # function call with note.iter()
48 return [node] + node.findall('.//*')
49
50
51 def path_to_url(url, nav):
52 scheme, netloc, path, query, query, fragment = urlparse(url)
53
54 if scheme or netloc or not path:
55 # Ignore URLs unless they are a relative link to a markdown file.
56 return url
57
58 if nav and not utils.is_markdown_file(path):
59 path = utils.create_media_urls(nav, [path])[0]
60 elif nav:
61 # If the site navigation has been provided, then validate
62 # the internal hyperlink, making sure the target actually exists.
63 target_file = nav.file_context.make_absolute(path)
64 if target_file not in nav.source_files:
65 source_file = nav.file_context.current_file
66 msg = (
67 'The page "%s" contained a hyperlink to "%s" which '
68 'is not listed in the "pages" configuration.'
69 )
70 assert False, msg % (source_file, target_file)
71 path = utils.get_url_path(target_file, nav.use_directory_urls)
72 path = nav.url_context.make_relative(path)
73 else:
74 path = utils.get_url_path(path).lstrip('/')
75
76 # Convert the .md hyperlink to a relative hyperlink to the HTML page.
77 url = urlunparse((scheme, netloc, path, query, query, fragment))
78 return url
79
80
81 class RelativePathTreeprocessor(Treeprocessor):
82
83 def __init__(self, site_navigation):
84 self.site_navigation = site_navigation
85
86 def run(self, root):
87 """Update urls on anchors and images to make them relative
88
89 Iterates through the full document tree looking for specific
90 tags and then makes them relative based on the site navigation
91 """
92
93 for element in _iter(root):
94
95 if element.tag == 'a':
96 key = 'href'
97 elif element.tag == 'img':
98 key = 'src'
99 else:
100 continue
101
102 url = element.get(key)
103 new_url = path_to_url(url, self.site_navigation)
104 element.set(key, new_url)
105
106 return root
107
108
109 class RelativePathExtension(Extension):
110 """
111 The Extension class is what we pass to markdown, it then
112 registers the Treeprocessor.
113 """
114
115 def __init__(self, site_navigation):
116 self.site_navigation = site_navigation
117
118 def extendMarkdown(self, md, md_globals):
119 relpath = RelativePathTreeprocessor(self.site_navigation)
120 md.treeprocessors.add("relpath", relpath, "_end")
121
[end of mkdocs/relative_path_ext.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mkdocs/relative_path_ext.py b/mkdocs/relative_path_ext.py
--- a/mkdocs/relative_path_ext.py
+++ b/mkdocs/relative_path_ext.py
@@ -56,7 +56,7 @@
return url
if nav and not utils.is_markdown_file(path):
- path = utils.create_media_urls(nav, [path])[0]
+ path = utils.create_relative_media_url(nav, path)
elif nav:
# If the site navigation has been provided, then validate
# the internal hyperlink, making sure the target actually exists.
diff --git a/mkdocs/utils.py b/mkdocs/utils.py
--- a/mkdocs/utils.py
+++ b/mkdocs/utils.py
@@ -166,3 +166,41 @@
relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)
final_urls.append(relative_url)
return final_urls
+
+
+def create_relative_media_url(nav, url):
+ """
+ For a current page, create a relative url based on the given URL.
+
+ On index.md (which becomes /index.html):
+ image.png -> ./image.png
+ /image.png -> ./image.png
+
+ on sub/page.md (which becomes /sub/page/index.html):
+ image.png -> ../image.png
+ /image.png -> ../../image.png
+
+ """
+
+ # Allow links to fully qualified URL's
+ parsed = urlparse(url)
+ if parsed.netloc:
+ return url
+
+ # If the URL we are looking at starts with a /, then it should be
+ # considered as absolute and will be 'relative' to the root.
+ if url.startswith('/'):
+ base = '/'
+ url = url[1:]
+ else:
+ base = nav.url_context.base_path
+
+ relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)
+
+ # TODO: Fix this, this is a hack. Relative urls are not being calculated
+ # correctly for images in the same directory as the markdown. I think this
+ # is due to us moving it into a directory with index.html, but I'm not sure
+ if nav.url_context.base_path is not '/' and relative_url.startswith("./"):
+ relative_url = ".%s" % relative_url
+
+ return relative_url
|
{"golden_diff": "diff --git a/mkdocs/relative_path_ext.py b/mkdocs/relative_path_ext.py\n--- a/mkdocs/relative_path_ext.py\n+++ b/mkdocs/relative_path_ext.py\n@@ -56,7 +56,7 @@\n return url\n \n if nav and not utils.is_markdown_file(path):\n- path = utils.create_media_urls(nav, [path])[0]\n+ path = utils.create_relative_media_url(nav, path)\n elif nav:\n # If the site navigation has been provided, then validate\n # the internal hyperlink, making sure the target actually exists.\ndiff --git a/mkdocs/utils.py b/mkdocs/utils.py\n--- a/mkdocs/utils.py\n+++ b/mkdocs/utils.py\n@@ -166,3 +166,41 @@\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n+\n+\n+def create_relative_media_url(nav, url):\n+ \"\"\"\n+ For a current page, create a relative url based on the given URL.\n+\n+ On index.md (which becomes /index.html):\n+ image.png -> ./image.png\n+ /image.png -> ./image.png\n+\n+ on sub/page.md (which becomes /sub/page/index.html):\n+ image.png -> ../image.png\n+ /image.png -> ../../image.png\n+\n+ \"\"\"\n+\n+ # Allow links to fully qualified URL's\n+ parsed = urlparse(url)\n+ if parsed.netloc:\n+ return url\n+\n+ # If the URL we are looking at starts with a /, then it should be\n+ # considered as absolute and will be 'relative' to the root.\n+ if url.startswith('/'):\n+ base = '/'\n+ url = url[1:]\n+ else:\n+ base = nav.url_context.base_path\n+\n+ relative_url = '%s/%s' % (nav.url_context.make_relative(base), url)\n+\n+ # TODO: Fix this, this is a hack. Relative urls are not being calculated\n+ # correctly for images in the same directory as the markdown. I think this\n+ # is due to us moving it into a directory with index.html, but I'm not sure\n+ if nav.url_context.base_path is not '/' and relative_url.startswith(\"./\"):\n+ relative_url = \".%s\" % relative_url\n+\n+ return relative_url\n", "issue": "Relative links in pages below the top-level document folder point to the wrong path\nRelative image links in pages below the top-level document folder seem to point to the wrong path.\n## Reproduction\n\nI've created a minimal reproduction case:\nhttps://github.com/jdhoek/mkdocs-test-relative-image-links\n\nThe reproduction case contains this file structure:\n\n```\n\u251c\u2500\u2500 docs\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index.md\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 subdir\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 sub.md\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 sub.png\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 toplevel.png\n\u2514\u2500\u2500 mkdocs.yml\n```\n\n`mkdocs serve` and `mkdocs build` correctly process the Markdown files in the top-level folder `docs_dir`, but with the Markdown file in the sub-folder the relative image links are broken.\n\n\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nStandalone file utils.\n\nNothing in this module should have an knowledge of config or the layout\nand structure of the site and pages in the site.\n\"\"\"\n\nimport os\nimport shutil\n\nfrom mkdocs.compat import urlparse\n\n\ndef copy_file(source_path, output_path):\n \"\"\"\n Copy source_path to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n shutil.copy(source_path, output_path)\n\n\ndef write_file(content, output_path):\n \"\"\"\n Write content to output_path, making sure any parent directories exist.\n \"\"\"\n output_dir = 
os.path.dirname(output_path)\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n open(output_path, 'wb').write(content)\n\n\ndef clean_directory(directory):\n \"\"\"\n Remove the content of a directory recursively but not the directory itself.\n \"\"\"\n if os.path.exists(directory):\n for entry in os.listdir(directory):\n path = os.path.join(directory, entry)\n if os.path.isdir(path):\n shutil.rmtree(path, True)\n else:\n os.unlink(path)\n\n\ndef copy_media_files(from_dir, to_dir):\n \"\"\"\n Recursively copy all files except markdown and HTML into another directory.\n \"\"\"\n for (source_dir, dirnames, filenames) in os.walk(from_dir):\n relative_path = os.path.relpath(source_dir, from_dir)\n output_dir = os.path.normpath(os.path.join(to_dir, relative_path))\n\n # Filter filenames starting with a '.'\n filenames = [f for f in filenames if not f.startswith('.')]\n\n # Filter the dirnames that start with a '.' and update the list in\n # place to prevent us walking these.\n dirnames[:] = [d for d in dirnames if not d.startswith('.')]\n\n for filename in filenames:\n if not is_markdown_file(filename) and not is_html_file(filename):\n source_path = os.path.join(source_dir, filename)\n output_path = os.path.join(output_dir, filename)\n copy_file(source_path, output_path)\n\n\ndef get_html_path(path):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to 'index.html'\n Paths like 'about.md' will be converted to 'about/index.html'\n Paths like 'api-guide/core.md' will be converted to 'api-guide/core/index.html'\n \"\"\"\n path = os.path.splitext(path)[0]\n if os.path.basename(path) == 'index':\n return path + '.html'\n return \"/\".join((path, 'index.html'))\n\n\ndef get_url_path(path, use_directory_urls=True):\n \"\"\"\n Map a source file path to an output html path.\n\n Paths like 'index.md' will be converted to '/'\n Paths like 'about.md' will be converted to '/about/'\n Paths like 'api-guide/core.md' will be converted to '/api-guide/core/'\n\n If `use_directory_urls` is `False`, returned URLs will include the a trailing\n `index.html` rather than just returning the directory path.\n \"\"\"\n path = get_html_path(path)\n url = '/' + path.replace(os.path.sep, '/')\n if use_directory_urls:\n return url[:-len('index.html')]\n return url\n\n\ndef is_homepage(path):\n return os.path.splitext(path)[0] == 'index'\n\n\ndef is_markdown_file(path):\n \"\"\"\n Return True if the given file path is a Markdown file.\n\n http://superuser.com/questions/249436/file-extension-for-markdown-files\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.markdown',\n '.mdown',\n '.mkdn',\n '.mkd',\n '.md',\n ]\n\n\ndef is_css_file(path):\n \"\"\"\n Return True if the given file path is a CSS file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.css',\n ]\n\n\ndef is_javascript_file(path):\n \"\"\"\n Return True if the given file path is a Javascript file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.js',\n '.javascript'\n ]\n\n\ndef is_html_file(path):\n \"\"\"\n Return True if the given file path is an HTML file.\n \"\"\"\n ext = os.path.splitext(path)[1].lower()\n return ext in [\n '.html',\n '.htm',\n ]\n\n\ndef create_media_urls(nav, url_list):\n \"\"\"\n Return a list of URLs that have been processed correctly for inclusion in a page.\n \"\"\"\n final_urls = []\n for url in url_list:\n # Allow links to fully qualified URL's\n parsed = urlparse(url)\n if parsed.netloc:\n 
final_urls.append(url)\n else:\n relative_url = '%s/%s' % (nav.url_context.make_relative('/'), url)\n final_urls.append(relative_url)\n return final_urls\n", "path": "mkdocs/utils.py"}, {"content": "\"\"\"\n# Relative Path Markdown Extension\n\nDuring the MkDocs build we rewrite URLs that link to local\nMarkdown or media files. Using the following pages configuration\nwe can look at how the output is changed.\n\n pages:\n - ['index.md']\n - ['tutorial/install.md']\n - ['tutorial/intro.md']\n\n## Markdown URLs\n\nWhen linking from `install.md` to `intro.md` the link would\nsimply be `[intro](intro.md)`. However, when we build\n`install.md` we place it in a directory to create nicer URLs.\nThis means that the path to `intro.md` becomes `../intro/`\n\n## Media URLs\n\nTo make it easier to work with media files and store them all\nunder one directory we re-write those to all be based on the\nroot. So, with the following markdown to add an image.\n\n \n\nThe output would depend on the location of the Markdown file it\nwas added too.\n\nSource file | Generated Path | Image Path |\n------------------- | ----------------- | ---------------------------- |\nindex.md | / | ./img/initial-layout.png |\ntutorial/install.md | tutorial/install/ | ../img/initial-layout.png |\ntutorial/intro.md | tutorial/intro/ | ../../img/initial-layout.png |\n\n\"\"\"\nfrom markdown.extensions import Extension\nfrom markdown.treeprocessors import Treeprocessor\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse, urlunparse\n\n\ndef _iter(node):\n # TODO: Remove when dropping Python 2.6. Replace this\n # function call with note.iter()\n return [node] + node.findall('.//*')\n\n\ndef path_to_url(url, nav):\n scheme, netloc, path, query, query, fragment = urlparse(url)\n\n if scheme or netloc or not path:\n # Ignore URLs unless they are a relative link to a markdown file.\n return url\n\n if nav and not utils.is_markdown_file(path):\n path = utils.create_media_urls(nav, [path])[0]\n elif nav:\n # If the site navigation has been provided, then validate\n # the internal hyperlink, making sure the target actually exists.\n target_file = nav.file_context.make_absolute(path)\n if target_file not in nav.source_files:\n source_file = nav.file_context.current_file\n msg = (\n 'The page \"%s\" contained a hyperlink to \"%s\" which '\n 'is not listed in the \"pages\" configuration.'\n )\n assert False, msg % (source_file, target_file)\n path = utils.get_url_path(target_file, nav.use_directory_urls)\n path = nav.url_context.make_relative(path)\n else:\n path = utils.get_url_path(path).lstrip('/')\n\n # Convert the .md hyperlink to a relative hyperlink to the HTML page.\n url = urlunparse((scheme, netloc, path, query, query, fragment))\n return url\n\n\nclass RelativePathTreeprocessor(Treeprocessor):\n\n def __init__(self, site_navigation):\n self.site_navigation = site_navigation\n\n def run(self, root):\n \"\"\"Update urls on anchors and images to make them relative\n\n Iterates through the full document tree looking for specific\n tags and then makes them relative based on the site navigation\n \"\"\"\n\n for element in _iter(root):\n\n if element.tag == 'a':\n key = 'href'\n elif element.tag == 'img':\n key = 'src'\n else:\n continue\n\n url = element.get(key)\n new_url = path_to_url(url, self.site_navigation)\n element.set(key, new_url)\n\n return root\n\n\nclass RelativePathExtension(Extension):\n \"\"\"\n The Extension class is what we pass to markdown, it then\n registers the Treeprocessor.\n \"\"\"\n\n def 
__init__(self, site_navigation):\n self.site_navigation = site_navigation\n\n def extendMarkdown(self, md, md_globals):\n relpath = RelativePathTreeprocessor(self.site_navigation)\n md.treeprocessors.add(\"relpath\", relpath, \"_end\")\n", "path": "mkdocs/relative_path_ext.py"}]}
| 3,428 | 534 |
gh_patches_debug_37325
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1790
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecate posixemulation
It's called out as "not a public interface" in the docstring, and looks like it was only there to support `contrib.sessions`, which has moved to `secure-cookie` now. Move it there if it's still needed, remove it here.
</issue>
<code>
[start of src/werkzeug/posixemulation.py]
1 """A ``rename`` function that follows POSIX semantics. If the target
2 file already exists it will be replaced without asking.
3
4 This is not a public interface.
5 """
6 import errno
7 import os
8 import random
9 import sys
10 import time
11
12 from ._internal import _to_str
13 from .filesystem import get_filesystem_encoding
14
15 can_rename_open_file = False
16
17 if os.name == "nt":
18 try:
19 import ctypes
20
21 _MOVEFILE_REPLACE_EXISTING = 0x1
22 _MOVEFILE_WRITE_THROUGH = 0x8
23 _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore
24
25 def _rename(src, dst):
26 src = _to_str(src, get_filesystem_encoding())
27 dst = _to_str(dst, get_filesystem_encoding())
28 if _rename_atomic(src, dst):
29 return True
30 retry = 0
31 rv = False
32 while not rv and retry < 100:
33 rv = _MoveFileEx(
34 src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH
35 )
36 if not rv:
37 time.sleep(0.001)
38 retry += 1
39 return rv
40
41 # new in Vista and Windows Server 2008
42 _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore
43 _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore
44 _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore
45 _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore
46 can_rename_open_file = True
47
48 def _rename_atomic(src, dst):
49 ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, "Werkzeug rename")
50 if ta == -1:
51 return False
52 try:
53 retry = 0
54 rv = False
55 while not rv and retry < 100:
56 rv = _MoveFileTransacted(
57 src,
58 dst,
59 None,
60 None,
61 _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,
62 ta,
63 )
64 if rv:
65 rv = _CommitTransaction(ta)
66 break
67 else:
68 time.sleep(0.001)
69 retry += 1
70 return rv
71 finally:
72 _CloseHandle(ta)
73
74 except Exception:
75
76 def _rename(src, dst):
77 return False
78
79 def _rename_atomic(src, dst):
80 return False
81
82 def rename(src, dst):
83 # Try atomic or pseudo-atomic rename
84 if _rename(src, dst):
85 return
86 # Fall back to "move away and replace"
87 try:
88 os.rename(src, dst)
89 except OSError as e:
90 if e.errno != errno.EEXIST:
91 raise
92 old = f"{dst}-{random.randint(0, sys.maxsize):08x}"
93 os.rename(dst, old)
94 os.rename(src, dst)
95 try:
96 os.unlink(old)
97 except Exception:
98 pass
99
100
101 else:
102 rename = os.rename
103 can_rename_open_file = True
104
[end of src/werkzeug/posixemulation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/werkzeug/posixemulation.py b/src/werkzeug/posixemulation.py
deleted file mode 100644
--- a/src/werkzeug/posixemulation.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""A ``rename`` function that follows POSIX semantics. If the target
-file already exists it will be replaced without asking.
-
-This is not a public interface.
-"""
-import errno
-import os
-import random
-import sys
-import time
-
-from ._internal import _to_str
-from .filesystem import get_filesystem_encoding
-
-can_rename_open_file = False
-
-if os.name == "nt":
- try:
- import ctypes
-
- _MOVEFILE_REPLACE_EXISTING = 0x1
- _MOVEFILE_WRITE_THROUGH = 0x8
- _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore
-
- def _rename(src, dst):
- src = _to_str(src, get_filesystem_encoding())
- dst = _to_str(dst, get_filesystem_encoding())
- if _rename_atomic(src, dst):
- return True
- retry = 0
- rv = False
- while not rv and retry < 100:
- rv = _MoveFileEx(
- src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH
- )
- if not rv:
- time.sleep(0.001)
- retry += 1
- return rv
-
- # new in Vista and Windows Server 2008
- _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore
- _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore
- _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore
- _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore
- can_rename_open_file = True
-
- def _rename_atomic(src, dst):
- ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, "Werkzeug rename")
- if ta == -1:
- return False
- try:
- retry = 0
- rv = False
- while not rv and retry < 100:
- rv = _MoveFileTransacted(
- src,
- dst,
- None,
- None,
- _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,
- ta,
- )
- if rv:
- rv = _CommitTransaction(ta)
- break
- else:
- time.sleep(0.001)
- retry += 1
- return rv
- finally:
- _CloseHandle(ta)
-
- except Exception:
-
- def _rename(src, dst):
- return False
-
- def _rename_atomic(src, dst):
- return False
-
- def rename(src, dst):
- # Try atomic or pseudo-atomic rename
- if _rename(src, dst):
- return
- # Fall back to "move away and replace"
- try:
- os.rename(src, dst)
- except OSError as e:
- if e.errno != errno.EEXIST:
- raise
- old = f"{dst}-{random.randint(0, sys.maxsize):08x}"
- os.rename(dst, old)
- os.rename(src, dst)
- try:
- os.unlink(old)
- except Exception:
- pass
-
-
-else:
- rename = os.rename
- can_rename_open_file = True
|
{"golden_diff": "diff --git a/src/werkzeug/posixemulation.py b/src/werkzeug/posixemulation.py\ndeleted file mode 100644\n--- a/src/werkzeug/posixemulation.py\n+++ /dev/null\n@@ -1,103 +0,0 @@\n-\"\"\"A ``rename`` function that follows POSIX semantics. If the target\n-file already exists it will be replaced without asking.\n-\n-This is not a public interface.\n-\"\"\"\n-import errno\n-import os\n-import random\n-import sys\n-import time\n-\n-from ._internal import _to_str\n-from .filesystem import get_filesystem_encoding\n-\n-can_rename_open_file = False\n-\n-if os.name == \"nt\":\n- try:\n- import ctypes\n-\n- _MOVEFILE_REPLACE_EXISTING = 0x1\n- _MOVEFILE_WRITE_THROUGH = 0x8\n- _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore\n-\n- def _rename(src, dst):\n- src = _to_str(src, get_filesystem_encoding())\n- dst = _to_str(dst, get_filesystem_encoding())\n- if _rename_atomic(src, dst):\n- return True\n- retry = 0\n- rv = False\n- while not rv and retry < 100:\n- rv = _MoveFileEx(\n- src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH\n- )\n- if not rv:\n- time.sleep(0.001)\n- retry += 1\n- return rv\n-\n- # new in Vista and Windows Server 2008\n- _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore\n- _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore\n- _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore\n- _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore\n- can_rename_open_file = True\n-\n- def _rename_atomic(src, dst):\n- ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, \"Werkzeug rename\")\n- if ta == -1:\n- return False\n- try:\n- retry = 0\n- rv = False\n- while not rv and retry < 100:\n- rv = _MoveFileTransacted(\n- src,\n- dst,\n- None,\n- None,\n- _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,\n- ta,\n- )\n- if rv:\n- rv = _CommitTransaction(ta)\n- break\n- else:\n- time.sleep(0.001)\n- retry += 1\n- return rv\n- finally:\n- _CloseHandle(ta)\n-\n- except Exception:\n-\n- def _rename(src, dst):\n- return False\n-\n- def _rename_atomic(src, dst):\n- return False\n-\n- def rename(src, dst):\n- # Try atomic or pseudo-atomic rename\n- if _rename(src, dst):\n- return\n- # Fall back to \"move away and replace\"\n- try:\n- os.rename(src, dst)\n- except OSError as e:\n- if e.errno != errno.EEXIST:\n- raise\n- old = f\"{dst}-{random.randint(0, sys.maxsize):08x}\"\n- os.rename(dst, old)\n- os.rename(src, dst)\n- try:\n- os.unlink(old)\n- except Exception:\n- pass\n-\n-\n-else:\n- rename = os.rename\n- can_rename_open_file = True\n", "issue": "deprecate posixemulation\nIt's called out as \"not a public interface\" in the docstring, and looks like it was only there to support `contrib.sessions`, which has moved to `secure-cookie` now. Move it there if it's still needed, remove it here.\n", "before_files": [{"content": "\"\"\"A ``rename`` function that follows POSIX semantics. 
If the target\nfile already exists it will be replaced without asking.\n\nThis is not a public interface.\n\"\"\"\nimport errno\nimport os\nimport random\nimport sys\nimport time\n\nfrom ._internal import _to_str\nfrom .filesystem import get_filesystem_encoding\n\ncan_rename_open_file = False\n\nif os.name == \"nt\":\n try:\n import ctypes\n\n _MOVEFILE_REPLACE_EXISTING = 0x1\n _MOVEFILE_WRITE_THROUGH = 0x8\n _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore\n\n def _rename(src, dst):\n src = _to_str(src, get_filesystem_encoding())\n dst = _to_str(dst, get_filesystem_encoding())\n if _rename_atomic(src, dst):\n return True\n retry = 0\n rv = False\n while not rv and retry < 100:\n rv = _MoveFileEx(\n src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH\n )\n if not rv:\n time.sleep(0.001)\n retry += 1\n return rv\n\n # new in Vista and Windows Server 2008\n _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore\n _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore\n _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore\n _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore\n can_rename_open_file = True\n\n def _rename_atomic(src, dst):\n ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, \"Werkzeug rename\")\n if ta == -1:\n return False\n try:\n retry = 0\n rv = False\n while not rv and retry < 100:\n rv = _MoveFileTransacted(\n src,\n dst,\n None,\n None,\n _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,\n ta,\n )\n if rv:\n rv = _CommitTransaction(ta)\n break\n else:\n time.sleep(0.001)\n retry += 1\n return rv\n finally:\n _CloseHandle(ta)\n\n except Exception:\n\n def _rename(src, dst):\n return False\n\n def _rename_atomic(src, dst):\n return False\n\n def rename(src, dst):\n # Try atomic or pseudo-atomic rename\n if _rename(src, dst):\n return\n # Fall back to \"move away and replace\"\n try:\n os.rename(src, dst)\n except OSError as e:\n if e.errno != errno.EEXIST:\n raise\n old = f\"{dst}-{random.randint(0, sys.maxsize):08x}\"\n os.rename(dst, old)\n os.rename(src, dst)\n try:\n os.unlink(old)\n except Exception:\n pass\n\n\nelse:\n rename = os.rename\n can_rename_open_file = True\n", "path": "src/werkzeug/posixemulation.py"}]}
| 1,515 | 852 |
gh_patches_debug_3284
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-7947
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exporting CSV file that contains Unicode text fails on Python 3 on Windows
### Observed behavior
User reports of CSV file export failing prompted me to add a test for this in a Windows environment in https://github.com/learningequality/kolibri/pull/7899.
When Unicode text should be inserted into the exported CSV, such as a Unicode username or full_name, then an error occurs on windows.
```
Error trying to write csv file: 'charmap' codec can't encode characters in position 29-35: character maps to <undefined>
```
### Expected behavior
CSV Export should work on all platforms.
This is fixed by this commit: https://github.com/learningequality/kolibri/pull/7899/commits/c088d08b3e6c4a0f4194a17062457cbd7ae34816
### User-facing consequences
Windows users are unable to export CSV files.
</issue>
<code>
[start of kolibri/core/logger/csv_export.py]
1 from __future__ import unicode_literals
2
3 import csv
4 import io
5 import json
6 import logging
7 import math
8 import os
9 import sys
10 from collections import OrderedDict
11
12 from django.core.cache import cache
13 from django.http import Http404
14 from django.http import HttpResponse
15 from django.http.response import FileResponse
16 from django.template.defaultfilters import slugify
17 from django.utils import translation
18 from django.utils.translation import get_language_from_request
19 from django.utils.translation import pgettext
20
21 from .models import ContentSessionLog
22 from .models import ContentSummaryLog
23 from kolibri.core.auth.models import Facility
24 from kolibri.core.content.models import ChannelMetadata
25 from kolibri.core.content.models import ContentNode
26 from kolibri.utils import conf
27
28
29 logger = logging.getLogger(__name__)
30
31 CSV_EXPORT_FILENAMES = {
32 "session": "{}_{}_content_session_logs.csv",
33 "summary": "{}_{}_content_summary_logs.csv",
34 }
35
36
37 def cache_channel_name(obj):
38 channel_id = obj["channel_id"]
39 key = "{id}_ChannelMetadata_name".format(id=channel_id)
40 channel_name = cache.get(key)
41 if channel_name is None:
42 try:
43 channel_name = ChannelMetadata.objects.get(id=channel_id)
44 except ChannelMetadata.DoesNotExist:
45 channel_name = ""
46 cache.set(key, channel_name, 60 * 10)
47 return channel_name
48
49
50 def cache_content_title(obj):
51 content_id = obj["content_id"]
52 key = "{id}_ContentNode_title".format(id=content_id)
53 title = cache.get(key)
54 if title is None:
55 node = ContentNode.objects.filter(content_id=content_id).first()
56 if node:
57 title = node.title
58 else:
59 title = ""
60 cache.set(key, title, 60 * 10)
61 return title
62
63
64 mappings = {
65 "channel_name": cache_channel_name,
66 "content_title": cache_content_title,
67 "time_spent": lambda x: "{:.1f}".format(round(x["time_spent"], 1)),
68 "progress": lambda x: "{:.4f}".format(math.floor(x["progress"] * 10000.0) / 10000),
69 }
70
71 labels = OrderedDict(
72 (
73 ("user__facility__name", "Facility name"),
74 ("user__username", "Username"),
75 ("channel_id", "Channel id"),
76 ("channel_name", "Channel name"),
77 ("content_id", "Content id"),
78 ("content_title", "Content title"),
79 ("start_timestamp", "Time of first interaction"),
80 ("end_timestamp", "Time of last interaction"),
81 ("completion_timestamp", "Time of completion"),
82 ("time_spent", "Time Spent (sec)"),
83 ("progress", "Progress (0-1)"),
84 ("kind", "Content kind"),
85 )
86 )
87
88
89 def map_object(obj):
90 mapped_obj = {}
91 for header, label in labels.items():
92 if header in mappings:
93 mapped_obj[label] = mappings[header](obj)
94 elif header in obj:
95 mapped_obj[label] = obj[header]
96 return mapped_obj
97
98
99 classes_info = {
100 "session": {
101 "queryset": ContentSessionLog.objects.all(),
102 "filename": CSV_EXPORT_FILENAMES["session"],
103 "db_columns": (
104 "user__username",
105 "user__facility__name",
106 "channel_id",
107 "content_id",
108 "start_timestamp",
109 "end_timestamp",
110 "time_spent",
111 "progress",
112 "kind",
113 ),
114 },
115 "summary": {
116 "queryset": ContentSummaryLog.objects.all(),
117 "filename": CSV_EXPORT_FILENAMES["summary"],
118 "db_columns": (
119 "user__username",
120 "user__facility__name",
121 "content_id",
122 "channel_id",
123 "start_timestamp",
124 "end_timestamp",
125 "completion_timestamp",
126 "time_spent",
127 "progress",
128 "kind",
129 ),
130 },
131 }
132
133
134 def csv_file_generator(facility, log_type, filepath, overwrite=False):
135
136 if log_type not in ("summary", "session"):
137 raise ValueError(
138 "Impossible to create a csv export file for {}".format(log_type)
139 )
140
141 log_info = classes_info[log_type]
142
143 if not overwrite and os.path.exists(filepath):
144 raise ValueError("{} already exists".format(filepath))
145 queryset = log_info["queryset"].filter(dataset_id=facility.dataset_id)
146
147 # Exclude completion timestamp for the sessionlog CSV
148 header_labels = tuple(
149 label
150 for label in labels.values()
151 if log_type == "summary" or label != "completion_timestamp"
152 )
153
154 if sys.version_info[0] < 3:
155 csv_file = io.open(filepath, "wb")
156 else:
157 csv_file = io.open(filepath, "w", newline="")
158
159 with csv_file as f:
160 writer = csv.DictWriter(f, header_labels)
161 logger.info("Creating csv file {filename}".format(filename=filepath))
162 writer.writeheader()
163 for item in queryset.select_related("user", "user__facility").values(
164 *log_info["db_columns"]
165 ):
166 writer.writerow(map_object(item))
167 yield
168
169
170 def exported_logs_info(request, facility_id, facility):
171 """
172 Get the last modification timestamp of the summary logs exported
173
174 :returns: An object with the files informatin
175 """
176 logs_dir = os.path.join(conf.KOLIBRI_HOME, "log_export")
177 csv_statuses = {}
178
179 for log_type in CSV_EXPORT_FILENAMES.keys():
180 log_path = os.path.join(
181 logs_dir, CSV_EXPORT_FILENAMES[log_type].format(facility, facility_id[:4])
182 )
183 if os.path.exists(log_path):
184 csv_statuses[log_type] = os.path.getmtime(log_path)
185 else:
186 csv_statuses[log_type] = None
187
188 return HttpResponse(json.dumps(csv_statuses), content_type="application/json")
189
190
191 def download_csv_file(request, log_type, facility_id):
192 if facility_id:
193 facility_name = Facility.objects.get(pk=facility_id).name
194 else:
195 facility_name = request.user.facility.name
196 facility_id = request.user.facility.id
197
198 locale = get_language_from_request(request)
199 translation.activate(locale)
200
201 csv_translated_filenames = {
202 "session": (
203 "{}_{}_"
204 + slugify(
205 pgettext(
206 "Default name for the exported CSV file with content session logs. Please keep the underscores between words in the translation",
207 "content_session_logs",
208 )
209 )
210 + ".csv"
211 ).replace("-", "_"),
212 "summary": (
213 "{}_{}_"
214 + slugify(
215 pgettext(
216 "Default name for the exported CSV file with content summary logs. Please keep the underscores between words in the translation",
217 "content_summary_logs",
218 )
219 )
220 + ".csv"
221 ).replace("-", "_"),
222 }
223
224 if log_type in CSV_EXPORT_FILENAMES.keys():
225 filepath = os.path.join(
226 conf.KOLIBRI_HOME,
227 "log_export",
228 CSV_EXPORT_FILENAMES[log_type].format(facility_name, facility_id[:4]),
229 )
230 else:
231 filepath = None
232
233 # if the file does not exist on disk, return a 404
234 if filepath is None or not os.path.exists(filepath):
235 raise Http404("There is no csv export file for {} available".format(log_type))
236
237 # generate a file response
238 response = FileResponse(io.open(filepath, "rb"))
239 # set the content-type by guessing from the filename
240 response["Content-Type"] = "text/csv"
241
242 # set the content-disposition as attachment to force download
243 response["Content-Disposition"] = "attachment; filename={}".format(
244 str(csv_translated_filenames[log_type]).format(facility_name, facility_id[:4])
245 )
246 translation.deactivate()
247
248 # set the content-length to the file size
249 response["Content-Length"] = os.path.getsize(filepath)
250
251 return response
252
[end of kolibri/core/logger/csv_export.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/core/logger/csv_export.py b/kolibri/core/logger/csv_export.py
--- a/kolibri/core/logger/csv_export.py
+++ b/kolibri/core/logger/csv_export.py
@@ -154,7 +154,7 @@
if sys.version_info[0] < 3:
csv_file = io.open(filepath, "wb")
else:
- csv_file = io.open(filepath, "w", newline="")
+ csv_file = io.open(filepath, "w", newline="", encoding="utf-8")
with csv_file as f:
writer = csv.DictWriter(f, header_labels)
|
{"golden_diff": "diff --git a/kolibri/core/logger/csv_export.py b/kolibri/core/logger/csv_export.py\n--- a/kolibri/core/logger/csv_export.py\n+++ b/kolibri/core/logger/csv_export.py\n@@ -154,7 +154,7 @@\n if sys.version_info[0] < 3:\n csv_file = io.open(filepath, \"wb\")\n else:\n- csv_file = io.open(filepath, \"w\", newline=\"\")\n+ csv_file = io.open(filepath, \"w\", newline=\"\", encoding=\"utf-8\")\n \n with csv_file as f:\n writer = csv.DictWriter(f, header_labels)\n", "issue": "Exporting CSV file that contains Unicode text fails on Python 3 on Windows\n### Observed behavior\r\nUser reports of CSV file export failing prompted me to add a test for this in a Windows environment in https://github.com/learningequality/kolibri/pull/7899.\r\n\r\nWhen Unicode text should be inserted into the exported CSV, such as a Unicode username or full_name, then an error occurs on windows.\r\n\r\n```\r\nError trying to write csv file: 'charmap' codec can't encode characters in position 29-35: character maps to <undefined>\r\n```\r\n\r\n### Expected behavior\r\nCSV Export should work on all platforms.\r\n\r\nThis is fixed by this commit: https://github.com/learningequality/kolibri/pull/7899/commits/c088d08b3e6c4a0f4194a17062457cbd7ae34816\r\n\r\n### User-facing consequences\r\nWindows users are unable to export CSV files.\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport csv\nimport io\nimport json\nimport logging\nimport math\nimport os\nimport sys\nfrom collections import OrderedDict\n\nfrom django.core.cache import cache\nfrom django.http import Http404\nfrom django.http import HttpResponse\nfrom django.http.response import FileResponse\nfrom django.template.defaultfilters import slugify\nfrom django.utils import translation\nfrom django.utils.translation import get_language_from_request\nfrom django.utils.translation import pgettext\n\nfrom .models import ContentSessionLog\nfrom .models import ContentSummaryLog\nfrom kolibri.core.auth.models import Facility\nfrom kolibri.core.content.models import ChannelMetadata\nfrom kolibri.core.content.models import ContentNode\nfrom kolibri.utils import conf\n\n\nlogger = logging.getLogger(__name__)\n\nCSV_EXPORT_FILENAMES = {\n \"session\": \"{}_{}_content_session_logs.csv\",\n \"summary\": \"{}_{}_content_summary_logs.csv\",\n}\n\n\ndef cache_channel_name(obj):\n channel_id = obj[\"channel_id\"]\n key = \"{id}_ChannelMetadata_name\".format(id=channel_id)\n channel_name = cache.get(key)\n if channel_name is None:\n try:\n channel_name = ChannelMetadata.objects.get(id=channel_id)\n except ChannelMetadata.DoesNotExist:\n channel_name = \"\"\n cache.set(key, channel_name, 60 * 10)\n return channel_name\n\n\ndef cache_content_title(obj):\n content_id = obj[\"content_id\"]\n key = \"{id}_ContentNode_title\".format(id=content_id)\n title = cache.get(key)\n if title is None:\n node = ContentNode.objects.filter(content_id=content_id).first()\n if node:\n title = node.title\n else:\n title = \"\"\n cache.set(key, title, 60 * 10)\n return title\n\n\nmappings = {\n \"channel_name\": cache_channel_name,\n \"content_title\": cache_content_title,\n \"time_spent\": lambda x: \"{:.1f}\".format(round(x[\"time_spent\"], 1)),\n \"progress\": lambda x: \"{:.4f}\".format(math.floor(x[\"progress\"] * 10000.0) / 10000),\n}\n\nlabels = OrderedDict(\n (\n (\"user__facility__name\", \"Facility name\"),\n (\"user__username\", \"Username\"),\n (\"channel_id\", \"Channel id\"),\n (\"channel_name\", \"Channel name\"),\n (\"content_id\", \"Content 
id\"),\n (\"content_title\", \"Content title\"),\n (\"start_timestamp\", \"Time of first interaction\"),\n (\"end_timestamp\", \"Time of last interaction\"),\n (\"completion_timestamp\", \"Time of completion\"),\n (\"time_spent\", \"Time Spent (sec)\"),\n (\"progress\", \"Progress (0-1)\"),\n (\"kind\", \"Content kind\"),\n )\n)\n\n\ndef map_object(obj):\n mapped_obj = {}\n for header, label in labels.items():\n if header in mappings:\n mapped_obj[label] = mappings[header](obj)\n elif header in obj:\n mapped_obj[label] = obj[header]\n return mapped_obj\n\n\nclasses_info = {\n \"session\": {\n \"queryset\": ContentSessionLog.objects.all(),\n \"filename\": CSV_EXPORT_FILENAMES[\"session\"],\n \"db_columns\": (\n \"user__username\",\n \"user__facility__name\",\n \"channel_id\",\n \"content_id\",\n \"start_timestamp\",\n \"end_timestamp\",\n \"time_spent\",\n \"progress\",\n \"kind\",\n ),\n },\n \"summary\": {\n \"queryset\": ContentSummaryLog.objects.all(),\n \"filename\": CSV_EXPORT_FILENAMES[\"summary\"],\n \"db_columns\": (\n \"user__username\",\n \"user__facility__name\",\n \"content_id\",\n \"channel_id\",\n \"start_timestamp\",\n \"end_timestamp\",\n \"completion_timestamp\",\n \"time_spent\",\n \"progress\",\n \"kind\",\n ),\n },\n}\n\n\ndef csv_file_generator(facility, log_type, filepath, overwrite=False):\n\n if log_type not in (\"summary\", \"session\"):\n raise ValueError(\n \"Impossible to create a csv export file for {}\".format(log_type)\n )\n\n log_info = classes_info[log_type]\n\n if not overwrite and os.path.exists(filepath):\n raise ValueError(\"{} already exists\".format(filepath))\n queryset = log_info[\"queryset\"].filter(dataset_id=facility.dataset_id)\n\n # Exclude completion timestamp for the sessionlog CSV\n header_labels = tuple(\n label\n for label in labels.values()\n if log_type == \"summary\" or label != \"completion_timestamp\"\n )\n\n if sys.version_info[0] < 3:\n csv_file = io.open(filepath, \"wb\")\n else:\n csv_file = io.open(filepath, \"w\", newline=\"\")\n\n with csv_file as f:\n writer = csv.DictWriter(f, header_labels)\n logger.info(\"Creating csv file {filename}\".format(filename=filepath))\n writer.writeheader()\n for item in queryset.select_related(\"user\", \"user__facility\").values(\n *log_info[\"db_columns\"]\n ):\n writer.writerow(map_object(item))\n yield\n\n\ndef exported_logs_info(request, facility_id, facility):\n \"\"\"\n Get the last modification timestamp of the summary logs exported\n\n :returns: An object with the files informatin\n \"\"\"\n logs_dir = os.path.join(conf.KOLIBRI_HOME, \"log_export\")\n csv_statuses = {}\n\n for log_type in CSV_EXPORT_FILENAMES.keys():\n log_path = os.path.join(\n logs_dir, CSV_EXPORT_FILENAMES[log_type].format(facility, facility_id[:4])\n )\n if os.path.exists(log_path):\n csv_statuses[log_type] = os.path.getmtime(log_path)\n else:\n csv_statuses[log_type] = None\n\n return HttpResponse(json.dumps(csv_statuses), content_type=\"application/json\")\n\n\ndef download_csv_file(request, log_type, facility_id):\n if facility_id:\n facility_name = Facility.objects.get(pk=facility_id).name\n else:\n facility_name = request.user.facility.name\n facility_id = request.user.facility.id\n\n locale = get_language_from_request(request)\n translation.activate(locale)\n\n csv_translated_filenames = {\n \"session\": (\n \"{}_{}_\"\n + slugify(\n pgettext(\n \"Default name for the exported CSV file with content session logs. 
Please keep the underscores between words in the translation\",\n \"content_session_logs\",\n )\n )\n + \".csv\"\n ).replace(\"-\", \"_\"),\n \"summary\": (\n \"{}_{}_\"\n + slugify(\n pgettext(\n \"Default name for the exported CSV file with content summary logs. Please keep the underscores between words in the translation\",\n \"content_summary_logs\",\n )\n )\n + \".csv\"\n ).replace(\"-\", \"_\"),\n }\n\n if log_type in CSV_EXPORT_FILENAMES.keys():\n filepath = os.path.join(\n conf.KOLIBRI_HOME,\n \"log_export\",\n CSV_EXPORT_FILENAMES[log_type].format(facility_name, facility_id[:4]),\n )\n else:\n filepath = None\n\n # if the file does not exist on disk, return a 404\n if filepath is None or not os.path.exists(filepath):\n raise Http404(\"There is no csv export file for {} available\".format(log_type))\n\n # generate a file response\n response = FileResponse(io.open(filepath, \"rb\"))\n # set the content-type by guessing from the filename\n response[\"Content-Type\"] = \"text/csv\"\n\n # set the content-disposition as attachment to force download\n response[\"Content-Disposition\"] = \"attachment; filename={}\".format(\n str(csv_translated_filenames[log_type]).format(facility_name, facility_id[:4])\n )\n translation.deactivate()\n\n # set the content-length to the file size\n response[\"Content-Length\"] = os.path.getsize(filepath)\n\n return response\n", "path": "kolibri/core/logger/csv_export.py"}]}
| 3,133 | 137 |
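The golden diff above comes down to one change: opening the export file with an explicit utf-8 encoding on Python 3, so Windows does not fall back to its locale codec. Below is a minimal standalone sketch of that pattern; the function and field names are invented for illustration and are not Kolibri's actual API.

```
import csv
import io


def write_logs_csv(filepath, header_labels, rows):
    # The explicit encoding is the crux of the fix: without it, Windows opens
    # the file with the locale codec (e.g. cp1252) and raises
    # "'charmap' codec can't encode characters" for Unicode usernames.
    with io.open(filepath, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, header_labels)
        writer.writeheader()
        for row in rows:
            writer.writerow(row)


if __name__ == "__main__":
    write_logs_csv(
        "content_summary_logs.csv",
        ["Username", "Facility name"],
        [{"Username": "ayşe", "Facility name": "Köln Lernzentrum"}],
    )
```

On Python 2 the original code keeps opening the file in binary mode, which is why the patch only touches the Python 3 branch.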
gh_patches_debug_24098
|
rasdani/github-patches
|
git_diff
|
encode__uvicorn-666
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
color codes in windows console not escaped
Fixes https://github.com/tiangolo/fastapi/issues/815 that should have been reported upstream
There are many ways to handle the case, obviously; I chose to use click.clear(), since we already use click.style and because it already performs the OS check and issues the right command for that.
Use optional package installs.
Instead of the platform detection I’d like uvicorn to use optional installs.
* `pip install uvicorn` - Just the package itself.
* `pip install uvicorn[standard]` - uvloop/httptools/websockets
* `pip install uvicorn[pure]` - asyncio/h11/wsproto
* `pip install uvicorn[full]` - Everything
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 path = os.path.join(package, "__init__.py")
15 init_py = open(path, "r", encoding="utf8").read()
16 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
17
18
19 def get_long_description():
20 """
21 Return the README.
22 """
23 return open("README.md", "r", encoding="utf8").read()
24
25
26 def get_packages(package):
27 """
28 Return root package and all sub-packages.
29 """
30 return [
31 dirpath
32 for dirpath, dirnames, filenames in os.walk(package)
33 if os.path.exists(os.path.join(dirpath, "__init__.py"))
34 ]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'PyPy'"
41 )
42
43 requirements = [
44 "click==7.*",
45 "h11>=0.8,<0.10",
46 "websockets==8.*",
47 "httptools==0.1.* ;" + env_marker,
48 "uvloop>=0.14.0 ;" + env_marker,
49 ]
50
51 extras_require = {"watchgodreload": ["watchgod>=0.6,<0.7"]}
52
53
54 setup(
55 name="uvicorn",
56 version=get_version("uvicorn"),
57 url="https://github.com/encode/uvicorn",
58 license="BSD",
59 description="The lightning-fast ASGI server.",
60 long_description=get_long_description(),
61 long_description_content_type="text/markdown",
62 author="Tom Christie",
63 author_email="[email protected]",
64 packages=get_packages("uvicorn"),
65 install_requires=requirements,
66 extras_require=extras_require,
67 include_package_data=True,
68 classifiers=[
69 "Development Status :: 4 - Beta",
70 "Environment :: Web Environment",
71 "Intended Audience :: Developers",
72 "License :: OSI Approved :: BSD License",
73 "Operating System :: OS Independent",
74 "Topic :: Internet :: WWW/HTTP",
75 "Programming Language :: Python :: 3",
76 "Programming Language :: Python :: 3.6",
77 "Programming Language :: Python :: 3.7",
78 "Programming Language :: Python :: 3.8",
79 "Programming Language :: Python :: Implementation :: CPython",
80 "Programming Language :: Python :: Implementation :: PyPy",
81 ],
82 entry_points="""
83 [console_scripts]
84 uvicorn=uvicorn.main:main
85 """,
86 )
87
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,21 +34,28 @@
]
-env_marker = (
+env_marker_cpython = (
"sys_platform != 'win32'"
" and sys_platform != 'cygwin'"
" and platform_python_implementation != 'PyPy'"
)
-requirements = [
+env_marker_win = "sys_platform == 'win32'"
+
+
+minimal_requirements = [
"click==7.*",
"h11>=0.8,<0.10",
- "websockets==8.*",
- "httptools==0.1.* ;" + env_marker,
- "uvloop>=0.14.0 ;" + env_marker,
]
-extras_require = {"watchgodreload": ["watchgod>=0.6,<0.7"]}
+extra_requirements = [
+ "websockets==8.*",
+ "httptools==0.1.* ;" + env_marker_cpython,
+ "uvloop>=0.14.0 ;" + env_marker_cpython,
+ "colorama>=0.4.*;" + env_marker_win,
+ "watchgod>=0.6,<0.7",
+ "python-dotenv==0.13.*",
+]
setup(
@@ -62,8 +69,8 @@
author="Tom Christie",
author_email="[email protected]",
packages=get_packages("uvicorn"),
- install_requires=requirements,
- extras_require=extras_require,
+ install_requires=minimal_requirements,
+ extras_require={"standard": extra_requirements},
include_package_data=True,
classifiers=[
"Development Status :: 4 - Beta",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,21 +34,28 @@\n ]\n \n \n-env_marker = (\n+env_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy'\"\n )\n \n-requirements = [\n+env_marker_win = \"sys_platform == 'win32'\"\n+\n+\n+minimal_requirements = [\n \"click==7.*\",\n \"h11>=0.8,<0.10\",\n- \"websockets==8.*\",\n- \"httptools==0.1.* ;\" + env_marker,\n- \"uvloop>=0.14.0 ;\" + env_marker,\n ]\n \n-extras_require = {\"watchgodreload\": [\"watchgod>=0.6,<0.7\"]}\n+extra_requirements = [\n+ \"websockets==8.*\",\n+ \"httptools==0.1.* ;\" + env_marker_cpython,\n+ \"uvloop>=0.14.0 ;\" + env_marker_cpython,\n+ \"colorama>=0.4.*;\" + env_marker_win,\n+ \"watchgod>=0.6,<0.7\",\n+ \"python-dotenv==0.13.*\",\n+]\n \n \n setup(\n@@ -62,8 +69,8 @@\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n- install_requires=requirements,\n- extras_require=extras_require,\n+ install_requires=minimal_requirements,\n+ extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n", "issue": "color codes in windows console not escaped\nFixes https://github.com/tiangolo/fastapi/issues/815 that should have been reported upstream\r\n\r\nThere are many ways to handle the case obviously, I choose to use click.clear() since we use already click.style and because it already performs the os check and issues the right command for that.\r\n\r\n\nUse optional package installs.\nInstead of the platform detection I\u2019d like uvicorn to use optional installs.\r\n\r\n* `pip install uvicorn` - Just the package itself.\r\n* `pip install uvicorn[standard]` - uvloop/httptools/websockets\r\n* `pip install uvicorn[pure]` - asyncio/h11/wsproto\r\n* `pip install uvicorn[full]` - Everything\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, \"__init__.py\")\n init_py = open(path, \"r\", encoding=\"utf8\").read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open(\"README.md\", \"r\", encoding=\"utf8\").read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [\n dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, \"__init__.py\"))\n ]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11>=0.8,<0.10\",\n \"websockets==8.*\",\n \"httptools==0.1.* ;\" + env_marker,\n \"uvloop>=0.14.0 ;\" + env_marker,\n]\n\nextras_require = {\"watchgodreload\": [\"watchgod>=0.6,<0.7\"]}\n\n\nsetup(\n name=\"uvicorn\",\n version=get_version(\"uvicorn\"),\n url=\"https://github.com/encode/uvicorn\",\n license=\"BSD\",\n description=\"The lightning-fast ASGI server.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n install_requires=requirements,\n extras_require=extras_require,\n 
include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\",\n)\n", "path": "setup.py"}]}
| 1,439 | 387 |
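The accepted patch restructures the dependencies into a minimal core plus a `standard` extra, with PEP 508 environment markers deciding what actually gets installed on each platform. Below is a small self-contained sketch of that packaging pattern; the package name and version pins are placeholders, not uvicorn's real ones.

```
from setuptools import setup

# Environment markers are evaluated by pip at install time, so one release
# can express "only on CPython outside Windows" or "only on Windows".
env_marker_cpython = (
    "sys_platform != 'win32'"
    " and platform_python_implementation != 'PyPy'"
)
env_marker_win = "sys_platform == 'win32'"

minimal_requirements = [
    "click==7.*",
    "h11>=0.8",
]

extra_requirements = [
    "uvloop>=0.14.0 ; " + env_marker_cpython,
    # colorama translates ANSI colour codes on Windows consoles, which is
    # what the original "color codes not escaped" report was about.
    "colorama>=0.4 ; " + env_marker_win,
]

setup(
    name="example-asgi-server",
    version="0.1.0",
    py_modules=["example_asgi_server"],
    install_requires=minimal_requirements,
    extras_require={"standard": extra_requirements},
)
```

With this layout `pip install example-asgi-server` stays minimal, while `pip install "example-asgi-server[standard]"` pulls in the marker-gated extras.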
gh_patches_debug_51634
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2576
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
non-EUTF search results appearing (estimate: 8)
Created via Reamaze:
Link: https://akvoo.reamaze.com/admin/conversations/rsr-release-3-dot-22-chisinau-is-out
Assignee: Unassigned
Message:
Hi RSR Team,
Just saw this email, nice that the new release is already out! However, I tried to use the search function, and it shows organizations that are not related to the Akvo Page, in this case the EUTF Page. Randomly searching for “Tom(bouctou)” gives the following search options. Clicking on the first organization “Catholic Diocese of Tombu” leads you nowhere.
Please see image below.
Thanks!
Christien
Christien Bosman
Project Officer
Akvo • 's-Gravenhekje 1A • 1011 TG • Amsterdam (NL)
T +31 20 8200 175 • M +31 6 1191 5449 • S christien.bosman • I www.akvo.org <http://www.akvo.org/>
</issue>
<code>
[start of akvo/rest/views/typeahead.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from akvo.rest.serializers import (TypeaheadCountrySerializer,
10 TypeaheadOrganisationSerializer,
11 TypeaheadProjectSerializer,
12 TypeaheadProjectUpdateSerializer)
13
14 from akvo.codelists.models import Country, Version
15 from akvo.rsr.models import Organisation, Project, ProjectUpdate
16 from akvo.rsr.views.project import _project_directory_coll
17
18 from django.conf import settings
19
20 from rest_framework.decorators import api_view
21 from rest_framework.response import Response
22
23
24 def rejig(queryset, serializer):
25 """Rearrange & add queryset count to the response data."""
26 return {
27 'count': queryset.count(),
28 'results': serializer.data
29 }
30
31
32 @api_view(['GET'])
33 def typeahead_country(request):
34 iati_version = Version.objects.get(code=settings.IATI_VERSION)
35 countries = Country.objects.filter(version=iati_version)
36 return Response(
37 rejig(countries, TypeaheadCountrySerializer(countries, many=True))
38 )
39
40
41 @api_view(['GET'])
42 def typeahead_organisation(request):
43 organisations = Organisation.objects.all()
44 return Response(
45 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
46 many=True))
47 )
48
49
50 @api_view(['GET'])
51 def typeahead_user_organisations(request):
52 user = request.user
53 is_admin = user.is_active and (user.is_superuser or user.is_admin)
54 organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()
55 return Response(
56 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
57 many=True))
58 )
59
60
61 @api_view(['GET'])
62 def typeahead_project(request):
63 """Return the typeaheads for projects.
64
65 Without any query parameters, it returns the info for all the projects in
66 the current context -- changes depending on whether we are on a partner
67 site, or the RSR site.
68
69 If a published query parameter is passed, only projects that have been
70 published are returned.
71
72 NOTE: The unauthenticated user gets information about all the projects when
73 using this API endpoint. More permission checking will need to be added,
74 if the amount of data being returned is changed.
75
76 """
77 if request.GET.get('published', '0') == '0':
78 # Project editor - organization projects, all
79 page = request.rsr_page
80 projects = page.organisation.all_projects() if page else Project.objects.all()
81 else:
82 # Search bar - organization projects, published
83 projects = _project_directory_coll(request)
84
85 projects = projects.exclude(title='')
86 return Response(
87 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
88 )
89
90
91 @api_view(['GET'])
92 def typeahead_user_projects(request):
93 user = request.user
94 is_admin = user.is_active and (user.is_superuser or user.is_admin)
95 if is_admin:
96 projects = Project.objects.all()
97 else:
98 projects = user.approved_organisations().all_projects()
99 projects = projects.exclude(title='')
100 return Response(
101 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
102 )
103
104
105 @api_view(['GET'])
106 def typeahead_impact_projects(request):
107 user = request.user
108 projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
109 projects = projects.published().filter(is_impact_project=True).order_by('title')
110
111 return Response(
112 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
113 )
114
115
116 @api_view(['GET'])
117 def typeahead_projectupdate(request):
118 updates = ProjectUpdate.objects.all()
119 return Response(
120 rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
121 )
122
[end of akvo/rest/views/typeahead.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py
--- a/akvo/rest/views/typeahead.py
+++ b/akvo/rest/views/typeahead.py
@@ -40,7 +40,8 @@
@api_view(['GET'])
def typeahead_organisation(request):
- organisations = Organisation.objects.all()
+ page = request.rsr_page
+ organisations = page.organisation.partners().distinct() if page else Organisation.objects.all()
return Response(
rejig(organisations, TypeaheadOrganisationSerializer(organisations,
many=True))
|
{"golden_diff": "diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py\n--- a/akvo/rest/views/typeahead.py\n+++ b/akvo/rest/views/typeahead.py\n@@ -40,7 +40,8 @@\n \n @api_view(['GET'])\n def typeahead_organisation(request):\n- organisations = Organisation.objects.all()\n+ page = request.rsr_page\n+ organisations = page.organisation.partners().distinct() if page else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n", "issue": "non-EUTF search results appearing (estimate: 8)\nCreated via Reamaze:\r\n\r\nLink: https://akvoo.reamaze.com/admin/conversations/rsr-release-3-dot-22-chisinau-is-out\r\nAssignee: Unassigned\r\n\r\nMessage:\r\nHi RSR Team,\r\n\r\nJust saw this email, nice that the new release is already out! However, I tried to use the search function, and it shows organizations that are not related to the Akvo Page, in this case the EUTF Page. Randomly searching for \u201cTom(bouctou)\u201d gives the following search options. Clicking on the first organization \u201cCatholic Diocese of Tombu\", it leads you nowhere..\r\n\r\nPlease see image below.\r\n\r\nThanks!\r\nChristien\r\n\r\nChristien Bosman\r\nProject Officer\r\n\r\nAkvo \u2022 's-Gravenhekje 1A \u2022 1011 TG \u2022 Amsterdam (NL)\r\nT +31 20 8200 175 \u2022 M +31 6 1191 5449 \u2022 S christien.bosman \u2022 I www.akvo.org <http://www.akvo.org/>\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\n\nfrom akvo.codelists.models import Country, Version\nfrom akvo.rsr.models import Organisation, Project, ProjectUpdate\nfrom akvo.rsr.views.project import _project_directory_coll\n\nfrom django.conf import settings\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n iati_version = Version.objects.get(code=settings.IATI_VERSION)\n countries = Country.objects.filter(version=iati_version)\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n \"\"\"Return the typeaheads for projects.\n\n Without any query parameters, it returns the info for all the projects in\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n\n 
If a published query parameter is passed, only projects that have been\n published are returned.\n\n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. More permission checking will need to be added,\n if the amount of data being returned is changed.\n\n \"\"\"\n if request.GET.get('published', '0') == '0':\n # Project editor - organization projects, all\n page = request.rsr_page\n projects = page.organisation.all_projects() if page else Project.objects.all()\n else:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}]}
| 1,908 | 130 |
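The fix above scopes the organisation typeahead to the partner page's own partners instead of every organisation in the database. The conditional scoping can be sketched without Django as a plain function; this is only an analogue of the pattern, not the actual RSR code.

```
def typeahead_organisations(all_organisations, page=None):
    # On a partner page (e.g. the EUTF page) only that page's partners are
    # valid search results; on the main RSR site every organisation is.
    if page is not None:
        # set() stands in for the queryset's .distinct()
        return sorted(set(page["partners"]))
    return sorted(all_organisations)


if __name__ == "__main__":
    organisations = ["Akvo", "EUTF", "Catholic Diocese of Tombu", "Partner A"]
    eutf_page = {"partners": ["EUTF", "Partner A", "Partner A"]}

    print(typeahead_organisations(organisations))             # main site: everything
    print(typeahead_organisations(organisations, eutf_page))  # partner page: partners only
```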
gh_patches_debug_21946
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-4168
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fail to produce a validation message on an array. validate_config_against_schema gets a TypeError Exception
<!---
Please complete as much of this form as possible. It's OK if you
don't know all the answers, but fill in as much as you can. This will help
us to help you. -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest -->
- Bug Report *
- Feature Idea
- Other
<!--- Is this actually a question? In that case, post it at forum.stackstorm.com. That's a great place
to interact with users and developers, and discuss ideas. -->
##### STACKSTORM VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "st2 --version" between quotes below -->
```
st2 2.8dev (0f1d9dc), on Python 2.7.6
```
##### OS / ENVIRONMENT / INSTALL METHOD
<!--- Below this comment, post what OS you are running this on, along with
any other relevant information - e.g. Docker, Vagrant, Kubernetes, etc.
Describe how you installed ST2 - e.g. one-line install, custom install, etc -->
docker
##### SUMMARY
<!--- Explain the problem or request briefly -->
There is a problem when validating an array of objects. If the validation fails, the error handling breaks on this line:
https://github.com/StackStorm/st2/blob/master/st2common/st2common/util/pack.py#L112
because the path contains a number, and str.join() cannot join an integer.
```
Traceback (most recent call last):
File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/router.py", line 472, in __call__
resp = func(**kw)
File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2api/controllers/v1/pack_configs.py", line 107, in put
config_api.validate(validate_against_schema=True)
File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/api/pack.py", line 274, in validate
cleaned_values = self._validate_config_values_against_schema()
File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/api/pack.py", line 296, in _validate_config_values_against_schema
pack_name=self.pack)
File "/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/util/pack.py", line 111, in validate_config_against_schema
attribute = '.'.join(attribute)
TypeError: sequence item 1: expected string or Unicode, int found
2018-06-06 14:24:35,673 139995931735056 ERROR error_handling [-] API call failed: sequence item 1: expected string or Unicode, int found
Traceback (most recent call last):
```
When I changed the code to:
```
attribute = '.'.join(str(attribute))
```
MESSAGE: Failed validating attribute "d.e.q.u.e.(.[.u.'.i.n.s.t.a.n.c.e.s.'.,. .0.,. .u.'.a.l.i.a.s.'.].)" in config for pack "my_pack" (/opt/stackstorm/configs/my_pack.yaml): u'alias' is a required property.
I guess there is a problem with the path when you have an array, because you get, for example,
instances[0]['alias'] for a path, and the exception cannot produce a message.
</issue>
<code>
[start of st2common/st2common/util/pack.py]
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17 import os
18 import re
19
20 from st2common.util import schema as util_schema
21 from st2common.constants.pack import MANIFEST_FILE_NAME
22 from st2common.constants.pack import PACK_REF_WHITELIST_REGEX
23 from st2common.content.loader import MetaLoader
24 from st2common.persistence.pack import Pack
25
26 __all__ = [
27 'get_pack_ref_from_metadata',
28 'get_pack_metadata',
29
30 'get_pack_common_libs_path_for_pack_ref',
31 'get_pack_common_libs_path_for_pack_db',
32
33 'validate_config_against_schema',
34
35 'normalize_pack_version'
36 ]
37
38
39 def get_pack_ref_from_metadata(metadata, pack_directory_name=None):
40 """
41 Utility function which retrieves pack "ref" attribute from the pack metadata file.
42
43 If this attribute is not provided, an attempt is made to infer "ref" from the "name" attribute.
44
45 :rtype: ``str``
46 """
47 pack_ref = None
48
49 # The rules for the pack ref are as follows:
50 # 1. If ref attribute is available, we used that
51 # 2. If pack_directory_name is available we use that (this only applies to packs
52 # which are in sub-directories)
53 # 2. If attribute is not available, but pack name is and pack name meets the valid name
54 # criteria, we use that
55 if metadata.get('ref', None):
56 pack_ref = metadata['ref']
57 elif pack_directory_name and re.match(PACK_REF_WHITELIST_REGEX, pack_directory_name):
58 pack_ref = pack_directory_name
59 else:
60 if re.match(PACK_REF_WHITELIST_REGEX, metadata['name']):
61 pack_ref = metadata['name']
62 else:
63 msg = ('Pack name "%s" contains invalid characters and "ref" attribute is not '
64 'available. You either need to add "ref" attribute which contains only word '
65 'characters to the pack metadata file or update name attribute to contain only'
66 'word characters.')
67 raise ValueError(msg % (metadata['name']))
68
69 return pack_ref
70
71
72 def get_pack_metadata(pack_dir):
73 """
74 Return parsed metadata for a particular pack directory.
75
76 :rtype: ``dict``
77 """
78 manifest_path = os.path.join(pack_dir, MANIFEST_FILE_NAME)
79
80 if not os.path.isfile(manifest_path):
81 raise ValueError('Pack "%s" is missing %s file' % (pack_dir, MANIFEST_FILE_NAME))
82
83 meta_loader = MetaLoader()
84 content = meta_loader.load(manifest_path)
85 if not content:
86 raise ValueError('Pack "%s" metadata file is empty' % (pack_dir))
87
88 return content
89
90
91 def validate_config_against_schema(config_schema, config_object, config_path,
92 pack_name=None):
93 """
94 Validate provided config dictionary against the provided config schema
95 dictionary.
96 """
97 # NOTE: Lazy improt to avoid performance overhead of importing this module when it's not used
98 import jsonschema
99
100 pack_name = pack_name or 'unknown'
101
102 schema = util_schema.get_schema_for_resource_parameters(parameters_schema=config_schema,
103 allow_additional_properties=True)
104 instance = config_object
105
106 try:
107 cleaned = util_schema.validate(instance=instance, schema=schema,
108 cls=util_schema.CustomValidator, use_default=True,
109 allow_default_none=True)
110 except jsonschema.ValidationError as e:
111 attribute = getattr(e, 'path', [])
112 attribute = '.'.join(attribute)
113
114 msg = ('Failed validating attribute "%s" in config for pack "%s" (%s): %s' %
115 (attribute, pack_name, config_path, str(e)))
116 raise jsonschema.ValidationError(msg)
117
118 return cleaned
119
120
121 def get_pack_common_libs_path_for_pack_ref(pack_ref):
122 pack_db = Pack.get_by_ref(pack_ref)
123 pack_common_libs_path = get_pack_common_libs_path_for_pack_db(pack_db=pack_db)
124 return pack_common_libs_path
125
126
127 def get_pack_common_libs_path_for_pack_db(pack_db):
128 """
129 Return the pack's common lib path. This is the path where common code for sensors
130 and actions are placed.
131
132 For example, if the pack is at /opt/stackstorm/packs/my_pack, you can place
133 common library code for actions and sensors in /opt/stackstorm/packs/my_pack/lib/.
134 This common library code is only available for python sensors and actions. The lib
135 structure also needs to follow a python convention with a __init__.py file.
136
137 :param pack_db: Pack DB model
138 :type pack_db: :class:`PackDB`
139
140 :rtype: ``str``
141 """
142 pack_dir = getattr(pack_db, 'path', None)
143
144 if not pack_dir:
145 return None
146
147 libs_path = os.path.join(pack_dir, 'lib')
148
149 return libs_path
150
151
152 def normalize_pack_version(version):
153 """
154 Normalize old, pre StackStorm v2.1 non valid semver version string (e.g. 0.2) to a valid
155 semver version string (0.2.0).
156
157 :rtype: ``str``
158 """
159 version = str(version)
160
161 version_seperator_count = version.count('.')
162 if version_seperator_count == 1:
163 version = version + '.0'
164
165 return version
166
[end of st2common/st2common/util/pack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/st2common/st2common/util/pack.py b/st2common/st2common/util/pack.py
--- a/st2common/st2common/util/pack.py
+++ b/st2common/st2common/util/pack.py
@@ -14,8 +14,10 @@
# limitations under the License.
from __future__ import absolute_import
+
import os
import re
+import collections
from st2common.util import schema as util_schema
from st2common.constants.pack import MANIFEST_FILE_NAME
@@ -109,7 +111,12 @@
allow_default_none=True)
except jsonschema.ValidationError as e:
attribute = getattr(e, 'path', [])
- attribute = '.'.join(attribute)
+
+ if isinstance(attribute, (tuple, list, collections.Iterable)):
+ attribute = [str(item) for item in attribute]
+ attribute = '.'.join(attribute)
+ else:
+ attribute = str(attribute)
msg = ('Failed validating attribute "%s" in config for pack "%s" (%s): %s' %
(attribute, pack_name, config_path, str(e)))
|
{"golden_diff": "diff --git a/st2common/st2common/util/pack.py b/st2common/st2common/util/pack.py\n--- a/st2common/st2common/util/pack.py\n+++ b/st2common/st2common/util/pack.py\n@@ -14,8 +14,10 @@\n # limitations under the License.\n \n from __future__ import absolute_import\n+\n import os\n import re\n+import collections\n \n from st2common.util import schema as util_schema\n from st2common.constants.pack import MANIFEST_FILE_NAME\n@@ -109,7 +111,12 @@\n allow_default_none=True)\n except jsonschema.ValidationError as e:\n attribute = getattr(e, 'path', [])\n- attribute = '.'.join(attribute)\n+\n+ if isinstance(attribute, (tuple, list, collections.Iterable)):\n+ attribute = [str(item) for item in attribute]\n+ attribute = '.'.join(attribute)\n+ else:\n+ attribute = str(attribute)\n \n msg = ('Failed validating attribute \"%s\" in config for pack \"%s\" (%s): %s' %\n (attribute, pack_name, config_path, str(e)))\n", "issue": "Fail to produce a validation message on an array. validate_config_against_schema gets a TypeError Exception\n<!---\r\nPlease complete as much of this form as possible. It's OK if you\r\ndon't know all the answers, but fill in as much as you can. This will help\r\nus to help you. -->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest -->\r\n - Bug Report *\r\n - Feature Idea\r\n - Other\r\n\r\n<!--- Is this actually a question? In that case, post it at forum.stackstorm.com. That's a great place\r\nto interact with users and developers, and discuss ideas. -->\r\n\r\n##### STACKSTORM VERSION\r\n<!--- Paste, BELOW THIS COMMENT, verbatim output from \"st2 --version\" between quotes below -->\r\n```\r\nst2 2.8dev (0f1d9dc), on Python 2.7.6\r\n```\r\n\r\n##### OS / ENVIRONMENT / INSTALL METHOD\r\n<!--- Below this comment, post what OS you are running this on, along with \r\nany other relevant information - e.g. Docker, Vagrant, Kubernetes, etc.\r\nDescribe how you installed ST2 - e.g. one-line install, custom install, etc -->\r\n\r\ndocker\r\n##### SUMMARY\r\n<!--- Explain the problem or request briefly -->\r\n\r\nThere is a problem when validating an array of objects. 
If the validation fails then on this line:\r\n\r\n[https://github.com/StackStorm/st2/blob/master/st2common/st2common/util/pack.py#L112\r\n](url)\r\n\r\nBecause the path has a number you cannot join an integer.\r\n\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/router.py\", line 472, in __call__\r\n resp = func(**kw)\r\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2api/controllers/v1/pack_configs.py\", line 107, in put\r\n config_api.validate(validate_against_schema=True)\r\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/api/pack.py\", line 274, in validate\r\n cleaned_values = self._validate_config_values_against_schema()\r\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/models/api/pack.py\", line 296, in _validate_config_values_against_schema\r\n pack_name=self.pack)\r\n File \"/opt/stackstorm/st2/local/lib/python2.7/site-packages/st2common/util/pack.py\", line 111, in validate_config_against_schema\r\n attribute = '.'.join(attribute)\r\nTypeError: sequence item 1: expected string or Unicode, int found\r\n2018-06-06 14:24:35,673 139995931735056 ERROR error_handling [-] API call failed: sequence item 1: expected string or Unicode, int found\r\nTraceback (most recent call last):\r\n\r\n```\r\n\r\nWhen I changed the code to:\r\n\r\n```\r\nattribute = '.'.join(str(attribute))\r\n\r\n```\r\n\r\n\r\nMESSAGE: Failed validating attribute \"d.e.q.u.e.(.[.u.'.i.n.s.t.a.n.c.e.s.'.,. .0.,. .u.'.a.l.i.a.s.'.].)\" in config for pack \"my_pack\" (/opt/stackstorm/configs/my_pack.yaml): u'alias' is a required property.\r\n\r\nI guess there is a problem with path when you have an array becuase you get for example \r\ninstances[0]['alias'] for a path and the exception cannot produce a message\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport os\nimport re\n\nfrom st2common.util import schema as util_schema\nfrom st2common.constants.pack import MANIFEST_FILE_NAME\nfrom st2common.constants.pack import PACK_REF_WHITELIST_REGEX\nfrom st2common.content.loader import MetaLoader\nfrom st2common.persistence.pack import Pack\n\n__all__ = [\n 'get_pack_ref_from_metadata',\n 'get_pack_metadata',\n\n 'get_pack_common_libs_path_for_pack_ref',\n 'get_pack_common_libs_path_for_pack_db',\n\n 'validate_config_against_schema',\n\n 'normalize_pack_version'\n]\n\n\ndef get_pack_ref_from_metadata(metadata, pack_directory_name=None):\n \"\"\"\n Utility function which retrieves pack \"ref\" attribute from the pack metadata file.\n\n If this attribute is not provided, an attempt is made to infer \"ref\" from the \"name\" attribute.\n\n :rtype: ``str``\n \"\"\"\n pack_ref = None\n\n # The rules for the pack ref are as follows:\n # 1. If ref attribute is available, we used that\n # 2. If pack_directory_name is available we use that (this only applies to packs\n # which are in sub-directories)\n # 2. If attribute is not available, but pack name is and pack name meets the valid name\n # criteria, we use that\n if metadata.get('ref', None):\n pack_ref = metadata['ref']\n elif pack_directory_name and re.match(PACK_REF_WHITELIST_REGEX, pack_directory_name):\n pack_ref = pack_directory_name\n else:\n if re.match(PACK_REF_WHITELIST_REGEX, metadata['name']):\n pack_ref = metadata['name']\n else:\n msg = ('Pack name \"%s\" contains invalid characters and \"ref\" attribute is not '\n 'available. 
You either need to add \"ref\" attribute which contains only word '\n 'characters to the pack metadata file or update name attribute to contain only'\n 'word characters.')\n raise ValueError(msg % (metadata['name']))\n\n return pack_ref\n\n\ndef get_pack_metadata(pack_dir):\n \"\"\"\n Return parsed metadata for a particular pack directory.\n\n :rtype: ``dict``\n \"\"\"\n manifest_path = os.path.join(pack_dir, MANIFEST_FILE_NAME)\n\n if not os.path.isfile(manifest_path):\n raise ValueError('Pack \"%s\" is missing %s file' % (pack_dir, MANIFEST_FILE_NAME))\n\n meta_loader = MetaLoader()\n content = meta_loader.load(manifest_path)\n if not content:\n raise ValueError('Pack \"%s\" metadata file is empty' % (pack_dir))\n\n return content\n\n\ndef validate_config_against_schema(config_schema, config_object, config_path,\n pack_name=None):\n \"\"\"\n Validate provided config dictionary against the provided config schema\n dictionary.\n \"\"\"\n # NOTE: Lazy improt to avoid performance overhead of importing this module when it's not used\n import jsonschema\n\n pack_name = pack_name or 'unknown'\n\n schema = util_schema.get_schema_for_resource_parameters(parameters_schema=config_schema,\n allow_additional_properties=True)\n instance = config_object\n\n try:\n cleaned = util_schema.validate(instance=instance, schema=schema,\n cls=util_schema.CustomValidator, use_default=True,\n allow_default_none=True)\n except jsonschema.ValidationError as e:\n attribute = getattr(e, 'path', [])\n attribute = '.'.join(attribute)\n\n msg = ('Failed validating attribute \"%s\" in config for pack \"%s\" (%s): %s' %\n (attribute, pack_name, config_path, str(e)))\n raise jsonschema.ValidationError(msg)\n\n return cleaned\n\n\ndef get_pack_common_libs_path_for_pack_ref(pack_ref):\n pack_db = Pack.get_by_ref(pack_ref)\n pack_common_libs_path = get_pack_common_libs_path_for_pack_db(pack_db=pack_db)\n return pack_common_libs_path\n\n\ndef get_pack_common_libs_path_for_pack_db(pack_db):\n \"\"\"\n Return the pack's common lib path. This is the path where common code for sensors\n and actions are placed.\n\n For example, if the pack is at /opt/stackstorm/packs/my_pack, you can place\n common library code for actions and sensors in /opt/stackstorm/packs/my_pack/lib/.\n This common library code is only available for python sensors and actions. The lib\n structure also needs to follow a python convention with a __init__.py file.\n\n :param pack_db: Pack DB model\n :type pack_db: :class:`PackDB`\n\n :rtype: ``str``\n \"\"\"\n pack_dir = getattr(pack_db, 'path', None)\n\n if not pack_dir:\n return None\n\n libs_path = os.path.join(pack_dir, 'lib')\n\n return libs_path\n\n\ndef normalize_pack_version(version):\n \"\"\"\n Normalize old, pre StackStorm v2.1 non valid semver version string (e.g. 0.2) to a valid\n semver version string (0.2.0).\n\n :rtype: ``str``\n \"\"\"\n version = str(version)\n\n version_seperator_count = version.count('.')\n if version_seperator_count == 1:\n version = version + '.0'\n\n return version\n", "path": "st2common/st2common/util/pack.py"}]}
| 3,029 | 249 |
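The underlying problem in this record is that jsonschema reports error locations as a deque mixing strings and integer list indices, so '.'.join(attribute) raises a TypeError, and the reporter's '.'.join(str(attribute)) joins the characters of the deque's repr instead (hence the d.e.q.u.e.(... message). Below is a minimal reproduction plus the per-item stringification the merged fix relies on; it assumes the jsonschema package is installed, and the schema is invented for illustration.

```
import jsonschema

schema = {
    "type": "object",
    "properties": {
        "instances": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {"alias": {"type": "string"}},
                "required": ["alias"],
            },
        }
    },
}

config = {"instances": [{"alias": "ok"}, {"host": "missing-alias"}]}

try:
    jsonschema.validate(instance=config, schema=schema)
except jsonschema.ValidationError as e:
    # e.path is e.g. deque(['instances', 1]) -- note the integer index.
    attribute = ".".join(str(item) for item in e.path)
    print('Failed validating attribute "%s": %s' % (attribute, e.message))
    # -> Failed validating attribute "instances.1": 'alias' is a required property
```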
gh_patches_debug_21973
|
rasdani/github-patches
|
git_diff
|
statsmodels__statsmodels-2919
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Try to coerce index into datetime before failing.
See what we can do for these types of problems (maybe with a warning about the copy/conversion).
http://stackoverflow.com/questions/19892830/issue-with-dates-in-statsmodels-and-pandas
</issue>
<code>
[start of statsmodels/tsa/base/tsa_model.py]
1 from statsmodels.compat.python import lrange
2 import statsmodels.base.model as base
3 from statsmodels.base import data
4 import statsmodels.base.wrapper as wrap
5 from statsmodels.tsa.base import datetools
6 from numpy import arange, asarray
7 from pandas import Index
8 from pandas import datetools as pandas_datetools
9 import datetime
10
11 _freq_to_pandas = datetools._freq_to_pandas
12
13 _tsa_doc = """
14 %(model)s
15
16 Parameters
17 ----------
18 %(params)s
19 dates : array-like of datetime, optional
20 An array-like object of datetime objects. If a pandas object is given
21 for endog or exog, it is assumed to have a DateIndex.
22 freq : str, optional
23 The frequency of the time-series. A Pandas offset or 'B', 'D', 'W',
24 'M', 'A', or 'Q'. This is optional if dates are given.
25 %(extra_params)s
26 %(extra_sections)s
27 """
28
29 _model_doc = "Timeseries model base class"
30
31 _generic_params = base._model_params_doc
32 _missing_param_doc = base._missing_param_doc
33
34 class TimeSeriesModel(base.LikelihoodModel):
35
36 __doc__ = _tsa_doc % {"model" : _model_doc, "params" : _generic_params,
37 "extra_params" : _missing_param_doc,
38 "extra_sections" : ""}
39
40 def __init__(self, endog, exog=None, dates=None, freq=None, missing='none'):
41 super(TimeSeriesModel, self).__init__(endog, exog, missing=missing)
42 self._init_dates(dates, freq)
43
44 def _init_dates(self, dates, freq):
45 if dates is None:
46 dates = self.data.row_labels
47
48 if dates is not None:
49 if (not datetools._is_datetime_index(dates) and
50 isinstance(self.data, data.PandasData)):
51 raise ValueError("Given a pandas object and the index does "
52 "not contain dates")
53 if not freq:
54 try:
55 freq = datetools._infer_freq(dates)
56 except:
57 raise ValueError("Frequency inference failed. Use `freq` "
58 "keyword.")
59 dates = Index(dates)
60 self.data.dates = dates
61 if freq:
62 try: #NOTE: Can drop this once we move to pandas >= 0.8.x
63 _freq_to_pandas[freq]
64 except:
65 raise ValueError("freq %s not understood" % freq)
66 self.data.freq = freq
67
68 def _get_exog_names(self):
69 return self.data.xnames
70
71 def _set_exog_names(self, vals):
72 if not isinstance(vals, list):
73 vals = [vals]
74 self.data.xnames = vals
75
76 #overwrite with writable property for (V)AR models
77 exog_names = property(_get_exog_names, _set_exog_names)
78
79 def _get_dates_loc(self, dates, date):
80 if hasattr(dates, 'indexMap'): # 0.7.x
81 date = dates.indexMap[date]
82 else:
83 date = dates.get_loc(date)
84 try: # pandas 0.8.0 returns a boolean array
85 len(date)
86 from numpy import where
87 date = where(date)[0].item()
88 except TypeError: # this is expected behavior
89 pass
90 return date
91
92 def _str_to_date(self, date):
93 """
94 Takes a string and returns a datetime object
95 """
96 return datetools.date_parser(date)
97
98 def _set_predict_start_date(self, start):
99 dates = self.data.dates
100 if dates is None:
101 return
102 if start > len(dates):
103 raise ValueError("Start must be <= len(endog)")
104 if start == len(dates):
105 self.data.predict_start = datetools._date_from_idx(dates[-1],
106 1, self.data.freq)
107 elif start < len(dates):
108 self.data.predict_start = dates[start]
109 else:
110 raise ValueError("Start must be <= len(dates)")
111
112 def _get_predict_start(self, start):
113 """
114 Returns the index of the given start date. Subclasses should define
115 default behavior for start = None. That isn't handled here.
116
117 Start can be a string or an integer if self.data.dates is None.
118 """
119 dates = self.data.dates
120 if isinstance(start, str):
121 if dates is None:
122 raise ValueError("Got a string for start and dates is None")
123 dtstart = self._str_to_date(start)
124 self.data.predict_start = dtstart
125 try:
126 start = self._get_dates_loc(dates, dtstart)
127 except KeyError:
128 raise ValueError("Start must be in dates. Got %s | %s" %
129 (str(start), str(dtstart)))
130
131 self._set_predict_start_date(start)
132 return start
133
134
135 def _get_predict_end(self, end):
136 """
137 See _get_predict_start for more information. Subclasses do not
138 need to define anything for this.
139 """
140
141 out_of_sample = 0 # will be overwritten if needed
142 if end is None: # use data for ARIMA - endog changes
143 end = len(self.data.endog) - 1
144
145 dates = self.data.dates
146 freq = self.data.freq
147
148 if isinstance(end, str) or (dates is not None
149 and isinstance(end, type(dates[0]))):
150 if dates is None:
151 raise ValueError("Got a string or date for `end` and `dates` is None")
152
153 if isinstance(end, str):
154 dtend = self._str_to_date(end)
155 else:
156 dtend = end # end could be a pandas TimeStamp not a datetime
157
158 self.data.predict_end = dtend
159 try:
160 end = self._get_dates_loc(dates, dtend)
161 except KeyError as err: # end is greater than dates[-1]...probably
162 if dtend > self.data.dates[-1]:
163 end = len(self.data.endog) - 1
164 freq = self.data.freq
165 out_of_sample = datetools._idx_from_dates(dates[-1], dtend,
166 freq)
167 else:
168 if freq is None:
169 raise ValueError("There is no frequency for these "
170 "dates and date %s is not in dates "
171 "index. Try giving a date that is in "
172 "the dates index or use an integer."
173 % dtend)
174 else: #pragma: no cover
175 raise err # should never get here
176 self._make_predict_dates() # attaches self.data.predict_dates
177
178 elif isinstance(end, int) and dates is not None:
179 try:
180 self.data.predict_end = dates[end]
181 except IndexError as err:
182 nobs = len(self.data.endog) - 1 # as an index
183 out_of_sample = end - nobs
184 end = nobs
185 if freq is not None:
186 self.data.predict_end = datetools._date_from_idx(dates[-1],
187 out_of_sample, freq)
188 elif out_of_sample <= 0: # have no frequency but are in sample
189 #TODO: what error to catch here to make sure dates is
190 #on the index?
191 try:
192 self.data.predict_end = self._get_dates_loc(dates,
193 end)
194 except KeyError:
195 raise
196 else:
197 self.data.predict_end = end + out_of_sample
198 self.data.predict_start = self._get_dates_loc(dates,
199 self.data.predict_start)
200
201 self._make_predict_dates()
202
203 elif isinstance(end, int):
204 nobs = len(self.data.endog) - 1 # is an index
205 if end > nobs:
206 out_of_sample = end - nobs
207 end = nobs
208
209 elif freq is None: # should have a date with freq = None
210 raise ValueError("When freq is None, you must give an integer "
211 "index for end.")
212
213 else:
214 raise ValueError("no rule for interpreting end")
215
216 return end, out_of_sample
217
218 def _make_predict_dates(self):
219 data = self.data
220 dtstart = data.predict_start
221 dtend = data.predict_end
222 freq = data.freq
223
224 if freq is not None:
225 pandas_freq = _freq_to_pandas[freq]
226 try:
227 from pandas import DatetimeIndex
228 dates = DatetimeIndex(start=dtstart, end=dtend,
229 freq=pandas_freq)
230 except ImportError as err:
231 from pandas import DateRange
232 dates = DateRange(dtstart, dtend, offset = pandas_freq).values
233 # handle
234 elif freq is None and (isinstance(dtstart, int) and
235 isinstance(dtend, int)):
236 from pandas import Index
237 dates = Index(lrange(dtstart, dtend+1))
238 # if freq is None and dtstart and dtend aren't integers, we're
239 # in sample
240 else:
241 dates = self.data.dates
242 start = self._get_dates_loc(dates, dtstart)
243 end = self._get_dates_loc(dates, dtend)
244 dates = dates[start:end+1] # is this index inclusive?
245 self.data.predict_dates = dates
246
247 class TimeSeriesModelResults(base.LikelihoodModelResults):
248 def __init__(self, model, params, normalized_cov_params, scale=1.):
249 self.data = model.data
250 super(TimeSeriesModelResults,
251 self).__init__(model, params, normalized_cov_params, scale)
252
253 class TimeSeriesResultsWrapper(wrap.ResultsWrapper):
254 _attrs = {}
255 _wrap_attrs = wrap.union_dicts(base.LikelihoodResultsWrapper._wrap_attrs,
256 _attrs)
257 _methods = {'predict' : 'dates'}
258 _wrap_methods = wrap.union_dicts(base.LikelihoodResultsWrapper._wrap_methods,
259 _methods)
260 wrap.populate_wrapper(TimeSeriesResultsWrapper,
261 TimeSeriesModelResults)
262
263 if __name__ == "__main__":
264 import statsmodels.api as sm
265 import datetime
266 import pandas
267
268 data = sm.datasets.macrodata.load()
269
270 #make a DataFrame
271 #TODO: attach a DataFrame to some of the datasets, for quicker use
272 dates = [str(int(x[0])) +':'+ str(int(x[1])) \
273 for x in data.data[['year','quarter']]]
274
275 df = pandas.DataFrame(data.data[['realgdp','realinv','realcons']], index=dates)
276 ex_mod = TimeSeriesModel(df)
277 #ts_series = pandas.Series()
278
279
280
[end of statsmodels/tsa/base/tsa_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/statsmodels/tsa/base/tsa_model.py b/statsmodels/tsa/base/tsa_model.py
--- a/statsmodels/tsa/base/tsa_model.py
+++ b/statsmodels/tsa/base/tsa_model.py
@@ -4,7 +4,7 @@
import statsmodels.base.wrapper as wrap
from statsmodels.tsa.base import datetools
from numpy import arange, asarray
-from pandas import Index
+from pandas import Index, to_datetime
from pandas import datetools as pandas_datetools
import datetime
@@ -48,8 +48,11 @@
if dates is not None:
if (not datetools._is_datetime_index(dates) and
isinstance(self.data, data.PandasData)):
- raise ValueError("Given a pandas object and the index does "
- "not contain dates")
+ try:
+ dates = to_datetime(dates)
+ except ValueError:
+ raise ValueError("Given a pandas object and the index does "
+ "not contain dates")
if not freq:
try:
freq = datetools._infer_freq(dates)
|
{"golden_diff": "diff --git a/statsmodels/tsa/base/tsa_model.py b/statsmodels/tsa/base/tsa_model.py\n--- a/statsmodels/tsa/base/tsa_model.py\n+++ b/statsmodels/tsa/base/tsa_model.py\n@@ -4,7 +4,7 @@\n import statsmodels.base.wrapper as wrap\n from statsmodels.tsa.base import datetools\n from numpy import arange, asarray\n-from pandas import Index\n+from pandas import Index, to_datetime\n from pandas import datetools as pandas_datetools\n import datetime\n \n@@ -48,8 +48,11 @@\n if dates is not None:\n if (not datetools._is_datetime_index(dates) and\n isinstance(self.data, data.PandasData)):\n- raise ValueError(\"Given a pandas object and the index does \"\n- \"not contain dates\")\n+ try:\n+ dates = to_datetime(dates)\n+ except ValueError:\n+ raise ValueError(\"Given a pandas object and the index does \"\n+ \"not contain dates\")\n if not freq:\n try:\n freq = datetools._infer_freq(dates)\n", "issue": "Try to coerce index into datetime before failing.\nSee what we can do for these types of problems (maybe with a warning about the copy/conversion).\n\nhttp://stackoverflow.com/questions/19892830/issue-with-dates-in-statsmodels-and-pandas\n\n", "before_files": [{"content": "from statsmodels.compat.python import lrange\nimport statsmodels.base.model as base\nfrom statsmodels.base import data\nimport statsmodels.base.wrapper as wrap\nfrom statsmodels.tsa.base import datetools\nfrom numpy import arange, asarray\nfrom pandas import Index\nfrom pandas import datetools as pandas_datetools\nimport datetime\n\n_freq_to_pandas = datetools._freq_to_pandas\n\n_tsa_doc = \"\"\"\n %(model)s\n\n Parameters\n ----------\n %(params)s\n dates : array-like of datetime, optional\n An array-like object of datetime objects. If a pandas object is given\n for endog or exog, it is assumed to have a DateIndex.\n freq : str, optional\n The frequency of the time-series. A Pandas offset or 'B', 'D', 'W',\n 'M', 'A', or 'Q'. This is optional if dates are given.\n %(extra_params)s\n %(extra_sections)s\n\"\"\"\n\n_model_doc = \"Timeseries model base class\"\n\n_generic_params = base._model_params_doc\n_missing_param_doc = base._missing_param_doc\n\nclass TimeSeriesModel(base.LikelihoodModel):\n\n __doc__ = _tsa_doc % {\"model\" : _model_doc, \"params\" : _generic_params,\n \"extra_params\" : _missing_param_doc,\n \"extra_sections\" : \"\"}\n\n def __init__(self, endog, exog=None, dates=None, freq=None, missing='none'):\n super(TimeSeriesModel, self).__init__(endog, exog, missing=missing)\n self._init_dates(dates, freq)\n\n def _init_dates(self, dates, freq):\n if dates is None:\n dates = self.data.row_labels\n\n if dates is not None:\n if (not datetools._is_datetime_index(dates) and\n isinstance(self.data, data.PandasData)):\n raise ValueError(\"Given a pandas object and the index does \"\n \"not contain dates\")\n if not freq:\n try:\n freq = datetools._infer_freq(dates)\n except:\n raise ValueError(\"Frequency inference failed. 
Use `freq` \"\n \"keyword.\")\n dates = Index(dates)\n self.data.dates = dates\n if freq:\n try: #NOTE: Can drop this once we move to pandas >= 0.8.x\n _freq_to_pandas[freq]\n except:\n raise ValueError(\"freq %s not understood\" % freq)\n self.data.freq = freq\n\n def _get_exog_names(self):\n return self.data.xnames\n\n def _set_exog_names(self, vals):\n if not isinstance(vals, list):\n vals = [vals]\n self.data.xnames = vals\n\n #overwrite with writable property for (V)AR models\n exog_names = property(_get_exog_names, _set_exog_names)\n\n def _get_dates_loc(self, dates, date):\n if hasattr(dates, 'indexMap'): # 0.7.x\n date = dates.indexMap[date]\n else:\n date = dates.get_loc(date)\n try: # pandas 0.8.0 returns a boolean array\n len(date)\n from numpy import where\n date = where(date)[0].item()\n except TypeError: # this is expected behavior\n pass\n return date\n\n def _str_to_date(self, date):\n \"\"\"\n Takes a string and returns a datetime object\n \"\"\"\n return datetools.date_parser(date)\n\n def _set_predict_start_date(self, start):\n dates = self.data.dates\n if dates is None:\n return\n if start > len(dates):\n raise ValueError(\"Start must be <= len(endog)\")\n if start == len(dates):\n self.data.predict_start = datetools._date_from_idx(dates[-1],\n 1, self.data.freq)\n elif start < len(dates):\n self.data.predict_start = dates[start]\n else:\n raise ValueError(\"Start must be <= len(dates)\")\n\n def _get_predict_start(self, start):\n \"\"\"\n Returns the index of the given start date. Subclasses should define\n default behavior for start = None. That isn't handled here.\n\n Start can be a string or an integer if self.data.dates is None.\n \"\"\"\n dates = self.data.dates\n if isinstance(start, str):\n if dates is None:\n raise ValueError(\"Got a string for start and dates is None\")\n dtstart = self._str_to_date(start)\n self.data.predict_start = dtstart\n try:\n start = self._get_dates_loc(dates, dtstart)\n except KeyError:\n raise ValueError(\"Start must be in dates. Got %s | %s\" %\n (str(start), str(dtstart)))\n\n self._set_predict_start_date(start)\n return start\n\n\n def _get_predict_end(self, end):\n \"\"\"\n See _get_predict_start for more information. Subclasses do not\n need to define anything for this.\n \"\"\"\n\n out_of_sample = 0 # will be overwritten if needed\n if end is None: # use data for ARIMA - endog changes\n end = len(self.data.endog) - 1\n\n dates = self.data.dates\n freq = self.data.freq\n\n if isinstance(end, str) or (dates is not None\n and isinstance(end, type(dates[0]))):\n if dates is None:\n raise ValueError(\"Got a string or date for `end` and `dates` is None\")\n\n if isinstance(end, str):\n dtend = self._str_to_date(end)\n else:\n dtend = end # end could be a pandas TimeStamp not a datetime\n\n self.data.predict_end = dtend\n try:\n end = self._get_dates_loc(dates, dtend)\n except KeyError as err: # end is greater than dates[-1]...probably\n if dtend > self.data.dates[-1]:\n end = len(self.data.endog) - 1\n freq = self.data.freq\n out_of_sample = datetools._idx_from_dates(dates[-1], dtend,\n freq)\n else:\n if freq is None:\n raise ValueError(\"There is no frequency for these \"\n \"dates and date %s is not in dates \"\n \"index. 
Try giving a date that is in \"\n \"the dates index or use an integer.\"\n % dtend)\n else: #pragma: no cover\n raise err # should never get here\n self._make_predict_dates() # attaches self.data.predict_dates\n\n elif isinstance(end, int) and dates is not None:\n try:\n self.data.predict_end = dates[end]\n except IndexError as err:\n nobs = len(self.data.endog) - 1 # as an index\n out_of_sample = end - nobs\n end = nobs\n if freq is not None:\n self.data.predict_end = datetools._date_from_idx(dates[-1],\n out_of_sample, freq)\n elif out_of_sample <= 0: # have no frequency but are in sample\n #TODO: what error to catch here to make sure dates is\n #on the index?\n try:\n self.data.predict_end = self._get_dates_loc(dates,\n end)\n except KeyError:\n raise\n else:\n self.data.predict_end = end + out_of_sample\n self.data.predict_start = self._get_dates_loc(dates,\n self.data.predict_start)\n\n self._make_predict_dates()\n\n elif isinstance(end, int):\n nobs = len(self.data.endog) - 1 # is an index\n if end > nobs:\n out_of_sample = end - nobs\n end = nobs\n\n elif freq is None: # should have a date with freq = None\n raise ValueError(\"When freq is None, you must give an integer \"\n \"index for end.\")\n\n else:\n raise ValueError(\"no rule for interpreting end\")\n\n return end, out_of_sample\n\n def _make_predict_dates(self):\n data = self.data\n dtstart = data.predict_start\n dtend = data.predict_end\n freq = data.freq\n\n if freq is not None:\n pandas_freq = _freq_to_pandas[freq]\n try:\n from pandas import DatetimeIndex\n dates = DatetimeIndex(start=dtstart, end=dtend,\n freq=pandas_freq)\n except ImportError as err:\n from pandas import DateRange\n dates = DateRange(dtstart, dtend, offset = pandas_freq).values\n # handle\n elif freq is None and (isinstance(dtstart, int) and\n isinstance(dtend, int)):\n from pandas import Index\n dates = Index(lrange(dtstart, dtend+1))\n # if freq is None and dtstart and dtend aren't integers, we're\n # in sample\n else:\n dates = self.data.dates\n start = self._get_dates_loc(dates, dtstart)\n end = self._get_dates_loc(dates, dtend)\n dates = dates[start:end+1] # is this index inclusive?\n self.data.predict_dates = dates\n\nclass TimeSeriesModelResults(base.LikelihoodModelResults):\n def __init__(self, model, params, normalized_cov_params, scale=1.):\n self.data = model.data\n super(TimeSeriesModelResults,\n self).__init__(model, params, normalized_cov_params, scale)\n\nclass TimeSeriesResultsWrapper(wrap.ResultsWrapper):\n _attrs = {}\n _wrap_attrs = wrap.union_dicts(base.LikelihoodResultsWrapper._wrap_attrs,\n _attrs)\n _methods = {'predict' : 'dates'}\n _wrap_methods = wrap.union_dicts(base.LikelihoodResultsWrapper._wrap_methods,\n _methods)\nwrap.populate_wrapper(TimeSeriesResultsWrapper,\n TimeSeriesModelResults)\n\nif __name__ == \"__main__\":\n import statsmodels.api as sm\n import datetime\n import pandas\n\n data = sm.datasets.macrodata.load()\n\n #make a DataFrame\n #TODO: attach a DataFrame to some of the datasets, for quicker use\n dates = [str(int(x[0])) +':'+ str(int(x[1])) \\\n for x in data.data[['year','quarter']]]\n\n df = pandas.DataFrame(data.data[['realgdp','realinv','realcons']], index=dates)\n ex_mod = TimeSeriesModel(df)\n #ts_series = pandas.Series()\n\n\n", "path": "statsmodels/tsa/base/tsa_model.py"}]}
| 3,630 | 241 |
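Aside (illustrative, not part of the dataset record above): the diff in that record makes statsmodels try `pandas.to_datetime` on a non-datetime index before giving up. The sketch below shows the same coercion pattern standalone with current pandas; the error message is copied from the diff, while the helper name and the example values are assumptions for illustration only.

```python
import pandas as pd

def coerce_to_dates(index):
    """Try to interpret an arbitrary index as dates before rejecting it."""
    try:
        return pd.to_datetime(index)
    except (ValueError, TypeError):
        raise ValueError("Given a pandas object and the index does "
                         "not contain dates")

# String dates coerce to a DatetimeIndex; non-date strings still raise.
print(coerce_to_dates(["1959-03-31", "1959-06-30"]))
```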
gh_patches_debug_13104
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-835
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Vispy Windows XP Compatibility
When trying to run a Vispy application on Windows XP, the project will not start due to the fault "Attribute Error: function 'SetProcessDPIAware' not found". The first instance was traced back to "vispy\ext\gdi32plus.py line 165"; when this line was commented out, the program ran again and produced the same fault, now pointing at "vispy\util\dpi_win32.py line 13". Once this line was also commented out, the program finally loaded. Exhaustive testing is not yet complete, but the program is basically functional.
It appears the "SetProcessDPIAware" API is not supported on Windows XP and is only partially supported on Windows Vista and later; see https://msdn.microsoft.com/en-us/library/dn469266(v=vs.85).aspx and https://msdn.microsoft.com/en-us/library/dn302122(v=vs.85).aspx.
Further testing needs to be done to see whether Vispy is obtaining the proper DPI settings.
Reference: http://www.experts-exchange.com/Programming/Languages/Scripting/Python/Q_21794611.html
</issue>
<code>
[start of vispy/ext/gdi32plus.py]
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright (c) 2014, Vispy Development Team. All Rights Reserved.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 # -----------------------------------------------------------------------------
6
7 # Adapted from Pyglet
8
9 import atexit
10 from functools import partial
11 import struct
12
13 from ctypes import (windll, Structure, POINTER, byref, WINFUNCTYPE,
14 c_uint, c_float, c_int, c_ulong, c_uint64,
15 c_void_p, c_uint32, c_wchar, c_wchar_p)
16 from ctypes.wintypes import (LONG, BYTE, HFONT, HGDIOBJ, BOOL, UINT, INT,
17 DWORD, LPARAM)
18
19 try:
20 import _winreg as winreg
21 except ImportError:
22 import winreg # noqa, analysis:ignore
23
24 _64_bit = (8 * struct.calcsize("P")) == 64
25
26 LF_FACESIZE = 32
27 FW_BOLD = 700
28 FW_NORMAL = 400
29 ANTIALIASED_QUALITY = 4
30 FontStyleBold = 1
31 FontStyleItalic = 2
32 UnitPixel = 2
33 UnitPoint = 3
34 DEFAULT_CHARSET = 1
35 ANSI_CHARSET = 0
36 TRUETYPE_FONTTYPE = 4
37 GM_ADVANCED = 2
38 CSIDL_FONTS = 0x0014
39
40 PixelFormat24bppRGB = 137224
41 PixelFormat32bppRGB = 139273
42 PixelFormat32bppARGB = 2498570
43
44 DriverStringOptionsCmapLookup = 1
45 DriverStringOptionsRealizedAdvance = 4
46 TextRenderingHintAntiAlias = 4
47 TextRenderingHintAntiAliasGridFit = 3
48 ImageLockModeRead = 1
49 StringFormatFlagsMeasureTrailingSpaces = 0x00000800
50 StringFormatFlagsNoClip = 0x00004000
51 StringFormatFlagsNoFitBlackBox = 0x00000004
52
53 INT_PTR = c_int
54 REAL = c_float
55 TCHAR = c_wchar
56 UINT32 = c_uint32
57 HDC = c_void_p
58 PSTR = c_uint64 if _64_bit else c_uint
59
60 HORZSIZE = 4
61 VERTSIZE = 6
62
63 HORZRES = 8
64 VERTRES = 10
65
66
67 # gdi32
68
69 class POINT(Structure):
70 _fields_ = [('x', LONG), ('y', LONG)]
71
72
73 class RECT(Structure):
74 _fields_ = [('left', LONG), ('top', LONG),
75 ('right', LONG), ('bottom', LONG)]
76
77
78 class PANOSE(Structure):
79 _fields_ = [
80 ('bFamilyType', BYTE), ('bSerifStyle', BYTE), ('bWeight', BYTE),
81 ('bProportion', BYTE), ('bContrast', BYTE), ('bStrokeVariation', BYTE),
82 ('bArmStyle', BYTE), ('bLetterform', BYTE), ('bMidline', BYTE),
83 ('bXHeight', BYTE)]
84
85
86 class TEXTMETRIC(Structure):
87 _fields_ = [
88 ('tmHeight', LONG), ('tmAscent', LONG), ('tmDescent', LONG),
89 ('tmInternalLeading', LONG), ('tmExternalLeading', LONG),
90 ('tmAveCharWidth', LONG), ('tmMaxCharWidth', LONG),
91 ('tmWeight', LONG), ('tmOverhang', LONG),
92 ('tmDigitizedAspectX', LONG), ('tmDigitizedAspectY', LONG),
93 ('tmFirstChar', TCHAR), ('tmLastChar', TCHAR),
94 ('tmDefaultChar', TCHAR), ('tmBreakChar', TCHAR),
95 ('tmItalic', BYTE), ('tmUnderlined', BYTE),
96 ('tmStruckOut', BYTE), ('tmPitchAndFamily', BYTE),
97 ('tmCharSet', BYTE)]
98
99
100 class OUTLINETEXTMETRIC(Structure):
101 _fields_ = [
102 ('otmSize', UINT), ('otmTextMetrics', TEXTMETRIC),
103 ('otmMysteryBytes', BYTE), ('otmPanoseNumber', PANOSE),
104 ('otmMysteryByte', BYTE),
105 ('otmfsSelection', UINT), ('otmfsType', UINT),
106 ('otmsCharSlopeRise', INT), ('otmsCharSlopeRun', INT),
107 ('otmItalicAngle', INT), ('otmEMSquare', UINT), ('otmAscent', INT),
108 ('otmDescent', INT), ('otmLineGap', UINT), ('otmsCapEmHeight', UINT),
109 ('otmsXHeight', UINT), ('otmrcFontBox', RECT), ('otmMacAscent', INT),
110 ('otmMacDescent', INT), ('otmMacLineGap', UINT),
111 ('otmusMinimumPPEM', UINT), ('otmptSubscriptSize', POINT),
112 ('otmptSubscriptOffset', POINT), ('otmptSuperscriptSize', POINT),
113 ('otmptSuperscriptOffset', POINT), ('otmsStrikeoutSize', UINT),
114 ('otmsStrikeoutPosition', INT), ('otmsUnderscoreSize', INT),
115 ('otmsUnderscorePosition', INT), ('otmpFamilyName', PSTR),
116 ('otmpFaceName', PSTR), ('otmpStyleName', PSTR),
117 ('otmpFullName', PSTR), ('junk', (BYTE) * 1024)] # room for strs
118
119
120 class LOGFONT(Structure):
121 _fields_ = [
122 ('lfHeight', LONG), ('lfWidth', LONG), ('lfEscapement', LONG),
123 ('lfOrientation', LONG), ('lfWeight', LONG), ('lfItalic', BYTE),
124 ('lfUnderline', BYTE), ('lfStrikeOut', BYTE), ('lfCharSet', BYTE),
125 ('lfOutPrecision', BYTE), ('lfClipPrecision', BYTE),
126 ('lfQuality', BYTE), ('lfPitchAndFamily', BYTE),
127 ('lfFaceName', (TCHAR * LF_FACESIZE))]
128
129
130 gdi32 = windll.gdi32
131
132 gdi32.CreateFontIndirectW.restype = HFONT
133 gdi32.CreateFontIndirectW.argtypes = [POINTER(LOGFONT)]
134
135 gdi32.SelectObject.restype = HGDIOBJ
136 gdi32.SelectObject.argtypes = [HDC, HGDIOBJ]
137
138 gdi32.SetGraphicsMode.restype = INT
139 gdi32.SetGraphicsMode.argtypes = [HDC, INT]
140
141 gdi32.GetTextMetricsW.restype = BOOL
142 gdi32.GetTextMetricsW.argtypes = [HDC, POINTER(TEXTMETRIC)]
143
144 FONTENUMPROC = WINFUNCTYPE(INT, POINTER(LOGFONT), POINTER(TEXTMETRIC),
145 DWORD, c_void_p)
146 gdi32.EnumFontFamiliesExW.restype = INT
147 gdi32.EnumFontFamiliesExW.argtypes = [HDC, POINTER(LOGFONT),
148 FONTENUMPROC, LPARAM, DWORD]
149
150 gdi32.GetOutlineTextMetricsW.restype = UINT
151 gdi32.GetOutlineTextMetricsW.argtypes = [HDC, UINT,
152 POINTER(OUTLINETEXTMETRIC)]
153
154
155 gdi32.GetDeviceCaps.argtypes = [HDC, INT]
156 gdi32.GetDeviceCaps.restype = INT
157
158 user32 = windll.user32
159
160 user32.GetDC.restype = HDC # HDC
161 user32.GetDC.argtypes = [UINT32] # HWND
162
163 user32.ReleaseDC.argtypes = [c_void_p, HDC]
164
165 user32.SetProcessDPIAware.argtypes = []
166
167
168 # gdiplus
169
170 class GdiplusStartupInput(Structure):
171 _fields_ = [
172 ('GdiplusVersion', UINT32), ('DebugEventCallback', c_void_p),
173 ('SuppressBackgroundThread', BOOL), ('SuppressExternalCodecs', BOOL)]
174
175
176 class GdiplusStartupOutput(Structure):
177 _fields = [('NotificationHookProc', c_void_p),
178 ('NotificationUnhookProc', c_void_p)]
179
180 gdiplus = windll.gdiplus
181
182 gdiplus.GdipCreateFontFamilyFromName.restype = c_int
183 gdiplus.GdipCreateFontFamilyFromName.argtypes = [c_wchar_p, c_void_p, c_void_p]
184
185 gdiplus.GdipNewPrivateFontCollection.restype = c_int
186 gdiplus.GdipNewPrivateFontCollection.argtypes = [c_void_p]
187
188 gdiplus.GdipPrivateAddFontFile.restype = c_int
189 gdiplus.GdipPrivateAddFontFile.argtypes = [c_void_p, c_wchar_p]
190
191 gdiplus.GdipGetFamilyName.restype = c_int
192 gdiplus.GdipGetFamilyName.argtypes = [c_void_p, c_wchar_p, c_int]
193
194
195 def gdiplus_init():
196 token = c_ulong()
197 startup_in = GdiplusStartupInput()
198 startup_in.GdiplusVersion = 1
199 startup_out = GdiplusStartupOutput()
200 gdiplus.GdiplusStartup(byref(token), byref(startup_in), byref(startup_out))
201 atexit.register(partial(gdiplus.GdiplusShutdown, token))
202
203 gdiplus_init()
204
[end of vispy/ext/gdi32plus.py]
[start of vispy/util/dpi/_win32.py]
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright (c) 2014, Vispy Development Team. All Rights Reserved.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 # -----------------------------------------------------------------------------
6
7 from ...ext.gdi32plus import (gdi32, user32, HORZSIZE, VERTSIZE,
8 HORZRES, VERTRES)
9
10
11 def get_dpi():
12 """Get screen DPI from the OS"""
13 user32.SetProcessDPIAware()
14 dc = user32.GetDC(0)
15 h_size = gdi32.GetDeviceCaps(dc, HORZSIZE)
16 v_size = gdi32.GetDeviceCaps(dc, VERTSIZE)
17 h_res = gdi32.GetDeviceCaps(dc, HORZRES)
18 v_res = gdi32.GetDeviceCaps(dc, VERTRES)
19 user32.ReleaseDC(None, dc)
20 return (h_res/float(h_size) + v_res/float(v_size)) * 0.5 * 25.4
21
[end of vispy/util/dpi/_win32.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/vispy/ext/gdi32plus.py b/vispy/ext/gdi32plus.py
--- a/vispy/ext/gdi32plus.py
+++ b/vispy/ext/gdi32plus.py
@@ -162,7 +162,10 @@
user32.ReleaseDC.argtypes = [c_void_p, HDC]
-user32.SetProcessDPIAware.argtypes = []
+try:
+ user32.SetProcessDPIAware.argtypes = []
+except AttributeError:
+ pass # not present on XP
# gdiplus
diff --git a/vispy/util/dpi/_win32.py b/vispy/util/dpi/_win32.py
--- a/vispy/util/dpi/_win32.py
+++ b/vispy/util/dpi/_win32.py
@@ -10,7 +10,10 @@
def get_dpi():
"""Get screen DPI from the OS"""
- user32.SetProcessDPIAware()
+ try:
+ user32.SetProcessDPIAware()
+ except AttributeError:
+ pass # not present on XP
dc = user32.GetDC(0)
h_size = gdi32.GetDeviceCaps(dc, HORZSIZE)
v_size = gdi32.GetDeviceCaps(dc, VERTSIZE)
|
{"golden_diff": "diff --git a/vispy/ext/gdi32plus.py b/vispy/ext/gdi32plus.py\n--- a/vispy/ext/gdi32plus.py\n+++ b/vispy/ext/gdi32plus.py\n@@ -162,7 +162,10 @@\n \n user32.ReleaseDC.argtypes = [c_void_p, HDC]\n \n-user32.SetProcessDPIAware.argtypes = []\n+try:\n+ user32.SetProcessDPIAware.argtypes = []\n+except AttributeError:\n+ pass # not present on XP\n \n \n # gdiplus\ndiff --git a/vispy/util/dpi/_win32.py b/vispy/util/dpi/_win32.py\n--- a/vispy/util/dpi/_win32.py\n+++ b/vispy/util/dpi/_win32.py\n@@ -10,7 +10,10 @@\n \n def get_dpi():\n \"\"\"Get screen DPI from the OS\"\"\"\n- user32.SetProcessDPIAware()\n+ try:\n+ user32.SetProcessDPIAware()\n+ except AttributeError:\n+ pass # not present on XP\n dc = user32.GetDC(0)\n h_size = gdi32.GetDeviceCaps(dc, HORZSIZE)\n v_size = gdi32.GetDeviceCaps(dc, VERTSIZE)\n", "issue": "Vispy Windows XP Compatibility\nWhen trying to run a Vispy Application in windows xp project will not start due to fault \"Attribute Error: function 'SetProcessDPIAware' not found\". First instance traced back to \"vispy\\ext\\gdi32plus.py line 165\" this line was commented out and program ran again and produced the same fault but pointed at \"vispy\\util\\dpi_win32.py line 13\" this line was also commented out and the program finally loaded. Exhaustive testing is not yet complete but program is basically functional.\n\nIt appears the \"SetProcessDPIAware\" API is not supported in windows XP and only partially supported in windows vista and later, see.... https://msdn.microsoft.com/en-us/library/dn469266(v=vs.85).aspx and .... https://msdn.microsoft.com/en-us/library/dn302122(v=vs.85).aspx.\n\nFurther testing needs to be done to see whether Vispy is obtaining proper DPI settings. \nreference: http://www.experts-exchange.com/Programming/Languages/Scripting/Python/Q_21794611.html\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) 2014, Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. 
See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n\n# Adapted from Pyglet\n\nimport atexit\nfrom functools import partial\nimport struct\n\nfrom ctypes import (windll, Structure, POINTER, byref, WINFUNCTYPE,\n c_uint, c_float, c_int, c_ulong, c_uint64,\n c_void_p, c_uint32, c_wchar, c_wchar_p)\nfrom ctypes.wintypes import (LONG, BYTE, HFONT, HGDIOBJ, BOOL, UINT, INT,\n DWORD, LPARAM)\n\ntry:\n import _winreg as winreg\nexcept ImportError:\n import winreg # noqa, analysis:ignore\n\n_64_bit = (8 * struct.calcsize(\"P\")) == 64\n\nLF_FACESIZE = 32\nFW_BOLD = 700\nFW_NORMAL = 400\nANTIALIASED_QUALITY = 4\nFontStyleBold = 1\nFontStyleItalic = 2\nUnitPixel = 2\nUnitPoint = 3\nDEFAULT_CHARSET = 1\nANSI_CHARSET = 0\nTRUETYPE_FONTTYPE = 4\nGM_ADVANCED = 2\nCSIDL_FONTS = 0x0014\n\nPixelFormat24bppRGB = 137224\nPixelFormat32bppRGB = 139273\nPixelFormat32bppARGB = 2498570\n\nDriverStringOptionsCmapLookup = 1\nDriverStringOptionsRealizedAdvance = 4\nTextRenderingHintAntiAlias = 4\nTextRenderingHintAntiAliasGridFit = 3\nImageLockModeRead = 1\nStringFormatFlagsMeasureTrailingSpaces = 0x00000800\nStringFormatFlagsNoClip = 0x00004000\nStringFormatFlagsNoFitBlackBox = 0x00000004\n\nINT_PTR = c_int\nREAL = c_float\nTCHAR = c_wchar\nUINT32 = c_uint32\nHDC = c_void_p\nPSTR = c_uint64 if _64_bit else c_uint\n\nHORZSIZE = 4\nVERTSIZE = 6\n\nHORZRES = 8\nVERTRES = 10\n\n\n# gdi32\n\nclass POINT(Structure):\n _fields_ = [('x', LONG), ('y', LONG)]\n\n\nclass RECT(Structure):\n _fields_ = [('left', LONG), ('top', LONG),\n ('right', LONG), ('bottom', LONG)]\n\n\nclass PANOSE(Structure):\n _fields_ = [\n ('bFamilyType', BYTE), ('bSerifStyle', BYTE), ('bWeight', BYTE),\n ('bProportion', BYTE), ('bContrast', BYTE), ('bStrokeVariation', BYTE),\n ('bArmStyle', BYTE), ('bLetterform', BYTE), ('bMidline', BYTE),\n ('bXHeight', BYTE)]\n\n\nclass TEXTMETRIC(Structure):\n _fields_ = [\n ('tmHeight', LONG), ('tmAscent', LONG), ('tmDescent', LONG),\n ('tmInternalLeading', LONG), ('tmExternalLeading', LONG),\n ('tmAveCharWidth', LONG), ('tmMaxCharWidth', LONG),\n ('tmWeight', LONG), ('tmOverhang', LONG),\n ('tmDigitizedAspectX', LONG), ('tmDigitizedAspectY', LONG),\n ('tmFirstChar', TCHAR), ('tmLastChar', TCHAR),\n ('tmDefaultChar', TCHAR), ('tmBreakChar', TCHAR),\n ('tmItalic', BYTE), ('tmUnderlined', BYTE),\n ('tmStruckOut', BYTE), ('tmPitchAndFamily', BYTE),\n ('tmCharSet', BYTE)]\n\n\nclass OUTLINETEXTMETRIC(Structure):\n _fields_ = [\n ('otmSize', UINT), ('otmTextMetrics', TEXTMETRIC),\n ('otmMysteryBytes', BYTE), ('otmPanoseNumber', PANOSE),\n ('otmMysteryByte', BYTE),\n ('otmfsSelection', UINT), ('otmfsType', UINT),\n ('otmsCharSlopeRise', INT), ('otmsCharSlopeRun', INT),\n ('otmItalicAngle', INT), ('otmEMSquare', UINT), ('otmAscent', INT),\n ('otmDescent', INT), ('otmLineGap', UINT), ('otmsCapEmHeight', UINT),\n ('otmsXHeight', UINT), ('otmrcFontBox', RECT), ('otmMacAscent', INT),\n ('otmMacDescent', INT), ('otmMacLineGap', UINT),\n ('otmusMinimumPPEM', UINT), ('otmptSubscriptSize', POINT),\n ('otmptSubscriptOffset', POINT), ('otmptSuperscriptSize', POINT),\n ('otmptSuperscriptOffset', POINT), ('otmsStrikeoutSize', UINT),\n ('otmsStrikeoutPosition', INT), ('otmsUnderscoreSize', INT),\n ('otmsUnderscorePosition', INT), ('otmpFamilyName', PSTR),\n ('otmpFaceName', PSTR), ('otmpStyleName', PSTR),\n ('otmpFullName', PSTR), ('junk', (BYTE) * 1024)] # room for strs\n\n\nclass LOGFONT(Structure):\n _fields_ = [\n ('lfHeight', LONG), ('lfWidth', LONG), ('lfEscapement', 
LONG),\n ('lfOrientation', LONG), ('lfWeight', LONG), ('lfItalic', BYTE),\n ('lfUnderline', BYTE), ('lfStrikeOut', BYTE), ('lfCharSet', BYTE),\n ('lfOutPrecision', BYTE), ('lfClipPrecision', BYTE),\n ('lfQuality', BYTE), ('lfPitchAndFamily', BYTE),\n ('lfFaceName', (TCHAR * LF_FACESIZE))]\n\n\ngdi32 = windll.gdi32\n\ngdi32.CreateFontIndirectW.restype = HFONT\ngdi32.CreateFontIndirectW.argtypes = [POINTER(LOGFONT)]\n\ngdi32.SelectObject.restype = HGDIOBJ\ngdi32.SelectObject.argtypes = [HDC, HGDIOBJ]\n\ngdi32.SetGraphicsMode.restype = INT\ngdi32.SetGraphicsMode.argtypes = [HDC, INT]\n\ngdi32.GetTextMetricsW.restype = BOOL\ngdi32.GetTextMetricsW.argtypes = [HDC, POINTER(TEXTMETRIC)]\n\nFONTENUMPROC = WINFUNCTYPE(INT, POINTER(LOGFONT), POINTER(TEXTMETRIC),\n DWORD, c_void_p)\ngdi32.EnumFontFamiliesExW.restype = INT\ngdi32.EnumFontFamiliesExW.argtypes = [HDC, POINTER(LOGFONT),\n FONTENUMPROC, LPARAM, DWORD]\n\ngdi32.GetOutlineTextMetricsW.restype = UINT\ngdi32.GetOutlineTextMetricsW.argtypes = [HDC, UINT,\n POINTER(OUTLINETEXTMETRIC)]\n\n\ngdi32.GetDeviceCaps.argtypes = [HDC, INT]\ngdi32.GetDeviceCaps.restype = INT\n\nuser32 = windll.user32\n\nuser32.GetDC.restype = HDC # HDC\nuser32.GetDC.argtypes = [UINT32] # HWND\n\nuser32.ReleaseDC.argtypes = [c_void_p, HDC]\n\nuser32.SetProcessDPIAware.argtypes = []\n\n\n# gdiplus\n\nclass GdiplusStartupInput(Structure):\n _fields_ = [\n ('GdiplusVersion', UINT32), ('DebugEventCallback', c_void_p),\n ('SuppressBackgroundThread', BOOL), ('SuppressExternalCodecs', BOOL)]\n\n\nclass GdiplusStartupOutput(Structure):\n _fields = [('NotificationHookProc', c_void_p),\n ('NotificationUnhookProc', c_void_p)]\n\ngdiplus = windll.gdiplus\n\ngdiplus.GdipCreateFontFamilyFromName.restype = c_int\ngdiplus.GdipCreateFontFamilyFromName.argtypes = [c_wchar_p, c_void_p, c_void_p]\n\ngdiplus.GdipNewPrivateFontCollection.restype = c_int\ngdiplus.GdipNewPrivateFontCollection.argtypes = [c_void_p]\n\ngdiplus.GdipPrivateAddFontFile.restype = c_int\ngdiplus.GdipPrivateAddFontFile.argtypes = [c_void_p, c_wchar_p]\n\ngdiplus.GdipGetFamilyName.restype = c_int\ngdiplus.GdipGetFamilyName.argtypes = [c_void_p, c_wchar_p, c_int]\n\n\ndef gdiplus_init():\n token = c_ulong()\n startup_in = GdiplusStartupInput()\n startup_in.GdiplusVersion = 1\n startup_out = GdiplusStartupOutput()\n gdiplus.GdiplusStartup(byref(token), byref(startup_in), byref(startup_out))\n atexit.register(partial(gdiplus.GdiplusShutdown, token))\n\ngdiplus_init()\n", "path": "vispy/ext/gdi32plus.py"}, {"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) 2014, Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n\nfrom ...ext.gdi32plus import (gdi32, user32, HORZSIZE, VERTSIZE,\n HORZRES, VERTRES)\n\n\ndef get_dpi():\n \"\"\"Get screen DPI from the OS\"\"\"\n user32.SetProcessDPIAware()\n dc = user32.GetDC(0)\n h_size = gdi32.GetDeviceCaps(dc, HORZSIZE)\n v_size = gdi32.GetDeviceCaps(dc, VERTSIZE)\n h_res = gdi32.GetDeviceCaps(dc, HORZRES)\n v_res = gdi32.GetDeviceCaps(dc, VERTRES)\n user32.ReleaseDC(None, dc)\n return (h_res/float(h_size) + v_res/float(v_size)) * 0.5 * 25.4\n", "path": "vispy/util/dpi/_win32.py"}]}
| 3,651 | 303 |
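Aside (illustrative, not part of the dataset record above): the fix in that record wraps `SetProcessDPIAware` in try/except because the export only exists on Windows Vista and later. The Windows-only sketch below expresses the same defensive lookup with `getattr`; the wrapper function is an assumption, and only the `user32.SetProcessDPIAware` name comes from the record.

```python
import ctypes  # Windows-only: ctypes.windll is not available on other platforms

def make_process_dpi_aware():
    """Call SetProcessDPIAware when the OS provides it; no-op on Windows XP."""
    user32 = ctypes.windll.user32
    # ctypes resolves DLL exports lazily, so a missing export shows up here
    # (as an AttributeError, suppressed by getattr) rather than at import time.
    func = getattr(user32, "SetProcessDPIAware", None)
    if func is None:
        return False  # pre-Vista: the export does not exist
    func()
    return True
```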
gh_patches_debug_14858
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-2040
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New `User enrollment error` page

## Acceptance Criteria
<!-- Remember to consider edge cases -->
- [x] Headline copy updated
- [x] Body copy updated
- [x] 24px spacing between icon, headline, body, and agency info
- [x] 64px spacing between CTA and the thing above it
- [x] Login.gov and non-login.gov variants (with and without the Sign out button)
## Additional context
- Handles the case of the user entering the wrong card information (e.g. expiration date, security code)
- Littlepay responds with an error in Hosted Card Tokenization flow (Javascript)
</issue>
<code>
[start of benefits/enrollment/views.py]
1 """
2 The enrollment application: view definitions for the benefits enrollment flow.
3 """
4
5 import logging
6 from datetime import timedelta
7
8 from django.http import JsonResponse
9 from django.template.response import TemplateResponse
10 from django.urls import reverse
11 from django.utils import timezone
12 from django.utils.decorators import decorator_from_middleware
13 from littlepay.api.client import Client
14 from requests.exceptions import HTTPError
15
16 from benefits.core import session
17 from benefits.core.middleware import (
18 EligibleSessionRequired,
19 VerifierSessionRequired,
20 pageview_decorator,
21 )
22 from benefits.core.views import ROUTE_LOGGED_OUT
23 from . import analytics, forms
24
25
26 ROUTE_INDEX = "enrollment:index"
27 ROUTE_RETRY = "enrollment:retry"
28 ROUTE_SUCCESS = "enrollment:success"
29 ROUTE_TOKEN = "enrollment:token"
30
31 TEMPLATE_REENROLLMENT_ERROR = "enrollment/reenrollment-error.html"
32 TEMPLATE_RETRY = "enrollment/retry.html"
33 TEMPLATE_SUCCESS = "enrollment/success.html"
34
35
36 logger = logging.getLogger(__name__)
37
38
39 @decorator_from_middleware(EligibleSessionRequired)
40 def token(request):
41 """View handler for the enrollment auth token."""
42 if not session.enrollment_token_valid(request):
43 agency = session.agency(request)
44 payment_processor = agency.payment_processor
45 client = Client(
46 base_url=payment_processor.api_base_url,
47 client_id=payment_processor.client_id,
48 client_secret=payment_processor.client_secret,
49 audience=payment_processor.audience,
50 )
51 client.oauth.ensure_active_token(client.token)
52 response = client.request_card_tokenization_access()
53 session.update(request, enrollment_token=response.get("access_token"), enrollment_token_exp=response.get("expires_at"))
54
55 data = {"token": session.enrollment_token(request)}
56
57 return JsonResponse(data)
58
59
60 @decorator_from_middleware(EligibleSessionRequired)
61 def index(request):
62 """View handler for the enrollment landing page."""
63 session.update(request, origin=reverse(ROUTE_INDEX))
64
65 agency = session.agency(request)
66 eligibility = session.eligibility(request)
67 payment_processor = agency.payment_processor
68
69 # POST back after payment processor form, process card token
70 if request.method == "POST":
71 form = forms.CardTokenizeSuccessForm(request.POST)
72 if not form.is_valid():
73 raise Exception("Invalid card token form")
74
75 card_token = form.cleaned_data.get("card_token")
76
77 client = Client(
78 base_url=payment_processor.api_base_url,
79 client_id=payment_processor.client_id,
80 client_secret=payment_processor.client_secret,
81 audience=payment_processor.audience,
82 )
83 client.oauth.ensure_active_token(client.token)
84
85 funding_source = client.get_funding_source_by_token(card_token)
86 group_id = eligibility.group_id
87
88 try:
89 group_funding_source = _get_group_funding_source(
90 client=client, group_id=group_id, funding_source_id=funding_source.id
91 )
92
93 already_enrolled = group_funding_source is not None
94
95 if eligibility.supports_expiration:
96 # set expiry on session
97 if already_enrolled and group_funding_source.expiry_date is not None:
98 session.update(request, enrollment_expiry=group_funding_source.expiry_date)
99 else:
100 session.update(request, enrollment_expiry=_calculate_expiry(eligibility.expiration_days))
101
102 if not already_enrolled:
103 # enroll user with an expiration date, return success
104 client.link_concession_group_funding_source(
105 group_id=group_id, funding_source_id=funding_source.id, expiry=session.enrollment_expiry(request)
106 )
107 return success(request)
108 else: # already_enrolled
109 if group_funding_source.expiry_date is None:
110 # update expiration of existing enrollment, return success
111 client.update_concession_group_funding_source_expiry(
112 group_id=group_id,
113 funding_source_id=funding_source.id,
114 expiry=session.enrollment_expiry(request),
115 )
116 return success(request)
117 else:
118 is_expired = _is_expired(group_funding_source.expiry_date)
119 is_within_reenrollment_window = _is_within_reenrollment_window(
120 group_funding_source.expiry_date, session.enrollment_reenrollment(request)
121 )
122
123 if is_expired or is_within_reenrollment_window:
124 # update expiration of existing enrollment, return success
125 client.update_concession_group_funding_source_expiry(
126 group_id=group_id,
127 funding_source_id=funding_source.id,
128 expiry=session.enrollment_expiry(request),
129 )
130 return success(request)
131 else:
132 # re-enrollment error, return enrollment error with expiration and reenrollment_date
133 return reenrollment_error(request)
134 else: # eligibility does not support expiration
135 if not already_enrolled:
136 # enroll user with no expiration date, return success
137 client.link_concession_group_funding_source(group_id=group_id, funding_source_id=funding_source.id)
138 return success(request)
139 else: # already_enrolled
140 if group_funding_source.expiry_date is None:
141 # no action, return success
142 return success(request)
143 else:
144 # remove expiration date, return success
145 raise NotImplementedError("Removing expiration date is currently not supported")
146
147 except HTTPError as e:
148 analytics.returned_error(request, str(e))
149 raise Exception(f"{e}: {e.response.json()}")
150 except Exception as e:
151 analytics.returned_error(request, str(e))
152 raise e
153
154 # GET enrollment index
155 else:
156 tokenize_retry_form = forms.CardTokenizeFailForm(ROUTE_RETRY)
157 tokenize_success_form = forms.CardTokenizeSuccessForm(auto_id=True, label_suffix="")
158
159 context = {
160 "forms": [tokenize_retry_form, tokenize_success_form],
161 "cta_button": "tokenize_card",
162 "card_tokenize_env": agency.payment_processor.card_tokenize_env,
163 "card_tokenize_func": agency.payment_processor.card_tokenize_func,
164 "card_tokenize_url": agency.payment_processor.card_tokenize_url,
165 "token_field": "card_token",
166 "form_retry": tokenize_retry_form.id,
167 "form_success": tokenize_success_form.id,
168 }
169
170 logger.debug(f'card_tokenize_url: {context["card_tokenize_url"]}')
171
172 return TemplateResponse(request, eligibility.enrollment_index_template, context)
173
174
175 def _get_group_funding_source(client: Client, group_id, funding_source_id):
176 group_funding_sources = client.get_concession_group_linked_funding_sources(group_id)
177 matching_group_funding_source = None
178 for group_funding_source in group_funding_sources:
179 if group_funding_source.id == funding_source_id:
180 matching_group_funding_source = group_funding_source
181 break
182
183 return matching_group_funding_source
184
185
186 def _is_expired(expiry_date):
187 """Returns whether the passed in datetime is expired or not."""
188 return expiry_date <= timezone.now()
189
190
191 def _is_within_reenrollment_window(expiry_date, enrollment_reenrollment_date):
192 """Returns if we are currently within the reenrollment window."""
193 return enrollment_reenrollment_date <= timezone.now() < expiry_date
194
195
196 def _calculate_expiry(expiration_days):
197 """Returns the expiry datetime, which should be midnight in our configured timezone of the (N + 1)th day from now,
198 where N is expiration_days."""
199 default_time_zone = timezone.get_default_timezone()
200 expiry_date = timezone.localtime(timezone=default_time_zone) + timedelta(days=expiration_days + 1)
201 expiry_datetime = expiry_date.replace(hour=0, minute=0, second=0, microsecond=0)
202
203 return expiry_datetime
204
205
206 def reenrollment_error(request):
207 """View handler for a re-enrollment attempt that is not yet within the re-enrollment window."""
208 analytics.returned_error(request, "Re-enrollment error")
209 return TemplateResponse(request, TEMPLATE_REENROLLMENT_ERROR)
210
211
212 @decorator_from_middleware(EligibleSessionRequired)
213 def retry(request):
214 """View handler for a recoverable failure condition."""
215 if request.method == "POST":
216 analytics.returned_retry(request)
217 form = forms.CardTokenizeFailForm(request.POST)
218 if form.is_valid():
219 return TemplateResponse(request, TEMPLATE_RETRY)
220 else:
221 analytics.returned_error(request, "Invalid retry submission.")
222 raise Exception("Invalid retry submission.")
223 else:
224 analytics.returned_error(request, "This view method only supports POST.")
225 raise Exception("This view method only supports POST.")
226
227
228 @pageview_decorator
229 @decorator_from_middleware(EligibleSessionRequired)
230 @decorator_from_middleware(VerifierSessionRequired)
231 def success(request):
232 """View handler for the final success page."""
233 request.path = "/enrollment/success"
234 session.update(request, origin=reverse(ROUTE_SUCCESS))
235
236 agency = session.agency(request)
237 verifier = session.verifier(request)
238 eligibility = session.eligibility(request)
239
240 if session.logged_in(request) and verifier.auth_provider.supports_sign_out:
241 # overwrite origin for a logged in user
242 # if they click the logout button, they are taken to the new route
243 session.update(request, origin=reverse(ROUTE_LOGGED_OUT))
244
245 analytics.returned_success(request, eligibility.group_id)
246 return TemplateResponse(request, agency.enrollment_success_template)
247
[end of benefits/enrollment/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/enrollment/views.py b/benefits/enrollment/views.py
--- a/benefits/enrollment/views.py
+++ b/benefits/enrollment/views.py
@@ -212,17 +212,8 @@
@decorator_from_middleware(EligibleSessionRequired)
def retry(request):
"""View handler for a recoverable failure condition."""
- if request.method == "POST":
- analytics.returned_retry(request)
- form = forms.CardTokenizeFailForm(request.POST)
- if form.is_valid():
- return TemplateResponse(request, TEMPLATE_RETRY)
- else:
- analytics.returned_error(request, "Invalid retry submission.")
- raise Exception("Invalid retry submission.")
- else:
- analytics.returned_error(request, "This view method only supports POST.")
- raise Exception("This view method only supports POST.")
+ analytics.returned_retry(request)
+ return TemplateResponse(request, TEMPLATE_RETRY)
@pageview_decorator
|
{"golden_diff": "diff --git a/benefits/enrollment/views.py b/benefits/enrollment/views.py\n--- a/benefits/enrollment/views.py\n+++ b/benefits/enrollment/views.py\n@@ -212,17 +212,8 @@\n @decorator_from_middleware(EligibleSessionRequired)\n def retry(request):\n \"\"\"View handler for a recoverable failure condition.\"\"\"\n- if request.method == \"POST\":\n- analytics.returned_retry(request)\n- form = forms.CardTokenizeFailForm(request.POST)\n- if form.is_valid():\n- return TemplateResponse(request, TEMPLATE_RETRY)\n- else:\n- analytics.returned_error(request, \"Invalid retry submission.\")\n- raise Exception(\"Invalid retry submission.\")\n- else:\n- analytics.returned_error(request, \"This view method only supports POST.\")\n- raise Exception(\"This view method only supports POST.\")\n+ analytics.returned_retry(request)\n+ return TemplateResponse(request, TEMPLATE_RETRY)\n \n \n @pageview_decorator\n", "issue": "New `User enrollment error` page\n\r\n\r\n\r\n## Acceptance Criteria\r\n\r\n<!-- Remember to consider edge cases -->\r\n\r\n- [x] Headline copy updated\r\n- [x] Body copy updated\r\n- [x] 24px spacing between icon, headline, body, and agency info\r\n- [x] 64px spacing between CTA and the thing above it\r\n- [x] Login.gov and non-login.gov variants (with and without the Sign out button)\r\n\r\n\r\n## Additional context\r\n\r\n - Handles the case of the user entering the wrong card information (e.g. expiration date, security code)\r\n - Littlepay responds with an error in Hosted Card Tokenization flow (Javascript)\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThe enrollment application: view definitions for the benefits enrollment flow.\n\"\"\"\n\nimport logging\nfrom datetime import timedelta\n\nfrom django.http import JsonResponse\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.decorators import decorator_from_middleware\nfrom littlepay.api.client import Client\nfrom requests.exceptions import HTTPError\n\nfrom benefits.core import session\nfrom benefits.core.middleware import (\n EligibleSessionRequired,\n VerifierSessionRequired,\n pageview_decorator,\n)\nfrom benefits.core.views import ROUTE_LOGGED_OUT\nfrom . 
import analytics, forms\n\n\nROUTE_INDEX = \"enrollment:index\"\nROUTE_RETRY = \"enrollment:retry\"\nROUTE_SUCCESS = \"enrollment:success\"\nROUTE_TOKEN = \"enrollment:token\"\n\nTEMPLATE_REENROLLMENT_ERROR = \"enrollment/reenrollment-error.html\"\nTEMPLATE_RETRY = \"enrollment/retry.html\"\nTEMPLATE_SUCCESS = \"enrollment/success.html\"\n\n\nlogger = logging.getLogger(__name__)\n\n\n@decorator_from_middleware(EligibleSessionRequired)\ndef token(request):\n \"\"\"View handler for the enrollment auth token.\"\"\"\n if not session.enrollment_token_valid(request):\n agency = session.agency(request)\n payment_processor = agency.payment_processor\n client = Client(\n base_url=payment_processor.api_base_url,\n client_id=payment_processor.client_id,\n client_secret=payment_processor.client_secret,\n audience=payment_processor.audience,\n )\n client.oauth.ensure_active_token(client.token)\n response = client.request_card_tokenization_access()\n session.update(request, enrollment_token=response.get(\"access_token\"), enrollment_token_exp=response.get(\"expires_at\"))\n\n data = {\"token\": session.enrollment_token(request)}\n\n return JsonResponse(data)\n\n\n@decorator_from_middleware(EligibleSessionRequired)\ndef index(request):\n \"\"\"View handler for the enrollment landing page.\"\"\"\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n agency = session.agency(request)\n eligibility = session.eligibility(request)\n payment_processor = agency.payment_processor\n\n # POST back after payment processor form, process card token\n if request.method == \"POST\":\n form = forms.CardTokenizeSuccessForm(request.POST)\n if not form.is_valid():\n raise Exception(\"Invalid card token form\")\n\n card_token = form.cleaned_data.get(\"card_token\")\n\n client = Client(\n base_url=payment_processor.api_base_url,\n client_id=payment_processor.client_id,\n client_secret=payment_processor.client_secret,\n audience=payment_processor.audience,\n )\n client.oauth.ensure_active_token(client.token)\n\n funding_source = client.get_funding_source_by_token(card_token)\n group_id = eligibility.group_id\n\n try:\n group_funding_source = _get_group_funding_source(\n client=client, group_id=group_id, funding_source_id=funding_source.id\n )\n\n already_enrolled = group_funding_source is not None\n\n if eligibility.supports_expiration:\n # set expiry on session\n if already_enrolled and group_funding_source.expiry_date is not None:\n session.update(request, enrollment_expiry=group_funding_source.expiry_date)\n else:\n session.update(request, enrollment_expiry=_calculate_expiry(eligibility.expiration_days))\n\n if not already_enrolled:\n # enroll user with an expiration date, return success\n client.link_concession_group_funding_source(\n group_id=group_id, funding_source_id=funding_source.id, expiry=session.enrollment_expiry(request)\n )\n return success(request)\n else: # already_enrolled\n if group_funding_source.expiry_date is None:\n # update expiration of existing enrollment, return success\n client.update_concession_group_funding_source_expiry(\n group_id=group_id,\n funding_source_id=funding_source.id,\n expiry=session.enrollment_expiry(request),\n )\n return success(request)\n else:\n is_expired = _is_expired(group_funding_source.expiry_date)\n is_within_reenrollment_window = _is_within_reenrollment_window(\n group_funding_source.expiry_date, session.enrollment_reenrollment(request)\n )\n\n if is_expired or is_within_reenrollment_window:\n # update expiration of existing enrollment, return success\n 
client.update_concession_group_funding_source_expiry(\n group_id=group_id,\n funding_source_id=funding_source.id,\n expiry=session.enrollment_expiry(request),\n )\n return success(request)\n else:\n # re-enrollment error, return enrollment error with expiration and reenrollment_date\n return reenrollment_error(request)\n else: # eligibility does not support expiration\n if not already_enrolled:\n # enroll user with no expiration date, return success\n client.link_concession_group_funding_source(group_id=group_id, funding_source_id=funding_source.id)\n return success(request)\n else: # already_enrolled\n if group_funding_source.expiry_date is None:\n # no action, return success\n return success(request)\n else:\n # remove expiration date, return success\n raise NotImplementedError(\"Removing expiration date is currently not supported\")\n\n except HTTPError as e:\n analytics.returned_error(request, str(e))\n raise Exception(f\"{e}: {e.response.json()}\")\n except Exception as e:\n analytics.returned_error(request, str(e))\n raise e\n\n # GET enrollment index\n else:\n tokenize_retry_form = forms.CardTokenizeFailForm(ROUTE_RETRY)\n tokenize_success_form = forms.CardTokenizeSuccessForm(auto_id=True, label_suffix=\"\")\n\n context = {\n \"forms\": [tokenize_retry_form, tokenize_success_form],\n \"cta_button\": \"tokenize_card\",\n \"card_tokenize_env\": agency.payment_processor.card_tokenize_env,\n \"card_tokenize_func\": agency.payment_processor.card_tokenize_func,\n \"card_tokenize_url\": agency.payment_processor.card_tokenize_url,\n \"token_field\": \"card_token\",\n \"form_retry\": tokenize_retry_form.id,\n \"form_success\": tokenize_success_form.id,\n }\n\n logger.debug(f'card_tokenize_url: {context[\"card_tokenize_url\"]}')\n\n return TemplateResponse(request, eligibility.enrollment_index_template, context)\n\n\ndef _get_group_funding_source(client: Client, group_id, funding_source_id):\n group_funding_sources = client.get_concession_group_linked_funding_sources(group_id)\n matching_group_funding_source = None\n for group_funding_source in group_funding_sources:\n if group_funding_source.id == funding_source_id:\n matching_group_funding_source = group_funding_source\n break\n\n return matching_group_funding_source\n\n\ndef _is_expired(expiry_date):\n \"\"\"Returns whether the passed in datetime is expired or not.\"\"\"\n return expiry_date <= timezone.now()\n\n\ndef _is_within_reenrollment_window(expiry_date, enrollment_reenrollment_date):\n \"\"\"Returns if we are currently within the reenrollment window.\"\"\"\n return enrollment_reenrollment_date <= timezone.now() < expiry_date\n\n\ndef _calculate_expiry(expiration_days):\n \"\"\"Returns the expiry datetime, which should be midnight in our configured timezone of the (N + 1)th day from now,\n where N is expiration_days.\"\"\"\n default_time_zone = timezone.get_default_timezone()\n expiry_date = timezone.localtime(timezone=default_time_zone) + timedelta(days=expiration_days + 1)\n expiry_datetime = expiry_date.replace(hour=0, minute=0, second=0, microsecond=0)\n\n return expiry_datetime\n\n\ndef reenrollment_error(request):\n \"\"\"View handler for a re-enrollment attempt that is not yet within the re-enrollment window.\"\"\"\n analytics.returned_error(request, \"Re-enrollment error\")\n return TemplateResponse(request, TEMPLATE_REENROLLMENT_ERROR)\n\n\n@decorator_from_middleware(EligibleSessionRequired)\ndef retry(request):\n \"\"\"View handler for a recoverable failure condition.\"\"\"\n if request.method == \"POST\":\n 
analytics.returned_retry(request)\n form = forms.CardTokenizeFailForm(request.POST)\n if form.is_valid():\n return TemplateResponse(request, TEMPLATE_RETRY)\n else:\n analytics.returned_error(request, \"Invalid retry submission.\")\n raise Exception(\"Invalid retry submission.\")\n else:\n analytics.returned_error(request, \"This view method only supports POST.\")\n raise Exception(\"This view method only supports POST.\")\n\n\n@pageview_decorator\n@decorator_from_middleware(EligibleSessionRequired)\n@decorator_from_middleware(VerifierSessionRequired)\ndef success(request):\n \"\"\"View handler for the final success page.\"\"\"\n request.path = \"/enrollment/success\"\n session.update(request, origin=reverse(ROUTE_SUCCESS))\n\n agency = session.agency(request)\n verifier = session.verifier(request)\n eligibility = session.eligibility(request)\n\n if session.logged_in(request) and verifier.auth_provider.supports_sign_out:\n # overwrite origin for a logged in user\n # if they click the logout button, they are taken to the new route\n session.update(request, origin=reverse(ROUTE_LOGGED_OUT))\n\n analytics.returned_success(request, eligibility.group_id)\n return TemplateResponse(request, agency.enrollment_success_template)\n", "path": "benefits/enrollment/views.py"}]}
| 3,321 | 212 |
gh_patches_debug_23229
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-320
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Automatic import of tables when new database is connected.
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
When a user sets up an existing database in Mathesar, they'll expect to see their tables. We don't currently do anything to sync existing databases with Mathesar.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
We should create instances of the appropriate Django models for tables in existing databases.
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
Blocked by #61
</issue>
<code>
[start of mathesar/models.py]
1 from django.contrib.auth.models import User
2 from django.core.cache import cache
3 from django.db import models
4 from django.utils.functional import cached_property
5
6 from mathesar.database.base import create_mathesar_engine
7 from mathesar.utils import models as model_utils
8 from db import tables, records, schemas, columns
9
10 NAME_CACHE_INTERVAL = 60 * 5
11
12
13 class BaseModel(models.Model):
14 created_at = models.DateTimeField(auto_now_add=True)
15 updated_at = models.DateTimeField(auto_now=True)
16
17 class Meta:
18 abstract = True
19
20
21 class DatabaseObject(BaseModel):
22 oid = models.IntegerField()
23
24 class Meta:
25 abstract = True
26
27 def __str__(self):
28 return f"{self.__class__.__name__}: {self.oid}"
29
30
31 # TODO: Replace with a proper form of caching
32 # See: https://github.com/centerofci/mathesar/issues/280
33 _engine = None
34
35
36 class Schema(DatabaseObject):
37 database = models.CharField(max_length=128)
38
39 @property
40 def _sa_engine(self):
41 global _engine
42 # We're caching this since the engine is used frequently.
43 if _engine is None:
44 _engine = create_mathesar_engine(self.database)
45 return _engine
46
47 @cached_property
48 def name(self):
49 cache_key = f"{self.database}_schema_name_{self.oid}"
50 try:
51 schema_name = cache.get(cache_key)
52 if schema_name is None:
53 schema_name = schemas.get_schema_name_from_oid(
54 self.oid, self._sa_engine
55 )
56 cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)
57 return schema_name
58 # We catch this error, since it lets us decouple the cadence of
59 # overall DB reflection from the cadence of cache expiration for
60 # schema names. Also, it makes it obvious when the DB layer has
61 # been altered, as opposed to other reasons for a 404 when
62 # requesting a schema.
63 except TypeError:
64 return 'MISSING'
65
66
67 class Table(DatabaseObject):
68 schema = models.ForeignKey('Schema', on_delete=models.CASCADE,
69 related_name='tables')
70 import_verified = models.BooleanField(blank=True, null=True)
71
72 @cached_property
73 def _sa_table(self):
74 try:
75 table = tables.reflect_table_from_oid(
76 self.oid, self.schema._sa_engine,
77 )
78 # We catch this error, since it lets us decouple the cadence of
79 # overall DB reflection from the cadence of cache expiration for
80 # table names. Also, it makes it obvious when the DB layer has
81 # been altered, as opposed to other reasons for a 404 when
82 # requesting a table.
83 except TypeError:
84 table = tables.create_empty_table("MISSING")
85 return table
86
87 @cached_property
88 def name(self):
89 return self._sa_table.name
90
91 @property
92 def sa_columns(self):
93 return self._sa_table.columns
94
95 @property
96 def sa_column_names(self):
97 return self.sa_columns.keys()
98
99 def add_column(self, column_data):
100 return columns.create_column(
101 self.schema._sa_engine,
102 self.oid,
103 column_data,
104 )
105
106 def alter_column(self, column_index, column_data):
107 return columns.alter_column(
108 self.schema._sa_engine,
109 self.oid,
110 column_index,
111 column_data,
112 )
113
114 def drop_column(self, column_index):
115 columns.drop_column(
116 self.schema._sa_engine,
117 self.oid,
118 column_index,
119 )
120
121 @property
122 def sa_num_records(self):
123 return tables.get_count(self._sa_table, self.schema._sa_engine)
124
125 @property
126 def sa_all_records(self):
127 return records.get_records(self._sa_table, self.schema._sa_engine)
128
129 def get_record(self, id_value):
130 return records.get_record(self._sa_table, self.schema._sa_engine, id_value)
131
132 def get_records(self, limit=None, offset=None, filters=[], order_by=[]):
133 return records.get_records(self._sa_table, self.schema._sa_engine, limit,
134 offset, filters=filters, order_by=order_by)
135
136 def get_group_counts(
137 self, group_by, limit=None, offset=None, filters=[], order_by=[]
138 ):
139 return records.get_group_counts(self._sa_table, self.schema._sa_engine,
140 group_by, limit, offset, filters=filters,
141 order_by=order_by)
142
143 def create_record_or_records(self, record_data):
144 return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)
145
146 def update_record(self, id_value, record_data):
147 return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)
148
149 def delete_record(self, id_value):
150 return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)
151
152
153 class DataFile(BaseModel):
154 file = models.FileField(
155 upload_to=model_utils.user_directory_path,
156 )
157 user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)
158 table_imported_to = models.ForeignKey(Table, related_name="data_files", blank=True,
159 null=True, on_delete=models.SET_NULL)
160 delimiter = models.CharField(max_length=1, default=',', blank=True)
161 escapechar = models.CharField(max_length=1, blank=True)
162 quotechar = models.CharField(max_length=1, default='"', blank=True)
163
[end of mathesar/models.py]
[start of mathesar/utils/schemas.py]
1 from rest_framework.exceptions import ValidationError
2
3 from db.schemas import (
4 create_schema, get_schema_oid_from_name, get_mathesar_schemas,
5 get_mathesar_schemas_with_oids
6 )
7 from mathesar.database.base import create_mathesar_engine
8 from mathesar.models import Schema
9
10
11 def create_schema_and_object(name, database):
12 engine = create_mathesar_engine(database)
13
14 all_schemas = get_mathesar_schemas(engine)
15 if name in all_schemas:
16 raise ValidationError({"name": "Schema name is not unique"})
17
18 create_schema(name, engine)
19 schema_oid = get_schema_oid_from_name(name, engine)
20 schema = Schema.objects.create(oid=schema_oid, database=database)
21 return schema
22
23
24 def reflect_schemas_from_database(database):
25 engine = create_mathesar_engine(database)
26 db_schema_oids = {
27 schema["oid"] for schema in get_mathesar_schemas_with_oids(engine)
28 }
29 schemas = [
30 Schema.objects.get_or_create(oid=oid, database=database)
31 for oid in db_schema_oids
32 ]
33 for schema in Schema.objects.all():
34 if schema.oid not in db_schema_oids:
35 schema.delete()
36 return schemas
37
[end of mathesar/utils/schemas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mathesar/models.py b/mathesar/models.py
--- a/mathesar/models.py
+++ b/mathesar/models.py
@@ -30,7 +30,7 @@
# TODO: Replace with a proper form of caching
# See: https://github.com/centerofci/mathesar/issues/280
-_engine = None
+_engines = {}
class Schema(DatabaseObject):
@@ -38,11 +38,11 @@
@property
def _sa_engine(self):
- global _engine
+ global _engines
# We're caching this since the engine is used frequently.
- if _engine is None:
- _engine = create_mathesar_engine(self.database)
- return _engine
+ if self.database not in _engines:
+ _engines[self.database] = create_mathesar_engine(self.database)
+ return _engines[self.database]
@cached_property
def name(self):
diff --git a/mathesar/utils/schemas.py b/mathesar/utils/schemas.py
--- a/mathesar/utils/schemas.py
+++ b/mathesar/utils/schemas.py
@@ -31,6 +31,6 @@
for oid in db_schema_oids
]
for schema in Schema.objects.all():
- if schema.oid not in db_schema_oids:
+ if schema.database == database and schema.oid not in db_schema_oids:
schema.delete()
return schemas
|
{"golden_diff": "diff --git a/mathesar/models.py b/mathesar/models.py\n--- a/mathesar/models.py\n+++ b/mathesar/models.py\n@@ -30,7 +30,7 @@\n \n # TODO: Replace with a proper form of caching\n # See: https://github.com/centerofci/mathesar/issues/280\n-_engine = None\n+_engines = {}\n \n \n class Schema(DatabaseObject):\n@@ -38,11 +38,11 @@\n \n @property\n def _sa_engine(self):\n- global _engine\n+ global _engines\n # We're caching this since the engine is used frequently.\n- if _engine is None:\n- _engine = create_mathesar_engine(self.database)\n- return _engine\n+ if self.database not in _engines:\n+ _engines[self.database] = create_mathesar_engine(self.database)\n+ return _engines[self.database]\n \n @cached_property\n def name(self):\ndiff --git a/mathesar/utils/schemas.py b/mathesar/utils/schemas.py\n--- a/mathesar/utils/schemas.py\n+++ b/mathesar/utils/schemas.py\n@@ -31,6 +31,6 @@\n for oid in db_schema_oids\n ]\n for schema in Schema.objects.all():\n- if schema.oid not in db_schema_oids:\n+ if schema.database == database and schema.oid not in db_schema_oids:\n schema.delete()\n return schemas\n", "issue": "Automatic import of tables when new database is connected.\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nWhen a user sets up an existing database in Mathesar, they'll expect to see their tables. We don't currently do anything to sync existing databases with Mathesar.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nWe should create instances of the appropriate Django models for tables in existing databases.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\nBlocked by #61\n", "before_files": [{"content": "from django.contrib.auth.models import User\nfrom django.core.cache import cache\nfrom django.db import models\nfrom django.utils.functional import cached_property\n\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.utils import models as model_utils\nfrom db import tables, records, schemas, columns\n\nNAME_CACHE_INTERVAL = 60 * 5\n\n\nclass BaseModel(models.Model):\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n\n class Meta:\n abstract = True\n\n\nclass DatabaseObject(BaseModel):\n oid = models.IntegerField()\n\n class Meta:\n abstract = True\n\n def __str__(self):\n return f\"{self.__class__.__name__}: {self.oid}\"\n\n\n# TODO: Replace with a proper form of caching\n# See: https://github.com/centerofci/mathesar/issues/280\n_engine = None\n\n\nclass Schema(DatabaseObject):\n database = models.CharField(max_length=128)\n\n @property\n def _sa_engine(self):\n global _engine\n # We're caching this since the engine is used frequently.\n if _engine is None:\n _engine = create_mathesar_engine(self.database)\n return _engine\n\n @cached_property\n def name(self):\n cache_key = f\"{self.database}_schema_name_{self.oid}\"\n try:\n schema_name = cache.get(cache_key)\n if schema_name is None:\n schema_name = schemas.get_schema_name_from_oid(\n self.oid, self._sa_engine\n )\n cache.set(cache_key, schema_name, NAME_CACHE_INTERVAL)\n return schema_name\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # schema names. 
Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a schema.\n except TypeError:\n return 'MISSING'\n\n\nclass Table(DatabaseObject):\n schema = models.ForeignKey('Schema', on_delete=models.CASCADE,\n related_name='tables')\n import_verified = models.BooleanField(blank=True, null=True)\n\n @cached_property\n def _sa_table(self):\n try:\n table = tables.reflect_table_from_oid(\n self.oid, self.schema._sa_engine,\n )\n # We catch this error, since it lets us decouple the cadence of\n # overall DB reflection from the cadence of cache expiration for\n # table names. Also, it makes it obvious when the DB layer has\n # been altered, as opposed to other reasons for a 404 when\n # requesting a table.\n except TypeError:\n table = tables.create_empty_table(\"MISSING\")\n return table\n\n @cached_property\n def name(self):\n return self._sa_table.name\n\n @property\n def sa_columns(self):\n return self._sa_table.columns\n\n @property\n def sa_column_names(self):\n return self.sa_columns.keys()\n\n def add_column(self, column_data):\n return columns.create_column(\n self.schema._sa_engine,\n self.oid,\n column_data,\n )\n\n def alter_column(self, column_index, column_data):\n return columns.alter_column(\n self.schema._sa_engine,\n self.oid,\n column_index,\n column_data,\n )\n\n def drop_column(self, column_index):\n columns.drop_column(\n self.schema._sa_engine,\n self.oid,\n column_index,\n )\n\n @property\n def sa_num_records(self):\n return tables.get_count(self._sa_table, self.schema._sa_engine)\n\n @property\n def sa_all_records(self):\n return records.get_records(self._sa_table, self.schema._sa_engine)\n\n def get_record(self, id_value):\n return records.get_record(self._sa_table, self.schema._sa_engine, id_value)\n\n def get_records(self, limit=None, offset=None, filters=[], order_by=[]):\n return records.get_records(self._sa_table, self.schema._sa_engine, limit,\n offset, filters=filters, order_by=order_by)\n\n def get_group_counts(\n self, group_by, limit=None, offset=None, filters=[], order_by=[]\n ):\n return records.get_group_counts(self._sa_table, self.schema._sa_engine,\n group_by, limit, offset, filters=filters,\n order_by=order_by)\n\n def create_record_or_records(self, record_data):\n return records.create_record_or_records(self._sa_table, self.schema._sa_engine, record_data)\n\n def update_record(self, id_value, record_data):\n return records.update_record(self._sa_table, self.schema._sa_engine, id_value, record_data)\n\n def delete_record(self, id_value):\n return records.delete_record(self._sa_table, self.schema._sa_engine, id_value)\n\n\nclass DataFile(BaseModel):\n file = models.FileField(\n upload_to=model_utils.user_directory_path,\n )\n user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)\n table_imported_to = models.ForeignKey(Table, related_name=\"data_files\", blank=True,\n null=True, on_delete=models.SET_NULL)\n delimiter = models.CharField(max_length=1, default=',', blank=True)\n escapechar = models.CharField(max_length=1, blank=True)\n quotechar = models.CharField(max_length=1, default='\"', blank=True)\n", "path": "mathesar/models.py"}, {"content": "from rest_framework.exceptions import ValidationError\n\nfrom db.schemas import (\n create_schema, get_schema_oid_from_name, get_mathesar_schemas,\n get_mathesar_schemas_with_oids\n)\nfrom mathesar.database.base import create_mathesar_engine\nfrom mathesar.models import Schema\n\n\ndef create_schema_and_object(name, 
database):\n engine = create_mathesar_engine(database)\n\n all_schemas = get_mathesar_schemas(engine)\n if name in all_schemas:\n raise ValidationError({\"name\": \"Schema name is not unique\"})\n\n create_schema(name, engine)\n schema_oid = get_schema_oid_from_name(name, engine)\n schema = Schema.objects.create(oid=schema_oid, database=database)\n return schema\n\n\ndef reflect_schemas_from_database(database):\n engine = create_mathesar_engine(database)\n db_schema_oids = {\n schema[\"oid\"] for schema in get_mathesar_schemas_with_oids(engine)\n }\n schemas = [\n Schema.objects.get_or_create(oid=oid, database=database)\n for oid in db_schema_oids\n ]\n for schema in Schema.objects.all():\n if schema.oid not in db_schema_oids:\n schema.delete()\n return schemas\n", "path": "mathesar/utils/schemas.py"}]}
| 2,571 | 318 |
gh_patches_debug_31761
|
rasdani/github-patches
|
git_diff
|
nextcloud__appstore-38
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
use settings.AUTH_USER_MODEL
use settings.AUTH_USER_MODEL to refer to the auth usr model.
See https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#referencing-the-user-model
</issue>
<code>
[start of nextcloudappstore/core/models.py]
1 from django.contrib.auth.models import User # type: ignore
2 from django.db import models # type: ignore
3 from django.utils.translation import ugettext_lazy as _ # type: ignore
4 from parler.models import TranslatedFields, TranslatableModel # type: ignore
5
6
7 class App(TranslatableModel):
8 id = models.CharField(max_length=128, unique=True, primary_key=True,
9 verbose_name=_('Id'),
10 help_text=_('app id, same as the folder name'))
11 categories = models.ManyToManyField('Category', verbose_name=_('Category'))
12 translations = TranslatedFields(
13 name=models.CharField(max_length=128, verbose_name=_('Name'),
14 help_text=_('Rendered app name for users')),
15 description=models.TextField(verbose_name=_('Description'),
16 help_text=_(
17 'Will be rendered as Markdown'))
18 )
19 # resources
20 user_docs = models.URLField(max_length=256, blank=True,
21 verbose_name=_('User documentation url'))
22 admin_docs = models.URLField(max_length=256, blank=True,
23 verbose_name=_('Admin documentation url'))
24 developer_docs = models.URLField(max_length=256, blank=True,
25 verbose_name=_(
26 'Developer documentation url'))
27 issue_tracker = models.URLField(max_length=256, blank=True,
28 verbose_name=_('Issue tracker url'))
29 website = models.URLField(max_length=256, blank=True,
30 verbose_name=_('Homepage'))
31 created = models.DateTimeField(auto_now_add=True, editable=False,
32 verbose_name=_('Created at'))
33 last_modified = models.DateTimeField(auto_now=True, editable=False,
34 verbose_name=_('Updated at'))
35 owner = models.ForeignKey('auth.User', verbose_name=_('App owner'),
36 on_delete=models.CASCADE,
37 related_name='owned_apps')
38 co_maintainers = models.ManyToManyField('auth.User',
39 verbose_name=_('Co-Maintainers'),
40 related_name='co_maintained_apps',
41 blank=True)
42 recommendations = models.ManyToManyField('auth.User',
43 verbose_name=_('Recommendations'),
44 related_name='recommended_apps',
45 blank=True)
46
47 class Meta:
48 verbose_name = _('App')
49 verbose_name_plural = _('Apps')
50
51 def __str__(self) -> str:
52 return self.name
53
54 def can_update(self, user: User) -> bool:
55 return self.owner == user or user in self.co_maintainers.all()
56
57 def can_delete(self, user: User) -> bool:
58 return self.owner == user
59
60
61 class AppRelease(models.Model):
62 version = models.CharField(max_length=128, verbose_name=_('Version'),
63 help_text=_(
64 'Version follows Semantic Versioning'))
65 app = models.ForeignKey('App', on_delete=models.CASCADE,
66 verbose_name=_('App'), related_name='releases')
67 # dependencies
68 php_extensions = models.ManyToManyField('PhpExtension',
69 through='PhpExtensionDependency',
70 verbose_name=_(
71 'PHP extension dependency'),
72 blank=True)
73 databases = models.ManyToManyField('Database',
74 through='DatabaseDependency',
75 verbose_name=_('Database dependency'),
76 blank=True)
77 licenses = models.ManyToManyField('License',
78 verbose_name=_('License'))
79 shell_commands = models.ManyToManyField('ShellCommand', verbose_name=_(
80 'Shell command dependency'), blank=True)
81 php_min_version = models.CharField(max_length=128,
82 verbose_name=_('PHP minimum version'),
83 blank=True)
84 php_max_version = models.CharField(max_length=128, blank=True,
85 verbose_name=_('PHP maximum version'))
86 platform_min_version = models.CharField(max_length=128,
87 verbose_name=_(
88 'Platform minimum version'))
89 platform_max_version = models.CharField(max_length=128, blank=True,
90 verbose_name=_(
91 'Platform maximum version'))
92 min_int_size = models.IntegerField(blank=True, default=0,
93 verbose_name=_('Minimum Integer Bits'),
94 help_text=_(
95 'e.g. 32 for 32bit Integers'))
96 checksum = models.CharField(max_length=64,
97 verbose_name=_('SHA256 checksum'))
98 download = models.URLField(max_length=256, blank=True,
99 verbose_name=_('Archive download Url'))
100 created = models.DateTimeField(auto_now_add=True, editable=False,
101 verbose_name=_('Created at'))
102 last_modified = models.DateTimeField(auto_now=True, editable=False,
103 verbose_name=_('Updated at'))
104
105 class Meta:
106 verbose_name = _('App Release')
107 verbose_name_plural = _('App Releases')
108 unique_together = (('app', 'version'),)
109
110 def can_update(self, user: User) -> bool:
111 return self.app.owner == user or user in self.app.co_maintainers.all()
112
113 def can_delete(self, user: User) -> bool:
114 return self.can_update(user)
115
116 def __str__(self) -> str:
117 return '%s %s' % (self.app, self.version)
118
119
120 class Screenshot(models.Model):
121 url = models.URLField(max_length=256, verbose_name=_('Image url'))
122 app = models.ForeignKey('App', on_delete=models.CASCADE,
123 verbose_name=_('App'), related_name='screenshots')
124 ordering = models.IntegerField(verbose_name=_('Ordering'))
125
126 class Meta:
127 verbose_name = _('Screenshot')
128 verbose_name_plural = _('Screenshots')
129 ordering = ['ordering']
130
131 def __str__(self) -> str:
132 return self.url
133
134
135 class ShellCommand(models.Model):
136 name = models.CharField(max_length=128, unique=True, primary_key=True,
137 verbose_name=_('Shell Command'),
138 help_text=_(
139 'Name of a required shell command, e.g. grep'))
140
141 class Meta:
142 verbose_name = _('Shell Command')
143 verbose_name_plural = _('Shell Commands')
144
145 def __str__(self) -> str:
146 return self.name
147
148
149 class Category(TranslatableModel):
150 id = models.CharField(max_length=128, unique=True, primary_key=True,
151 help_text=_(
152 'Category id which is used to identify a '
153 'category. Used to identify categories when '
154 'uploading an app'), verbose_name=_('Id'))
155 created = models.DateTimeField(auto_now_add=True, editable=False,
156 verbose_name=_('Created at'))
157 last_modified = models.DateTimeField(auto_now=True, editable=False,
158 verbose_name=_('Updated at'))
159 translations = TranslatedFields(
160 name=models.CharField(max_length=128, help_text=_(
161 'Category name which will be presented to the user'),
162 verbose_name=_('Name')),
163 description=models.TextField(verbose_name=_('Description'),
164 help_text=_(
165 'Will be rendered as Markdown'))
166 )
167
168 class Meta:
169 verbose_name = _('Category')
170 verbose_name_plural = _('Categories')
171
172 def __str__(self) -> str:
173 return self.name
174
175
176 class License(models.Model):
177 id = models.CharField(max_length=128, unique=True, primary_key=True,
178 verbose_name=_('Id'),
179 help_text=_(
180 'Key which is used to identify a license'))
181 name = models.CharField(max_length=128, verbose_name=_('Name'),
182 help_text=_(
183 'License name which will be presented to '
184 'the user'))
185
186 class Meta:
187 verbose_name = _('License')
188 verbose_name_plural = _('Licenses')
189
190 def __str__(self) -> str:
191 return self.name
192
193
194 class Database(models.Model):
195 id = models.CharField(max_length=128, unique=True, primary_key=True,
196 verbose_name=_('Id'),
197 help_text=_(
198 'Key which is used to identify a database'))
199 name = models.CharField(max_length=128, verbose_name=_('Name'),
200 help_text=_(
201 'Database name which will be presented to '
202 'the user'))
203
204 class Meta:
205 verbose_name = _('Database')
206 verbose_name_plural = _('Databases')
207
208 def __str__(self) -> str:
209 return self.name
210
211
212 class DatabaseDependency(models.Model):
213 app_release = models.ForeignKey('AppRelease', on_delete=models.CASCADE,
214 verbose_name=_('App release'),
215 related_name='databasedependencies')
216 database = models.ForeignKey('Database', on_delete=models.CASCADE,
217 verbose_name=_('Database'),
218 related_name='releasedependencies')
219 min_version = models.CharField(max_length=128,
220 verbose_name=_(
221 'Database minimum version'))
222 max_version = models.CharField(max_length=128, blank=True,
223 verbose_name=_(
224 'Database maximum version'))
225
226 class Meta:
227 verbose_name = _('Database Dependency')
228 verbose_name_plural = _('Database Dependencies')
229 unique_together = (('app_release', 'database', 'min_version',
230 'max_version'),)
231
232 def __str__(self) -> str:
233 return '%s: %s >=%s, <=%s' % (self.app_release, self.database,
234 self.min_version,
235 self.max_version)
236
237
238 class PhpExtension(models.Model):
239 id = models.CharField(max_length=128, unique=True, primary_key=True,
240 verbose_name=_('PHP extension'),
241 help_text=_('e.g. libxml'))
242
243 class Meta:
244 verbose_name = _('PHP Extension')
245 verbose_name_plural = _('PHP Extensions')
246
247 def __str__(self) -> str:
248 return self.id
249
250
251 class PhpExtensionDependency(models.Model):
252 app_release = models.ForeignKey('AppRelease', on_delete=models.CASCADE,
253 verbose_name=_('App Release'),
254 related_name='phpextensiondependencies')
255 php_extension = models.ForeignKey('PhpExtension', on_delete=models.CASCADE,
256 verbose_name=_('PHP Extension'),
257 related_name='releasedependencies')
258 min_version = models.CharField(max_length=128,
259 verbose_name=_(
260 'Extension minimum version'))
261 max_version = models.CharField(max_length=128,
262 verbose_name=_(
263 'Extension maximum version'),
264 blank=True)
265
266 class Meta:
267 verbose_name = _('PHP Extension Dependency')
268 verbose_name_plural = _('PHP Extension Dependencies')
269 unique_together = (('app_release', 'php_extension', 'min_version',
270 'max_version'),)
271
272 def __str__(self) -> str:
273 return '%s: %s >=%s, <=%s' % (self.app_release.app, self.php_extension,
274 self.min_version,
275 self.max_version)
276
[end of nextcloudappstore/core/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nextcloudappstore/core/models.py b/nextcloudappstore/core/models.py
--- a/nextcloudappstore/core/models.py
+++ b/nextcloudappstore/core/models.py
@@ -1,3 +1,4 @@
+from django.conf import settings # type: ignore
from django.contrib.auth.models import User # type: ignore
from django.db import models # type: ignore
from django.utils.translation import ugettext_lazy as _ # type: ignore
@@ -32,14 +33,15 @@
verbose_name=_('Created at'))
last_modified = models.DateTimeField(auto_now=True, editable=False,
verbose_name=_('Updated at'))
- owner = models.ForeignKey('auth.User', verbose_name=_('App owner'),
+ owner = models.ForeignKey(settings.AUTH_USER_MODEL,
+ verbose_name=_('App owner'),
on_delete=models.CASCADE,
related_name='owned_apps')
- co_maintainers = models.ManyToManyField('auth.User',
+ co_maintainers = models.ManyToManyField(settings.AUTH_USER_MODEL,
verbose_name=_('Co-Maintainers'),
related_name='co_maintained_apps',
blank=True)
- recommendations = models.ManyToManyField('auth.User',
+ recommendations = models.ManyToManyField(settings.AUTH_USER_MODEL,
verbose_name=_('Recommendations'),
related_name='recommended_apps',
blank=True)
|
{"golden_diff": "diff --git a/nextcloudappstore/core/models.py b/nextcloudappstore/core/models.py\n--- a/nextcloudappstore/core/models.py\n+++ b/nextcloudappstore/core/models.py\n@@ -1,3 +1,4 @@\n+from django.conf import settings # type: ignore\n from django.contrib.auth.models import User # type: ignore\n from django.db import models # type: ignore\n from django.utils.translation import ugettext_lazy as _ # type: ignore\n@@ -32,14 +33,15 @@\n verbose_name=_('Created at'))\n last_modified = models.DateTimeField(auto_now=True, editable=False,\n verbose_name=_('Updated at'))\n- owner = models.ForeignKey('auth.User', verbose_name=_('App owner'),\n+ owner = models.ForeignKey(settings.AUTH_USER_MODEL,\n+ verbose_name=_('App owner'),\n on_delete=models.CASCADE,\n related_name='owned_apps')\n- co_maintainers = models.ManyToManyField('auth.User',\n+ co_maintainers = models.ManyToManyField(settings.AUTH_USER_MODEL,\n verbose_name=_('Co-Maintainers'),\n related_name='co_maintained_apps',\n blank=True)\n- recommendations = models.ManyToManyField('auth.User',\n+ recommendations = models.ManyToManyField(settings.AUTH_USER_MODEL,\n verbose_name=_('Recommendations'),\n related_name='recommended_apps',\n blank=True)\n", "issue": "use settings.AUTH_USER_MODEL\nuse settings.AUTH_USER_MODEL to refer to the auth usr model.\n\nSee https://docs.djangoproject.com/en/1.9/topics/auth/customizing/#referencing-the-user-model\n\n", "before_files": [{"content": "from django.contrib.auth.models import User # type: ignore\nfrom django.db import models # type: ignore\nfrom django.utils.translation import ugettext_lazy as _ # type: ignore\nfrom parler.models import TranslatedFields, TranslatableModel # type: ignore\n\n\nclass App(TranslatableModel):\n id = models.CharField(max_length=128, unique=True, primary_key=True,\n verbose_name=_('Id'),\n help_text=_('app id, same as the folder name'))\n categories = models.ManyToManyField('Category', verbose_name=_('Category'))\n translations = TranslatedFields(\n name=models.CharField(max_length=128, verbose_name=_('Name'),\n help_text=_('Rendered app name for users')),\n description=models.TextField(verbose_name=_('Description'),\n help_text=_(\n 'Will be rendered as Markdown'))\n )\n # resources\n user_docs = models.URLField(max_length=256, blank=True,\n verbose_name=_('User documentation url'))\n admin_docs = models.URLField(max_length=256, blank=True,\n verbose_name=_('Admin documentation url'))\n developer_docs = models.URLField(max_length=256, blank=True,\n verbose_name=_(\n 'Developer documentation url'))\n issue_tracker = models.URLField(max_length=256, blank=True,\n verbose_name=_('Issue tracker url'))\n website = models.URLField(max_length=256, blank=True,\n verbose_name=_('Homepage'))\n created = models.DateTimeField(auto_now_add=True, editable=False,\n verbose_name=_('Created at'))\n last_modified = models.DateTimeField(auto_now=True, editable=False,\n verbose_name=_('Updated at'))\n owner = models.ForeignKey('auth.User', verbose_name=_('App owner'),\n on_delete=models.CASCADE,\n related_name='owned_apps')\n co_maintainers = models.ManyToManyField('auth.User',\n verbose_name=_('Co-Maintainers'),\n related_name='co_maintained_apps',\n blank=True)\n recommendations = models.ManyToManyField('auth.User',\n verbose_name=_('Recommendations'),\n related_name='recommended_apps',\n blank=True)\n\n class Meta:\n verbose_name = _('App')\n verbose_name_plural = _('Apps')\n\n def __str__(self) -> str:\n return self.name\n\n def can_update(self, user: User) -> bool:\n return 
self.owner == user or user in self.co_maintainers.all()\n\n def can_delete(self, user: User) -> bool:\n return self.owner == user\n\n\nclass AppRelease(models.Model):\n version = models.CharField(max_length=128, verbose_name=_('Version'),\n help_text=_(\n 'Version follows Semantic Versioning'))\n app = models.ForeignKey('App', on_delete=models.CASCADE,\n verbose_name=_('App'), related_name='releases')\n # dependencies\n php_extensions = models.ManyToManyField('PhpExtension',\n through='PhpExtensionDependency',\n verbose_name=_(\n 'PHP extension dependency'),\n blank=True)\n databases = models.ManyToManyField('Database',\n through='DatabaseDependency',\n verbose_name=_('Database dependency'),\n blank=True)\n licenses = models.ManyToManyField('License',\n verbose_name=_('License'))\n shell_commands = models.ManyToManyField('ShellCommand', verbose_name=_(\n 'Shell command dependency'), blank=True)\n php_min_version = models.CharField(max_length=128,\n verbose_name=_('PHP minimum version'),\n blank=True)\n php_max_version = models.CharField(max_length=128, blank=True,\n verbose_name=_('PHP maximum version'))\n platform_min_version = models.CharField(max_length=128,\n verbose_name=_(\n 'Platform minimum version'))\n platform_max_version = models.CharField(max_length=128, blank=True,\n verbose_name=_(\n 'Platform maximum version'))\n min_int_size = models.IntegerField(blank=True, default=0,\n verbose_name=_('Minimum Integer Bits'),\n help_text=_(\n 'e.g. 32 for 32bit Integers'))\n checksum = models.CharField(max_length=64,\n verbose_name=_('SHA256 checksum'))\n download = models.URLField(max_length=256, blank=True,\n verbose_name=_('Archive download Url'))\n created = models.DateTimeField(auto_now_add=True, editable=False,\n verbose_name=_('Created at'))\n last_modified = models.DateTimeField(auto_now=True, editable=False,\n verbose_name=_('Updated at'))\n\n class Meta:\n verbose_name = _('App Release')\n verbose_name_plural = _('App Releases')\n unique_together = (('app', 'version'),)\n\n def can_update(self, user: User) -> bool:\n return self.app.owner == user or user in self.app.co_maintainers.all()\n\n def can_delete(self, user: User) -> bool:\n return self.can_update(user)\n\n def __str__(self) -> str:\n return '%s %s' % (self.app, self.version)\n\n\nclass Screenshot(models.Model):\n url = models.URLField(max_length=256, verbose_name=_('Image url'))\n app = models.ForeignKey('App', on_delete=models.CASCADE,\n verbose_name=_('App'), related_name='screenshots')\n ordering = models.IntegerField(verbose_name=_('Ordering'))\n\n class Meta:\n verbose_name = _('Screenshot')\n verbose_name_plural = _('Screenshots')\n ordering = ['ordering']\n\n def __str__(self) -> str:\n return self.url\n\n\nclass ShellCommand(models.Model):\n name = models.CharField(max_length=128, unique=True, primary_key=True,\n verbose_name=_('Shell Command'),\n help_text=_(\n 'Name of a required shell command, e.g. grep'))\n\n class Meta:\n verbose_name = _('Shell Command')\n verbose_name_plural = _('Shell Commands')\n\n def __str__(self) -> str:\n return self.name\n\n\nclass Category(TranslatableModel):\n id = models.CharField(max_length=128, unique=True, primary_key=True,\n help_text=_(\n 'Category id which is used to identify a '\n 'category. 
Used to identify categories when '\n 'uploading an app'), verbose_name=_('Id'))\n created = models.DateTimeField(auto_now_add=True, editable=False,\n verbose_name=_('Created at'))\n last_modified = models.DateTimeField(auto_now=True, editable=False,\n verbose_name=_('Updated at'))\n translations = TranslatedFields(\n name=models.CharField(max_length=128, help_text=_(\n 'Category name which will be presented to the user'),\n verbose_name=_('Name')),\n description=models.TextField(verbose_name=_('Description'),\n help_text=_(\n 'Will be rendered as Markdown'))\n )\n\n class Meta:\n verbose_name = _('Category')\n verbose_name_plural = _('Categories')\n\n def __str__(self) -> str:\n return self.name\n\n\nclass License(models.Model):\n id = models.CharField(max_length=128, unique=True, primary_key=True,\n verbose_name=_('Id'),\n help_text=_(\n 'Key which is used to identify a license'))\n name = models.CharField(max_length=128, verbose_name=_('Name'),\n help_text=_(\n 'License name which will be presented to '\n 'the user'))\n\n class Meta:\n verbose_name = _('License')\n verbose_name_plural = _('Licenses')\n\n def __str__(self) -> str:\n return self.name\n\n\nclass Database(models.Model):\n id = models.CharField(max_length=128, unique=True, primary_key=True,\n verbose_name=_('Id'),\n help_text=_(\n 'Key which is used to identify a database'))\n name = models.CharField(max_length=128, verbose_name=_('Name'),\n help_text=_(\n 'Database name which will be presented to '\n 'the user'))\n\n class Meta:\n verbose_name = _('Database')\n verbose_name_plural = _('Databases')\n\n def __str__(self) -> str:\n return self.name\n\n\nclass DatabaseDependency(models.Model):\n app_release = models.ForeignKey('AppRelease', on_delete=models.CASCADE,\n verbose_name=_('App release'),\n related_name='databasedependencies')\n database = models.ForeignKey('Database', on_delete=models.CASCADE,\n verbose_name=_('Database'),\n related_name='releasedependencies')\n min_version = models.CharField(max_length=128,\n verbose_name=_(\n 'Database minimum version'))\n max_version = models.CharField(max_length=128, blank=True,\n verbose_name=_(\n 'Database maximum version'))\n\n class Meta:\n verbose_name = _('Database Dependency')\n verbose_name_plural = _('Database Dependencies')\n unique_together = (('app_release', 'database', 'min_version',\n 'max_version'),)\n\n def __str__(self) -> str:\n return '%s: %s >=%s, <=%s' % (self.app_release, self.database,\n self.min_version,\n self.max_version)\n\n\nclass PhpExtension(models.Model):\n id = models.CharField(max_length=128, unique=True, primary_key=True,\n verbose_name=_('PHP extension'),\n help_text=_('e.g. 
libxml'))\n\n class Meta:\n verbose_name = _('PHP Extension')\n verbose_name_plural = _('PHP Extensions')\n\n def __str__(self) -> str:\n return self.id\n\n\nclass PhpExtensionDependency(models.Model):\n app_release = models.ForeignKey('AppRelease', on_delete=models.CASCADE,\n verbose_name=_('App Release'),\n related_name='phpextensiondependencies')\n php_extension = models.ForeignKey('PhpExtension', on_delete=models.CASCADE,\n verbose_name=_('PHP Extension'),\n related_name='releasedependencies')\n min_version = models.CharField(max_length=128,\n verbose_name=_(\n 'Extension minimum version'))\n max_version = models.CharField(max_length=128,\n verbose_name=_(\n 'Extension maximum version'),\n blank=True)\n\n class Meta:\n verbose_name = _('PHP Extension Dependency')\n verbose_name_plural = _('PHP Extension Dependencies')\n unique_together = (('app_release', 'php_extension', 'min_version',\n 'max_version'),)\n\n def __str__(self) -> str:\n return '%s: %s >=%s, <=%s' % (self.app_release.app, self.php_extension,\n self.min_version,\n self.max_version)\n", "path": "nextcloudappstore/core/models.py"}]}
| 3,478 | 284 |
gh_patches_debug_31755
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-299
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Whidbey Coffee
http://www.whidbeycoffee.com/pages/locations
</issue>
<code>
[start of locations/spiders/whidbeycoffee.py]
1 import scrapy
2 import re
3 from locations.items import GeojsonPointItem
4
5 DAY_MAPPING = {
6 "Mon": "Mo",
7 "Tue": "Tu",
8 "Wed": "We",
9 "Thu": "Th",
10 "Fri": "Fr",
11 "Sat": "Sa",
12 "Sun": "Su"
13 }
14
15
16 class WhidbeycoffeeSpider(scrapy.Spider):
17
18 name = "whidbeycoffee"
19 allowed_domains = ["www.whidbeycoffee.com"]
20 download_delay = 1
21 start_urls = (
22 'http://www.whidbeycoffee.com/pages/locations',
23 )
24
25 def parse_day(self, day):
26 if re.search('-', day):
27 days = day.split('-')
28 osm_days = []
29 if len(days) == 2:
30 for day in days:
31 try:
32 osm_day = DAY_MAPPING[day.strip()]
33 osm_days.append(osm_day)
34 except:
35 return None
36 return ["-".join(osm_days)]
37 if re.search('Sat', day) or re.search('Sun', day):
38 if re.search('Sat', day) and re.search('Sun', day):
39 return ['Sa' ,'Su']
40 else:
41 return [DAY_MAPPING[day.strip()]]
42
43
44
45 def parse_times(self, times):
46 if times.strip() == 'Closed':
47 return 'off'
48 hours_to = [x.strip() for x in times.split('-')]
49 cleaned_times = []
50
51 for hour in hours_to:
52 if re.search('pm$', hour):
53 hour = re.sub('pm', '', hour).strip()
54 hour_min = hour.split(":")
55 if int(hour_min[0]) < 12:
56 hour_min[0] = str(12 + int(hour_min[0]))
57 cleaned_times.append(":".join(hour_min))
58
59 if re.search('am$', hour):
60 hour = re.sub('am', '', hour).strip()
61 hour_min = hour.split(":")
62 if len(hour_min[0]) <2:
63 hour_min[0] = hour_min[0].zfill(2)
64 else:
65 hour_min[0] = str(12 + int(hour_min[0]))
66
67 cleaned_times.append(":".join(hour_min))
68 return "-".join(cleaned_times)
69
70 def parse_hours(self, lis):
71 hours = []
72 for li in lis:
73 li= li.lstrip()
74 if re.search('&' ,li):
75 day = li.split(':')[0]
76 else:
77 day = re.findall(r"^[^( |:)]+" ,li)[0]
78 times = li.replace(day , "")[1:]
79 print(times)
80 if times and day:
81 parsed_time = self.parse_times(times)
82 parsed_day = self.parse_day(day)
83 if parsed_day!=None:
84 if (len(parsed_day)==2):
85 hours.append(parsed_day[0] + ' ' + parsed_time)
86 hours.append(parsed_day[1] + ' ' + parsed_time)
87 else:
88 hours.append(parsed_day[0] + ' ' + parsed_time)
89
90 return "; ".join(hours)
91
92 def parse(self, response):
93 stores = response.xpath('//h5')
94 for index , store in enumerate(stores):
95 direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()
96 properties = {
97 'addr_full': store.xpath('./following-sibling::p/a/text()').extract()[0],
98 'phone': store.xpath('./following-sibling::p/following-sibling::p/text()').extract()[0],
99 'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],
100 'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],
101 'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],
102 'ref':response.url,
103 'lat':re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[0][2:],
104 'lon': re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[1],
105 }
106 if(index==0):
107 hours = self.parse_hours(store.xpath('./following-sibling::p[3]/text()').extract())
108 else:
109 hours = self.parse_hours(store.xpath('./following-sibling::p[2]/text()').extract()[2:])
110
111 if hours:
112 properties['opening_hours'] = hours
113
114 yield GeojsonPointItem(**properties)
115
[end of locations/spiders/whidbeycoffee.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/whidbeycoffee.py b/locations/spiders/whidbeycoffee.py
--- a/locations/spiders/whidbeycoffee.py
+++ b/locations/spiders/whidbeycoffee.py
@@ -76,7 +76,6 @@
else:
day = re.findall(r"^[^( |:)]+" ,li)[0]
times = li.replace(day , "")[1:]
- print(times)
if times and day:
parsed_time = self.parse_times(times)
parsed_day = self.parse_day(day)
@@ -90,6 +89,7 @@
return "; ".join(hours)
def parse(self, response):
+
stores = response.xpath('//h5')
for index , store in enumerate(stores):
direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()
@@ -99,7 +99,7 @@
'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],
'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],
'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],
- 'ref':response.url,
+ 'ref':store.xpath('normalize-space(./text())').extract_first(),
'lat':re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[0][2:],
'lon': re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[1],
}
|
{"golden_diff": "diff --git a/locations/spiders/whidbeycoffee.py b/locations/spiders/whidbeycoffee.py\n--- a/locations/spiders/whidbeycoffee.py\n+++ b/locations/spiders/whidbeycoffee.py\n@@ -76,7 +76,6 @@\n else:\n day = re.findall(r\"^[^( |:)]+\" ,li)[0]\n times = li.replace(day , \"\")[1:]\n- print(times)\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n@@ -90,6 +89,7 @@\n return \"; \".join(hours)\n \n def parse(self, response):\n+\n stores = response.xpath('//h5')\n for index , store in enumerate(stores):\n direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()\n@@ -99,7 +99,7 @@\n 'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],\n 'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],\n 'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],\n- 'ref':response.url,\n+ 'ref':store.xpath('normalize-space(./text())').extract_first(),\n 'lat':re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[0][2:],\n 'lon': re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[1],\n }\n", "issue": "Whidbey Coffee\nhttp://www.whidbeycoffee.com/pages/locations\n", "before_files": [{"content": "import scrapy\nimport re\nfrom locations.items import GeojsonPointItem\n\nDAY_MAPPING = {\n \"Mon\": \"Mo\",\n \"Tue\": \"Tu\",\n \"Wed\": \"We\",\n \"Thu\": \"Th\",\n \"Fri\": \"Fr\",\n \"Sat\": \"Sa\",\n \"Sun\": \"Su\"\n}\n\n\nclass WhidbeycoffeeSpider(scrapy.Spider):\n\n name = \"whidbeycoffee\"\n allowed_domains = [\"www.whidbeycoffee.com\"]\n download_delay = 1\n start_urls = (\n 'http://www.whidbeycoffee.com/pages/locations',\n )\n\n def parse_day(self, day):\n if re.search('-', day):\n days = day.split('-')\n osm_days = []\n if len(days) == 2:\n for day in days:\n try:\n osm_day = DAY_MAPPING[day.strip()]\n osm_days.append(osm_day)\n except:\n return None\n return [\"-\".join(osm_days)]\n if re.search('Sat', day) or re.search('Sun', day):\n if re.search('Sat', day) and re.search('Sun', day):\n return ['Sa' ,'Su']\n else:\n return [DAY_MAPPING[day.strip()]]\n\n\n\n def parse_times(self, times):\n if times.strip() == 'Closed':\n return 'off'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n\n for hour in hours_to:\n if re.search('pm$', hour):\n hour = re.sub('pm', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('am$', hour):\n hour = re.sub('am', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str(12 + int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n li= li.lstrip()\n if re.search('&' ,li):\n day = li.split(':')[0]\n else:\n day = re.findall(r\"^[^( |:)]+\" ,li)[0]\n times = li.replace(day , \"\")[1:]\n print(times)\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n if parsed_day!=None:\n if (len(parsed_day)==2):\n hours.append(parsed_day[0] + ' ' + parsed_time)\n hours.append(parsed_day[1] + ' ' + parsed_time)\n else:\n hours.append(parsed_day[0] + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n def parse(self, response):\n stores = response.xpath('//h5')\n for index , store in 
enumerate(stores):\n direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()\n properties = {\n 'addr_full': store.xpath('./following-sibling::p/a/text()').extract()[0],\n 'phone': store.xpath('./following-sibling::p/following-sibling::p/text()').extract()[0],\n 'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],\n 'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],\n 'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],\n 'ref':response.url,\n 'lat':re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[0][2:],\n 'lon': re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[1],\n }\n if(index==0):\n hours = self.parse_hours(store.xpath('./following-sibling::p[3]/text()').extract())\n else:\n hours = self.parse_hours(store.xpath('./following-sibling::p[2]/text()').extract()[2:])\n\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/whidbeycoffee.py"}]}
| 1,798 | 372 |