problem_id (string, 18-22 chars) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, 13-58 chars) | prompt (string, 1.71k-18.9k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 465-23.6k chars) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
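The rows that follow are easier to inspect programmatically than to read as a flattened table. As a minimal sketch (the repository id is taken from the `source` column shown in each row, and the split name is an assumption), the same records could be loaded with the Hugging Face `datasets` library:

```python
# Minimal loading sketch. The dataset id mirrors the `source` column
# ("rasdani/github-patches"); the split name "train" is an assumption.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"])
print(row["num_tokens_prompt"], row["num_tokens_diff"])
print(row["prompt"][:300])       # issue statement plus partial code base
print(row["golden_diff"][:300])  # reference patch used for verification
```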
gh_patches_debug_54968 | rasdani/github-patches | git_diff | fedora-infra__bodhi-4148 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash in automatic update handler when submitting work_on_bugs_task
From bodhi-consumer logs:
```
2020-10-25 11:17:14,460 INFO [fedora_messaging.twisted.protocol][MainThread] Consuming message from topic org.fedoraproject.prod.buildsys.tag (message id c2d97737-444f-49b4-b4ca-1efb3a05e941)
2020-10-25 11:17:14,463 INFO [bodhi][PoolThread-twisted.internet.reactor-1] Received message from fedora-messaging with topic: org.fedoraproject.prod.buildsys.tag
2020-10-25 11:17:14,463 INFO [bodhi][PoolThread-twisted.internet.reactor-1] ginac-1.7.9-5.fc34 tagged into f34-updates-candidate
2020-10-25 11:17:14,469 INFO [bodhi][PoolThread-twisted.internet.reactor-1] Build was not submitted, skipping
2020-10-25 11:17:14,838 INFO [bodhi.server][PoolThread-twisted.internet.reactor-1] Sending mail to [email protected]: [Fedora Update] [comment] ginac-1.7.9-5.fc34
2020-10-25 11:17:15,016 ERROR [bodhi][PoolThread-twisted.internet.reactor-1] Instance <Update at 0x7fa3740f5910> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/13/bhk3): Unable to handle message in Automatic Update handler: Id: c2d97737-444f-49b4-b4ca-1efb3a05e941
Topic: org.fedoraproject.prod.buildsys.tag
Headers: {
"fedora_messaging_schema": "base.message",
"fedora_messaging_severity": 20,
"sent-at": "2020-10-25T11:17:14+00:00"
}
Body: {
"build_id": 1634116,
"instance": "primary",
"name": "ginac",
"owner": "---",
"release": "5.fc34",
"tag": "f34-updates-candidate",
"tag_id": 27040,
"user": "---",
"version": "1.7.9"
}
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/bodhi/server/consumers/__init__.py", line 79, in __call__
handler_info.handler(msg)
File "/usr/local/lib/python3.8/site-packages/bodhi/server/consumers/automatic_updates.py", line 197, in __call__
alias = update.alias
File "/usr/lib64/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 287, in __get__
return self.impl.get(instance_state(instance), dict_)
File "/usr/lib64/python3.8/site-packages/sqlalchemy/orm/attributes.py", line 718, in get
value = state._load_expired(state, passive)
File "/usr/lib64/python3.8/site-packages/sqlalchemy/orm/state.py", line 652, in _load_expired
self.manager.deferred_scalar_loader(self, toload)
File "/usr/lib64/python3.8/site-packages/sqlalchemy/orm/loading.py", line 944, in load_scalar_attributes
raise orm_exc.DetachedInstanceError(
sqlalchemy.orm.exc.DetachedInstanceError: Instance <Update at 0x7fa3740f5910> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/13/bhk3 )
2020-10-25 11:17:15,053 WARNI [fedora_messaging.twisted.protocol][MainThread] Returning message id c2d97737-444f-49b4-b4ca-1efb3a05e941 to the queue
```
</issue>
<code>
[start of bodhi/server/consumers/automatic_updates.py]
1 # Copyright © 2019 Red Hat, Inc. and others.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License along with
16 # this program; if not, write to the Free Software Foundation, Inc., 51
17 # Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """
19 The Bodhi handler that creates updates automatically from tagged builds.
20
21 This module is responsible for the process of creating updates when builds are
22 tagged with certain tags.
23 """
24
25 import logging
26 import re
27
28 import fedora_messaging
29
30 from bodhi.server import buildsys
31 from bodhi.server.config import config
32 from bodhi.server.models import (
33 Bug, Build, ContentType, Package, Release, Update, UpdateStatus, UpdateType, User)
34 from bodhi.server.tasks import work_on_bugs_task
35 from bodhi.server.util import transactional_session_maker
36
37 log = logging.getLogger('bodhi')
38
39
40 class AutomaticUpdateHandler:
41 """
42 The Bodhi Automatic Update Handler.
43
44 A consumer that listens for messages about tagged builds and creates
45 updates from them.
46 """
47
48 def __init__(self, db_factory: transactional_session_maker = None):
49 """
50 Initialize the Automatic Update Handler.
51
52 Args:
53 db_factory: If given, used as the db_factory for this handler. If
54 None (the default), a new TransactionalSessionMaker is created and
55 used.
56 """
57 if not db_factory:
58 self.db_factory = transactional_session_maker()
59 else:
60 self.db_factory = db_factory
61
62 def __call__(self, message: fedora_messaging.api.Message) -> None:
63 """Create updates from appropriately tagged builds.
64
65 Args:
66 message: The message we are processing.
67 """
68 body = message.body
69
70 missing = []
71 for mandatory in ('tag', 'build_id', 'name', 'version', 'release'):
72 if mandatory not in body:
73 missing.append(mandatory)
74 if missing:
75 log.debug(f"Received incomplete tag message. Missing: {', '.join(missing)}")
76 return
77
78 btag = body['tag']
79 bnvr = '{name}-{version}-{release}'.format(**body)
80
81 koji = buildsys.get_session()
82
83 kbuildinfo = koji.getBuild(bnvr)
84 if not kbuildinfo:
85 log.debug(f"Can't find Koji build for {bnvr}.")
86 return
87
88 if 'nvr' not in kbuildinfo:
89 log.debug(f"Koji build info for {bnvr} doesn't contain 'nvr'.")
90 return
91
92 if 'owner_name' not in kbuildinfo:
93 log.debug(f"Koji build info for {bnvr} doesn't contain 'owner_name'.")
94 return
95
96 if kbuildinfo['owner_name'] in config.get('automatic_updates_blacklist'):
97 log.debug(f"{bnvr} owned by {kbuildinfo['owner_name']} who is listed in "
98 "automatic_updates_blacklist, skipping.")
99 return
100
101 # some APIs want the Koji build info, some others want the same
102 # wrapped in a larger (request?) structure
103 rbuildinfo = {
104 'info': kbuildinfo,
105 'nvr': kbuildinfo['nvr'].rsplit('-', 2),
106 }
107
108 with self.db_factory() as dbsession:
109 rel = dbsession.query(Release).filter_by(create_automatic_updates=True,
110 candidate_tag=btag).first()
111 if not rel:
112 log.debug(f"Ignoring build being tagged into {btag!r}, no release configured for "
113 "automatic updates for it found.")
114 return
115
116 bcls = ContentType.infer_content_class(Build, kbuildinfo)
117 build = bcls.get(bnvr)
118 if build and build.update:
119 log.info(f"Build, active update for {bnvr} exists already, skipping.")
120 return
121
122 if not build:
123 log.debug(f"Build for {bnvr} doesn't exist yet, creating.")
124
125 # Package.get_or_create() infers content type already
126 log.debug("Getting/creating related package object.")
127 pkg = Package.get_or_create(dbsession, rbuildinfo)
128
129 log.debug("Creating build object, adding it to the DB.")
130 build = bcls(nvr=bnvr, package=pkg, release=rel)
131 dbsession.add(build)
132
133 owner_name = kbuildinfo['owner_name']
134 user = User.get(owner_name)
135 if not user:
136 log.debug(f"Creating bodhi user for '{owner_name}'.")
137 # Leave email, groups blank, these will be filled
138 # in or updated when they log into Bodhi next time, see
139 # bodhi.server.security:remember_me().
140 user = User(name=owner_name)
141 dbsession.add(user)
142
143 log.debug(f"Creating new update for {bnvr}.")
144 changelog = build.get_changelog(lastupdate=True)
145 closing_bugs = []
146 if changelog:
147 log.debug("Adding changelog to update notes.")
148 notes = f"""Automatic update for {bnvr}.
149
150 ##### **Changelog**
151
152 ```
153 {changelog}
154 ```"""
155
156 for b in re.finditer(config.get('bz_regex'), changelog, re.IGNORECASE):
157 idx = int(b.group(1))
158 log.debug(f'Adding bug #{idx} to the update.')
159 bug = Bug.get(idx)
160 if bug is None:
161 bug = Bug(bug_id=idx)
162 dbsession.add(bug)
163 dbsession.flush()
164 if bug not in closing_bugs:
165 closing_bugs.append(bug)
166 else:
167 notes = f"Automatic update for {bnvr}."
168 update = Update(
169 release=rel,
170 builds=[build],
171 bugs=closing_bugs,
172 notes=notes,
173 type=UpdateType.unspecified,
174 stable_karma=3,
175 unstable_karma=-3,
176 autokarma=False,
177 user=user,
178 status=UpdateStatus.pending,
179 )
180
181 # Comment on the update that it was automatically created.
182 update.comment(
183 dbsession,
184 str("This update was automatically created"),
185 author="bodhi",
186 )
187
188 update.add_tag(update.release.pending_signing_tag)
189
190 log.debug("Adding new update to the database.")
191 dbsession.add(update)
192
193 log.debug("Flushing changes to the database.")
194 dbsession.flush()
195
196 # Obsolete older updates which may be stuck in testing due to failed gating
197 try:
198 update.obsolete_older_updates(dbsession)
199 except Exception as e:
200 log.error(f'Problem obsoleting older updates: {e}')
201
202 # This must be run after dbsession is closed so changes are committed to db
203 alias = update.alias
204 work_on_bugs_task.delay(alias, closing_bugs)
205
[end of bodhi/server/consumers/automatic_updates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi/server/consumers/automatic_updates.py b/bodhi/server/consumers/automatic_updates.py
--- a/bodhi/server/consumers/automatic_updates.py
+++ b/bodhi/server/consumers/automatic_updates.py
@@ -199,6 +199,7 @@
except Exception as e:
log.error(f'Problem obsoleting older updates: {e}')
+ alias = update.alias
+
# This must be run after dbsession is closed so changes are committed to db
- alias = update.alias
work_on_bugs_task.delay(alias, closing_bugs)
| {"golden_diff": "diff --git a/bodhi/server/consumers/automatic_updates.py b/bodhi/server/consumers/automatic_updates.py\n--- a/bodhi/server/consumers/automatic_updates.py\n+++ b/bodhi/server/consumers/automatic_updates.py\n@@ -199,6 +199,7 @@\n except Exception as e:\n log.error(f'Problem obsoleting older updates: {e}')\n \n+ alias = update.alias\n+\n # This must be run after dbsession is closed so changes are committed to db\n- alias = update.alias\n work_on_bugs_task.delay(alias, closing_bugs)\n", "issue": "Crash in automatic update handler when submitting work_on_bugs_task\nFrom bodhi-consumer logs:\r\n```\r\n2020-10-25 11:17:14,460 INFO [fedora_messaging.twisted.protocol][MainThread] Consuming message from topic org.fedoraproject.prod.buildsys.tag (message id c2d97737-444f-49b4-b4ca-1efb3a05e941)\r\n2020-10-25 11:17:14,463 INFO [bodhi][PoolThread-twisted.internet.reactor-1] Received message from fedora-messaging with topic: org.fedoraproject.prod.buildsys.tag\r\n2020-10-25 11:17:14,463 INFO [bodhi][PoolThread-twisted.internet.reactor-1] ginac-1.7.9-5.fc34 tagged into f34-updates-candidate\r\n2020-10-25 11:17:14,469 INFO [bodhi][PoolThread-twisted.internet.reactor-1] Build was not submitted, skipping\r\n2020-10-25 11:17:14,838 INFO [bodhi.server][PoolThread-twisted.internet.reactor-1] Sending mail to [email protected]: [Fedora Update] [comment] ginac-1.7.9-5.fc34\r\n2020-10-25 11:17:15,016 ERROR [bodhi][PoolThread-twisted.internet.reactor-1] Instance <Update at 0x7fa3740f5910> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/13/bhk3): Unable to handle message in Automatic Update handler: Id: c2d97737-444f-49b4-b4ca-1efb3a05e941\r\nTopic: org.fedoraproject.prod.buildsys.tag\r\nHeaders: {\r\n \"fedora_messaging_schema\": \"base.message\",\r\n \"fedora_messaging_severity\": 20,\r\n \"sent-at\": \"2020-10-25T11:17:14+00:00\"\r\n}\r\nBody: {\r\n \"build_id\": 1634116,\r\n \"instance\": \"primary\",\r\n \"name\": \"ginac\",\r\n \"owner\": \"---\",\r\n \"release\": \"5.fc34\",\r\n \"tag\": \"f34-updates-candidate\",\r\n \"tag_id\": 27040,\r\n \"user\": \"---\",\r\n \"version\": \"1.7.9\"\r\n}\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/site-packages/bodhi/server/consumers/__init__.py\", line 79, in __call__\r\n handler_info.handler(msg)\r\n File \"/usr/local/lib/python3.8/site-packages/bodhi/server/consumers/automatic_updates.py\", line 197, in __call__\r\n alias = update.alias\r\n File \"/usr/lib64/python3.8/site-packages/sqlalchemy/orm/attributes.py\", line 287, in __get__\r\n return self.impl.get(instance_state(instance), dict_)\r\n File \"/usr/lib64/python3.8/site-packages/sqlalchemy/orm/attributes.py\", line 718, in get\r\n value = state._load_expired(state, passive)\r\n File \"/usr/lib64/python3.8/site-packages/sqlalchemy/orm/state.py\", line 652, in _load_expired\r\n self.manager.deferred_scalar_loader(self, toload)\r\n File \"/usr/lib64/python3.8/site-packages/sqlalchemy/orm/loading.py\", line 944, in load_scalar_attributes\r\n raise orm_exc.DetachedInstanceError(\r\nsqlalchemy.orm.exc.DetachedInstanceError: Instance <Update at 0x7fa3740f5910> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/13/bhk3 )\r\n2020-10-25 11:17:15,053 WARNI [fedora_messaging.twisted.protocol][MainThread] Returning message id c2d97737-444f-49b4-b4ca-1efb3a05e941 to the queue\r\n```\n", "before_files": [{"content": "# Copyright \u00a9 
2019 Red Hat, Inc. and others.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThe Bodhi handler that creates updates automatically from tagged builds.\n\nThis module is responsible for the process of creating updates when builds are\ntagged with certain tags.\n\"\"\"\n\nimport logging\nimport re\n\nimport fedora_messaging\n\nfrom bodhi.server import buildsys\nfrom bodhi.server.config import config\nfrom bodhi.server.models import (\n Bug, Build, ContentType, Package, Release, Update, UpdateStatus, UpdateType, User)\nfrom bodhi.server.tasks import work_on_bugs_task\nfrom bodhi.server.util import transactional_session_maker\n\nlog = logging.getLogger('bodhi')\n\n\nclass AutomaticUpdateHandler:\n \"\"\"\n The Bodhi Automatic Update Handler.\n\n A consumer that listens for messages about tagged builds and creates\n updates from them.\n \"\"\"\n\n def __init__(self, db_factory: transactional_session_maker = None):\n \"\"\"\n Initialize the Automatic Update Handler.\n\n Args:\n db_factory: If given, used as the db_factory for this handler. If\n None (the default), a new TransactionalSessionMaker is created and\n used.\n \"\"\"\n if not db_factory:\n self.db_factory = transactional_session_maker()\n else:\n self.db_factory = db_factory\n\n def __call__(self, message: fedora_messaging.api.Message) -> None:\n \"\"\"Create updates from appropriately tagged builds.\n\n Args:\n message: The message we are processing.\n \"\"\"\n body = message.body\n\n missing = []\n for mandatory in ('tag', 'build_id', 'name', 'version', 'release'):\n if mandatory not in body:\n missing.append(mandatory)\n if missing:\n log.debug(f\"Received incomplete tag message. Missing: {', '.join(missing)}\")\n return\n\n btag = body['tag']\n bnvr = '{name}-{version}-{release}'.format(**body)\n\n koji = buildsys.get_session()\n\n kbuildinfo = koji.getBuild(bnvr)\n if not kbuildinfo:\n log.debug(f\"Can't find Koji build for {bnvr}.\")\n return\n\n if 'nvr' not in kbuildinfo:\n log.debug(f\"Koji build info for {bnvr} doesn't contain 'nvr'.\")\n return\n\n if 'owner_name' not in kbuildinfo:\n log.debug(f\"Koji build info for {bnvr} doesn't contain 'owner_name'.\")\n return\n\n if kbuildinfo['owner_name'] in config.get('automatic_updates_blacklist'):\n log.debug(f\"{bnvr} owned by {kbuildinfo['owner_name']} who is listed in \"\n \"automatic_updates_blacklist, skipping.\")\n return\n\n # some APIs want the Koji build info, some others want the same\n # wrapped in a larger (request?) 
structure\n rbuildinfo = {\n 'info': kbuildinfo,\n 'nvr': kbuildinfo['nvr'].rsplit('-', 2),\n }\n\n with self.db_factory() as dbsession:\n rel = dbsession.query(Release).filter_by(create_automatic_updates=True,\n candidate_tag=btag).first()\n if not rel:\n log.debug(f\"Ignoring build being tagged into {btag!r}, no release configured for \"\n \"automatic updates for it found.\")\n return\n\n bcls = ContentType.infer_content_class(Build, kbuildinfo)\n build = bcls.get(bnvr)\n if build and build.update:\n log.info(f\"Build, active update for {bnvr} exists already, skipping.\")\n return\n\n if not build:\n log.debug(f\"Build for {bnvr} doesn't exist yet, creating.\")\n\n # Package.get_or_create() infers content type already\n log.debug(\"Getting/creating related package object.\")\n pkg = Package.get_or_create(dbsession, rbuildinfo)\n\n log.debug(\"Creating build object, adding it to the DB.\")\n build = bcls(nvr=bnvr, package=pkg, release=rel)\n dbsession.add(build)\n\n owner_name = kbuildinfo['owner_name']\n user = User.get(owner_name)\n if not user:\n log.debug(f\"Creating bodhi user for '{owner_name}'.\")\n # Leave email, groups blank, these will be filled\n # in or updated when they log into Bodhi next time, see\n # bodhi.server.security:remember_me().\n user = User(name=owner_name)\n dbsession.add(user)\n\n log.debug(f\"Creating new update for {bnvr}.\")\n changelog = build.get_changelog(lastupdate=True)\n closing_bugs = []\n if changelog:\n log.debug(\"Adding changelog to update notes.\")\n notes = f\"\"\"Automatic update for {bnvr}.\n\n##### **Changelog**\n\n```\n{changelog}\n```\"\"\"\n\n for b in re.finditer(config.get('bz_regex'), changelog, re.IGNORECASE):\n idx = int(b.group(1))\n log.debug(f'Adding bug #{idx} to the update.')\n bug = Bug.get(idx)\n if bug is None:\n bug = Bug(bug_id=idx)\n dbsession.add(bug)\n dbsession.flush()\n if bug not in closing_bugs:\n closing_bugs.append(bug)\n else:\n notes = f\"Automatic update for {bnvr}.\"\n update = Update(\n release=rel,\n builds=[build],\n bugs=closing_bugs,\n notes=notes,\n type=UpdateType.unspecified,\n stable_karma=3,\n unstable_karma=-3,\n autokarma=False,\n user=user,\n status=UpdateStatus.pending,\n )\n\n # Comment on the update that it was automatically created.\n update.comment(\n dbsession,\n str(\"This update was automatically created\"),\n author=\"bodhi\",\n )\n\n update.add_tag(update.release.pending_signing_tag)\n\n log.debug(\"Adding new update to the database.\")\n dbsession.add(update)\n\n log.debug(\"Flushing changes to the database.\")\n dbsession.flush()\n\n # Obsolete older updates which may be stuck in testing due to failed gating\n try:\n update.obsolete_older_updates(dbsession)\n except Exception as e:\n log.error(f'Problem obsoleting older updates: {e}')\n\n # This must be run after dbsession is closed so changes are committed to db\n alias = update.alias\n work_on_bugs_task.delay(alias, closing_bugs)\n", "path": "bodhi/server/consumers/automatic_updates.py"}]} | 3,695 | 138 |
gh_patches_debug_33977 | rasdani/github-patches | git_diff | interlegis__sapl-3282 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Link de Matéria Legislative em Audiência
## Comportamento Atual
<!--- Se está descrevendo um bug, conte-nos o que acontece em vez do comportamento esperado. -->
<!--- Se está sugerindo uma mudança/melhoria, explique a diferença com o comportamento atual. -->
Quando clica-se no link da Matéria Legislativa, é redirecionado para outra Matéria Legislativa e não a com o título no link
## Possível Solução
<!--- Não é obrigatório, mas sugira uma possível correção/razão para o bug -->
<!--- ou ideias de como implementar a adição/mudança. -->
Número das audiências estão repetidos. Talvez isso tá causando o bug.
</issue>
<code>
[start of sapl/audiencia/views.py]
1 import sapl
2
3 from django.http import HttpResponse
4 from django.urls import reverse
5 from django.views.decorators.clickjacking import xframe_options_exempt
6 from django.views.generic import UpdateView
7 from sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud
8
9 from .forms import AudienciaForm, AnexoAudienciaPublicaForm
10 from .models import AudienciaPublica, AnexoAudienciaPublica
11
12
13 def index(request):
14 return HttpResponse("Audiência Pública")
15
16
17 class AudienciaCrud(Crud):
18 model = AudienciaPublica
19 public = [RP_LIST, RP_DETAIL, ]
20
21 class BaseMixin(Crud.BaseMixin):
22 list_field_names = ['numero', 'nome', 'tipo', 'materia',
23 'data']
24 ordering = '-data', 'nome', 'numero', 'tipo'
25
26 class ListView(Crud.ListView):
27 paginate_by = 10
28
29 def get_context_data(self, **kwargs):
30 context = super().get_context_data(**kwargs)
31
32 audiencia_materia = {}
33 for o in context['object_list']:
34 # indexado pelo numero da audiencia
35 audiencia_materia[str(o.numero)] = o.materia
36
37 for row in context['rows']:
38 coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
39 if coluna_materia[0]:
40 materia = audiencia_materia[row[0][0]]
41 if materia:
42 url_materia = reverse('sapl.materia:materialegislativa_detail',
43 kwargs={'pk': materia.id})
44 else:
45 url_materia = None
46 row[3] = (coluna_materia[0], url_materia)
47 return context
48
49 class CreateView(Crud.CreateView):
50 form_class = AudienciaForm
51
52 def form_valid(self, form):
53 return super(Crud.CreateView, self).form_valid(form)
54
55 class UpdateView(Crud.UpdateView):
56 form_class = AudienciaForm
57
58 def get_initial(self):
59 initial = super(UpdateView, self).get_initial()
60 if self.object.materia:
61 initial['tipo_materia'] = self.object.materia.tipo.id
62 initial['numero_materia'] = self.object.materia.numero
63 initial['ano_materia'] = self.object.materia.ano
64 return initial
65
66 class DeleteView(Crud.DeleteView):
67 pass
68
69 class DetailView(Crud.DetailView):
70
71 layout_key = 'AudienciaPublicaDetail'
72
73 @xframe_options_exempt
74 def get(self, request, *args, **kwargs):
75 return super().get(request, *args, **kwargs)
76
77
78 class AudienciaPublicaMixin:
79
80 def has_permission(self):
81 app_config = sapl.base.models.AppConfig.objects.last()
82 if app_config and app_config.documentos_administrativos == 'O':
83 return True
84
85 return super().has_permission()
86
87
88 class AnexoAudienciaPublicaCrud(MasterDetailCrud):
89 model = AnexoAudienciaPublica
90 parent_field = 'audiencia'
91 help_topic = 'numeracao_docsacess'
92 public = [RP_LIST, RP_DETAIL, ]
93
94 class BaseMixin(MasterDetailCrud.BaseMixin):
95 list_field_names = ['assunto']
96
97 class CreateView(MasterDetailCrud.CreateView):
98 form_class = AnexoAudienciaPublicaForm
99 layout_key = None
100
101 class UpdateView(MasterDetailCrud.UpdateView):
102 form_class = AnexoAudienciaPublicaForm
103
104 class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):
105
106 def get_queryset(self):
107 qs = super(MasterDetailCrud.ListView, self).get_queryset()
108 kwargs = {self.crud.parent_field: self.kwargs['pk']}
109 return qs.filter(**kwargs).order_by('-data', '-id')
110
111 class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):
112 pass
113
[end of sapl/audiencia/views.py]
[start of sapl/audiencia/forms.py]
1 import logging
2
3 from django import forms
4 from django.core.exceptions import ObjectDoesNotExist, ValidationError
5 from django.db import transaction
6 from django.utils.translation import ugettext_lazy as _
7
8 from crispy_forms.layout import Button, Column, Fieldset, HTML, Layout
9
10 from sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica, AnexoAudienciaPublica
11 from sapl.crispy_layout_mixin import form_actions, SaplFormHelper, SaplFormLayout, to_row
12 from sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa
13 from sapl.parlamentares.models import Parlamentar
14 from sapl.utils import timezone, FileFieldCheckMixin, validar_arquivo
15
16 class AudienciaForm(FileFieldCheckMixin, forms.ModelForm):
17 logger = logging.getLogger(__name__)
18 data_atual = timezone.now()
19
20 tipo = forms.ModelChoiceField(
21 required=True,
22 label=_('Tipo de Audiência Pública'),
23 queryset=TipoAudienciaPublica.objects.all().order_by('nome'))
24
25 tipo_materia = forms.ModelChoiceField(
26 label=_('Tipo Matéria'),
27 required=False,
28 queryset=TipoMateriaLegislativa.objects.all(),
29 empty_label=_('Selecione'))
30
31 numero_materia = forms.CharField(
32 label=_('Número Matéria'),
33 required=False)
34
35 ano_materia = forms.CharField(
36 label=_('Ano Matéria'),
37 required=False)
38
39 materia = forms.ModelChoiceField(
40 required=False,
41 widget=forms.HiddenInput(),
42 queryset=MateriaLegislativa.objects.all())
43
44 parlamentar_autor = forms.ModelChoiceField(
45 label=_("Parlamentar Autor"),
46 required=False,
47 queryset=Parlamentar.objects.all())
48
49 requerimento = forms.ModelChoiceField(
50 label=_("Requerimento"),
51 required=False,
52 queryset=MateriaLegislativa.objects.select_related("tipo").filter(tipo__descricao="Requerimento"))
53
54 class Meta:
55 model = AudienciaPublica
56 fields = ['tipo', 'numero', 'nome',
57 'tema', 'data', 'hora_inicio', 'hora_fim',
58 'observacao', 'audiencia_cancelada', 'parlamentar_autor', 'requerimento', 'url_audio',
59 'url_video', 'upload_pauta', 'upload_ata',
60 'upload_anexo', 'tipo_materia', 'numero_materia',
61 'ano_materia', 'materia']
62
63 def __init__(self, **kwargs):
64 super(AudienciaForm, self).__init__(**kwargs)
65
66 tipos = []
67
68 if not self.fields['tipo'].queryset:
69 tipos.append(TipoAudienciaPublica.objects.create(nome='Audiência Pública', tipo='A'))
70 tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))
71 tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))
72 tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))
73
74 for t in tipos:
75 t.save()
76
77 def clean(self):
78 cleaned_data = super(AudienciaForm, self).clean()
79 if not self.is_valid():
80 return cleaned_data
81
82 materia = cleaned_data['numero_materia']
83 ano_materia = cleaned_data['ano_materia']
84 tipo_materia = cleaned_data['tipo_materia']
85 parlamentar_autor = cleaned_data["parlamentar_autor"]
86 requerimento = cleaned_data["requerimento"]
87
88 if materia and ano_materia and tipo_materia:
89 try:
90 self.logger.debug("Tentando obter MateriaLegislativa %s nº %s/%s." % (tipo_materia, materia, ano_materia))
91 materia = MateriaLegislativa.objects.get(
92 numero=materia,
93 ano=ano_materia,
94 tipo=tipo_materia)
95 except ObjectDoesNotExist:
96 msg = _('A matéria %s nº %s/%s não existe no cadastro'
97 ' de matérias legislativas.' % (tipo_materia, materia, ano_materia))
98 self.logger.warning(
99 'A MateriaLegislativa %s nº %s/%s não existe no cadastro'
100 ' de matérias legislativas.' % (tipo_materia, materia, ano_materia)
101 )
102 raise ValidationError(msg)
103 else:
104 self.logger.info("MateriaLegislativa %s nº %s/%s obtida com sucesso." % (tipo_materia, materia, ano_materia))
105 cleaned_data['materia'] = materia
106
107 else:
108 campos = [materia, tipo_materia, ano_materia]
109 if campos.count(None) + campos.count('') < len(campos):
110 msg = _('Preencha todos os campos relacionados à Matéria Legislativa')
111 self.logger.warning(
112 'Algum campo relacionado à MatériaLegislativa %s nº %s/%s \
113 não foi preenchido.' % (tipo_materia, materia, ano_materia)
114 )
115 raise ValidationError(msg)
116
117 if not cleaned_data['numero']:
118
119 ultima_audiencia = AudienciaPublica.objects.all().order_by('numero').last()
120 if ultima_audiencia:
121 cleaned_data['numero'] = ultima_audiencia.numero + 1
122 else:
123 cleaned_data['numero'] = 1
124
125 if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:
126 if self.cleaned_data['hora_fim'] < self.cleaned_data['hora_inicio']:
127 msg = _('A hora de fim ({}) não pode ser anterior a hora de início({})'
128 .format(self.cleaned_data['hora_fim'], self.cleaned_data['hora_inicio']))
129 self.logger.warning(
130 'Hora de fim anterior à hora de início.'
131 )
132 raise ValidationError(msg)
133
134 # requerimento é optativo
135 if parlamentar_autor and requerimento:
136 if parlamentar_autor.autor.first() not in requerimento.autores.all():
137 raise ValidationError("Parlamentar Autor selecionado não faz"
138 " parte da autoria do Requerimento "
139 "selecionado.")
140 elif parlamentar_autor:
141 raise ValidationError("Para informar um autor deve-se informar um requerimento.")
142 elif requerimento:
143 raise ValidationError("Para informar um requerimento deve-se informar um autor.")
144
145
146 upload_pauta = self.cleaned_data.get('upload_pauta', False)
147 upload_ata = self.cleaned_data.get('upload_ata', False)
148 upload_anexo = self.cleaned_data.get('upload_anexo', False)
149
150 if upload_pauta:
151 validar_arquivo(upload_pauta, "Pauta da Audiência Pública")
152
153 if upload_ata:
154 validar_arquivo(upload_ata, "Ata da Audiência Pública")
155
156 if upload_anexo:
157 validar_arquivo(upload_anexo, "Anexo da Audiência Pública")
158
159 return cleaned_data
160
161
162 class AnexoAudienciaPublicaForm(forms.ModelForm):
163
164 class Meta:
165 model = AnexoAudienciaPublica
166 fields = ['arquivo',
167 'assunto']
168
169 def __init__(self, *args, **kwargs):
170
171 row1 = to_row(
172 [('arquivo', 4)])
173
174 row2 = to_row(
175 [('assunto', 12)])
176
177 self.helper = SaplFormHelper()
178 self.helper.layout = SaplFormLayout(
179 Fieldset(_('Identificação Básica'),
180 row1, row2))
181 super(AnexoAudienciaPublicaForm, self).__init__(
182 *args, **kwargs)
183
184 def clean(self):
185 super(AnexoAudienciaPublicaForm, self).clean()
186
187 if not self.is_valid():
188 return self.cleaned_data
189
190 arquivo = self.cleaned_data.get('arquivo', False)
191
192 if arquivo:
193 validar_arquivo(arquivo, "Arquivo")
194
195 return self.cleaned_data
196
[end of sapl/audiencia/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sapl/audiencia/forms.py b/sapl/audiencia/forms.py
--- a/sapl/audiencia/forms.py
+++ b/sapl/audiencia/forms.py
@@ -115,12 +115,14 @@
raise ValidationError(msg)
if not cleaned_data['numero']:
-
ultima_audiencia = AudienciaPublica.objects.all().order_by('numero').last()
if ultima_audiencia:
cleaned_data['numero'] = ultima_audiencia.numero + 1
else:
cleaned_data['numero'] = 1
+ else:
+ if AudienciaPublica.objects.filter(numero=cleaned_data['numero']).exclude(pk=self.instance.pk).exists():
+ raise ValidationError(f"Já existe uma audiência com a numeração {cleaned_data['numero']}.")
if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:
if self.cleaned_data['hora_fim'] < self.cleaned_data['hora_inicio']:
diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py
--- a/sapl/audiencia/views.py
+++ b/sapl/audiencia/views.py
@@ -29,15 +29,13 @@
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
- audiencia_materia = {}
- for o in context['object_list']:
- # indexado pelo numero da audiencia
- audiencia_materia[str(o.numero)] = o.materia
+ audiencia_materia = {str(a.id): a.materia for a in context['object_list']}
for row in context['rows']:
- coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
+ coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
if coluna_materia[0]:
- materia = audiencia_materia[row[0][0]]
+ audiencia_id = row[0][1].split('/')[-1]
+ materia = audiencia_materia[audiencia_id]
if materia:
url_materia = reverse('sapl.materia:materialegislativa_detail',
kwargs={'pk': materia.id})
| {"golden_diff": "diff --git a/sapl/audiencia/forms.py b/sapl/audiencia/forms.py\n--- a/sapl/audiencia/forms.py\n+++ b/sapl/audiencia/forms.py\n@@ -115,12 +115,14 @@\n raise ValidationError(msg)\n \n if not cleaned_data['numero']:\n-\n ultima_audiencia = AudienciaPublica.objects.all().order_by('numero').last()\n if ultima_audiencia:\n cleaned_data['numero'] = ultima_audiencia.numero + 1\n else:\n cleaned_data['numero'] = 1\n+ else:\n+ if AudienciaPublica.objects.filter(numero=cleaned_data['numero']).exclude(pk=self.instance.pk).exists():\n+ raise ValidationError(f\"J\u00e1 existe uma audi\u00eancia com a numera\u00e7\u00e3o {cleaned_data['numero']}.\")\n \n if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:\n if self.cleaned_data['hora_fim'] < self.cleaned_data['hora_inicio']:\ndiff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py\n--- a/sapl/audiencia/views.py\n+++ b/sapl/audiencia/views.py\n@@ -29,15 +29,13 @@\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n \n- audiencia_materia = {}\n- for o in context['object_list']:\n- # indexado pelo numero da audiencia\n- audiencia_materia[str(o.numero)] = o.materia\n+ audiencia_materia = {str(a.id): a.materia for a in context['object_list']}\n \n for row in context['rows']:\n- coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n+ coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n- materia = audiencia_materia[row[0][0]]\n+ audiencia_id = row[0][1].split('/')[-1]\n+ materia = audiencia_materia[audiencia_id]\n if materia:\n url_materia = reverse('sapl.materia:materialegislativa_detail',\n kwargs={'pk': materia.id})\n", "issue": "Link de Mat\u00e9ria Legislative em Audi\u00eancia\n## Comportamento Atual\r\n<!--- Se est\u00e1 descrevendo um bug, conte-nos o que acontece em vez do comportamento esperado. -->\r\n<!--- Se est\u00e1 sugerindo uma mudan\u00e7a/melhoria, explique a diferen\u00e7a com o comportamento atual. -->\r\nQuando clica-se no link da Mat\u00e9ria Legislativa, \u00e9 redirecionado para outra Mat\u00e9ria Legislativa e n\u00e3o a com o t\u00edtulo no link\r\n\r\n## Poss\u00edvel Solu\u00e7\u00e3o\r\n<!--- N\u00e3o \u00e9 obrigat\u00f3rio, mas sugira uma poss\u00edvel corre\u00e7\u00e3o/raz\u00e3o para o bug -->\r\n<!--- ou ideias de como implementar a adi\u00e7\u00e3o/mudan\u00e7a. -->\r\nN\u00famero das audi\u00eancias est\u00e3o repetidos. 
Talvez isso t\u00e1 causando o bug.\n", "before_files": [{"content": "import sapl\n\nfrom django.http import HttpResponse\nfrom django.urls import reverse\nfrom django.views.decorators.clickjacking import xframe_options_exempt\nfrom django.views.generic import UpdateView\nfrom sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud\n\nfrom .forms import AudienciaForm, AnexoAudienciaPublicaForm\nfrom .models import AudienciaPublica, AnexoAudienciaPublica\n\n\ndef index(request):\n return HttpResponse(\"Audi\u00eancia P\u00fablica\")\n\n\nclass AudienciaCrud(Crud):\n model = AudienciaPublica\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['numero', 'nome', 'tipo', 'materia',\n 'data'] \n ordering = '-data', 'nome', 'numero', 'tipo'\n\n class ListView(Crud.ListView):\n paginate_by = 10\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n audiencia_materia = {}\n for o in context['object_list']:\n # indexado pelo numero da audiencia\n audiencia_materia[str(o.numero)] = o.materia\n\n for row in context['rows']:\n coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n materia = audiencia_materia[row[0][0]]\n if materia:\n url_materia = reverse('sapl.materia:materialegislativa_detail',\n kwargs={'pk': materia.id})\n else:\n url_materia = None\n row[3] = (coluna_materia[0], url_materia)\n return context\n\n class CreateView(Crud.CreateView):\n form_class = AudienciaForm\n\n def form_valid(self, form):\n return super(Crud.CreateView, self).form_valid(form)\n\n class UpdateView(Crud.UpdateView):\n form_class = AudienciaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n if self.object.materia:\n initial['tipo_materia'] = self.object.materia.tipo.id\n initial['numero_materia'] = self.object.materia.numero\n initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n pass\n\n class DetailView(Crud.DetailView):\n\n layout_key = 'AudienciaPublicaDetail'\n\n @xframe_options_exempt\n def get(self, request, *args, **kwargs):\n return super().get(request, *args, **kwargs)\n\n\nclass AudienciaPublicaMixin:\n\n def has_permission(self):\n app_config = sapl.base.models.AppConfig.objects.last()\n if app_config and app_config.documentos_administrativos == 'O':\n return True\n\n return super().has_permission()\n\n\nclass AnexoAudienciaPublicaCrud(MasterDetailCrud):\n model = AnexoAudienciaPublica\n parent_field = 'audiencia'\n help_topic = 'numeracao_docsacess'\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['assunto']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = AnexoAudienciaPublicaForm\n layout_key = None\n\n class UpdateView(MasterDetailCrud.UpdateView):\n form_class = AnexoAudienciaPublicaForm\n\n class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):\n\n def get_queryset(self):\n qs = super(MasterDetailCrud.ListView, self).get_queryset()\n kwargs = {self.crud.parent_field: self.kwargs['pk']}\n return qs.filter(**kwargs).order_by('-data', '-id')\n\n class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):\n pass\n", "path": "sapl/audiencia/views.py"}, {"content": "import logging\n\nfrom django import forms\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import transaction\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom crispy_forms.layout import 
Button, Column, Fieldset, HTML, Layout\n\nfrom sapl.audiencia.models import AudienciaPublica, TipoAudienciaPublica, AnexoAudienciaPublica\nfrom sapl.crispy_layout_mixin import form_actions, SaplFormHelper, SaplFormLayout, to_row\nfrom sapl.materia.models import MateriaLegislativa, TipoMateriaLegislativa\nfrom sapl.parlamentares.models import Parlamentar\nfrom sapl.utils import timezone, FileFieldCheckMixin, validar_arquivo\n\nclass AudienciaForm(FileFieldCheckMixin, forms.ModelForm):\n logger = logging.getLogger(__name__)\n data_atual = timezone.now()\n\n tipo = forms.ModelChoiceField(\n required=True,\n label=_('Tipo de Audi\u00eancia P\u00fablica'),\n queryset=TipoAudienciaPublica.objects.all().order_by('nome'))\n\n tipo_materia = forms.ModelChoiceField(\n label=_('Tipo Mat\u00e9ria'),\n required=False,\n queryset=TipoMateriaLegislativa.objects.all(),\n empty_label=_('Selecione'))\n\n numero_materia = forms.CharField(\n label=_('N\u00famero Mat\u00e9ria'),\n required=False)\n\n ano_materia = forms.CharField(\n label=_('Ano Mat\u00e9ria'),\n required=False)\n\n materia = forms.ModelChoiceField(\n required=False,\n widget=forms.HiddenInput(),\n queryset=MateriaLegislativa.objects.all())\n\n parlamentar_autor = forms.ModelChoiceField(\n label=_(\"Parlamentar Autor\"),\n required=False,\n queryset=Parlamentar.objects.all())\n\n requerimento = forms.ModelChoiceField(\n label=_(\"Requerimento\"),\n required=False,\n queryset=MateriaLegislativa.objects.select_related(\"tipo\").filter(tipo__descricao=\"Requerimento\"))\n\n class Meta:\n model = AudienciaPublica\n fields = ['tipo', 'numero', 'nome',\n 'tema', 'data', 'hora_inicio', 'hora_fim',\n 'observacao', 'audiencia_cancelada', 'parlamentar_autor', 'requerimento', 'url_audio',\n 'url_video', 'upload_pauta', 'upload_ata',\n 'upload_anexo', 'tipo_materia', 'numero_materia',\n 'ano_materia', 'materia']\n\n def __init__(self, **kwargs):\n super(AudienciaForm, self).__init__(**kwargs)\n\n tipos = []\n\n if not self.fields['tipo'].queryset:\n tipos.append(TipoAudienciaPublica.objects.create(nome='Audi\u00eancia P\u00fablica', tipo='A'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Plebiscito', tipo='P'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Referendo', tipo='R'))\n tipos.append(TipoAudienciaPublica.objects.create(nome='Iniciativa Popular', tipo='I'))\n\n for t in tipos:\n t.save()\n\n def clean(self):\n cleaned_data = super(AudienciaForm, self).clean()\n if not self.is_valid():\n return cleaned_data\n\n materia = cleaned_data['numero_materia']\n ano_materia = cleaned_data['ano_materia']\n tipo_materia = cleaned_data['tipo_materia']\n parlamentar_autor = cleaned_data[\"parlamentar_autor\"]\n requerimento = cleaned_data[\"requerimento\"]\n\n if materia and ano_materia and tipo_materia:\n try:\n self.logger.debug(\"Tentando obter MateriaLegislativa %s n\u00ba %s/%s.\" % (tipo_materia, materia, ano_materia))\n materia = MateriaLegislativa.objects.get(\n numero=materia,\n ano=ano_materia,\n tipo=tipo_materia)\n except ObjectDoesNotExist:\n msg = _('A mat\u00e9ria %s n\u00ba %s/%s n\u00e3o existe no cadastro'\n ' de mat\u00e9rias legislativas.' % (tipo_materia, materia, ano_materia))\n self.logger.warning(\n 'A MateriaLegislativa %s n\u00ba %s/%s n\u00e3o existe no cadastro'\n ' de mat\u00e9rias legislativas.' 
% (tipo_materia, materia, ano_materia)\n )\n raise ValidationError(msg)\n else:\n self.logger.info(\"MateriaLegislativa %s n\u00ba %s/%s obtida com sucesso.\" % (tipo_materia, materia, ano_materia))\n cleaned_data['materia'] = materia\n\n else:\n campos = [materia, tipo_materia, ano_materia]\n if campos.count(None) + campos.count('') < len(campos):\n msg = _('Preencha todos os campos relacionados \u00e0 Mat\u00e9ria Legislativa')\n self.logger.warning(\n 'Algum campo relacionado \u00e0 Mat\u00e9riaLegislativa %s n\u00ba %s/%s \\\n n\u00e3o foi preenchido.' % (tipo_materia, materia, ano_materia)\n )\n raise ValidationError(msg)\n\n if not cleaned_data['numero']:\n\n ultima_audiencia = AudienciaPublica.objects.all().order_by('numero').last()\n if ultima_audiencia:\n cleaned_data['numero'] = ultima_audiencia.numero + 1\n else:\n cleaned_data['numero'] = 1\n\n if self.cleaned_data['hora_inicio'] and self.cleaned_data['hora_fim']:\n if self.cleaned_data['hora_fim'] < self.cleaned_data['hora_inicio']:\n msg = _('A hora de fim ({}) n\u00e3o pode ser anterior a hora de in\u00edcio({})'\n .format(self.cleaned_data['hora_fim'], self.cleaned_data['hora_inicio']))\n self.logger.warning(\n 'Hora de fim anterior \u00e0 hora de in\u00edcio.'\n )\n raise ValidationError(msg)\n\n # requerimento \u00e9 optativo\n if parlamentar_autor and requerimento:\n if parlamentar_autor.autor.first() not in requerimento.autores.all():\n raise ValidationError(\"Parlamentar Autor selecionado n\u00e3o faz\"\n \" parte da autoria do Requerimento \"\n \"selecionado.\")\n elif parlamentar_autor:\n raise ValidationError(\"Para informar um autor deve-se informar um requerimento.\")\n elif requerimento:\n raise ValidationError(\"Para informar um requerimento deve-se informar um autor.\")\n\n\n upload_pauta = self.cleaned_data.get('upload_pauta', False)\n upload_ata = self.cleaned_data.get('upload_ata', False)\n upload_anexo = self.cleaned_data.get('upload_anexo', False)\n\n if upload_pauta:\n validar_arquivo(upload_pauta, \"Pauta da Audi\u00eancia P\u00fablica\")\n\n if upload_ata:\n validar_arquivo(upload_ata, \"Ata da Audi\u00eancia P\u00fablica\")\n\n if upload_anexo:\n validar_arquivo(upload_anexo, \"Anexo da Audi\u00eancia P\u00fablica\")\n\n return cleaned_data\n\n\nclass AnexoAudienciaPublicaForm(forms.ModelForm):\n\n class Meta:\n model = AnexoAudienciaPublica\n fields = ['arquivo',\n 'assunto']\n\n def __init__(self, *args, **kwargs):\n\n row1 = to_row(\n [('arquivo', 4)])\n\n row2 = to_row(\n [('assunto', 12)])\n\n self.helper = SaplFormHelper()\n self.helper.layout = SaplFormLayout(\n Fieldset(_('Identifica\u00e7\u00e3o B\u00e1sica'),\n row1, row2))\n super(AnexoAudienciaPublicaForm, self).__init__(\n *args, **kwargs)\n\n def clean(self):\n super(AnexoAudienciaPublicaForm, self).clean()\n\n if not self.is_valid():\n return self.cleaned_data\n\n arquivo = self.cleaned_data.get('arquivo', False)\n\n if arquivo:\n validar_arquivo(arquivo, \"Arquivo\")\n\n return self.cleaned_data\n", "path": "sapl/audiencia/forms.py"}]} | 4,060 | 507 |
gh_patches_debug_5135 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4845 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WebSocket view jumps to top on new message
The WebSocket view keeps jumping to the top every time a new message arrives. This makes it basically impossible to work with while the connection is open. I hold down arrow -> it scrolls a bit -> message arrives -> I'm back at the top
_Originally posted by @Prinzhorn in https://github.com/mitmproxy/mitmproxy/issues/4486#issuecomment-796578909_
</issue>
<code>
[start of mitmproxy/tools/console/searchable.py]
1 import urwid
2
3 from mitmproxy.tools.console import signals
4
5
6 class Highlight(urwid.AttrMap):
7
8 def __init__(self, t):
9 urwid.AttrMap.__init__(
10 self,
11 urwid.Text(t.text),
12 "focusfield",
13 )
14 self.backup = t
15
16
17 class Searchable(urwid.ListBox):
18
19 def __init__(self, contents):
20 self.walker = urwid.SimpleFocusListWalker(contents)
21 urwid.ListBox.__init__(self, self.walker)
22 self.search_offset = 0
23 self.current_highlight = None
24 self.search_term = None
25 self.last_search = None
26
27 def keypress(self, size, key):
28 if key == "/":
29 signals.status_prompt.send(
30 prompt = "Search for",
31 text = "",
32 callback = self.set_search
33 )
34 elif key == "n":
35 self.find_next(False)
36 elif key == "N":
37 self.find_next(True)
38 elif key == "m_start":
39 self.set_focus(0)
40 self.walker._modified()
41 elif key == "m_end":
42 self.set_focus(len(self.walker) - 1)
43 self.walker._modified()
44 else:
45 return super().keypress(size, key)
46
47 def set_search(self, text):
48 self.last_search = text
49 self.search_term = text or None
50 self.find_next(False)
51
52 def set_highlight(self, offset):
53 if self.current_highlight is not None:
54 old = self.body[self.current_highlight]
55 self.body[self.current_highlight] = old.backup
56 if offset is None:
57 self.current_highlight = None
58 else:
59 self.body[offset] = Highlight(self.body[offset])
60 self.current_highlight = offset
61
62 def get_text(self, w):
63 if isinstance(w, urwid.Text):
64 return w.text
65 elif isinstance(w, Highlight):
66 return w.backup.text
67 else:
68 return None
69
70 def find_next(self, backwards):
71 if not self.search_term:
72 if self.last_search:
73 self.search_term = self.last_search
74 else:
75 self.set_highlight(None)
76 return
77 # Start search at focus + 1
78 if backwards:
79 rng = range(len(self.body) - 1, -1, -1)
80 else:
81 rng = range(1, len(self.body) + 1)
82 for i in rng:
83 off = (self.focus_position + i) % len(self.body)
84 w = self.body[off]
85 txt = self.get_text(w)
86 if txt and self.search_term in txt:
87 self.set_highlight(off)
88 self.set_focus(off, coming_from="above")
89 self.body._modified()
90 return
91 else:
92 self.set_highlight(None)
93 signals.status_message.send(message="Search not found.", expire=1)
94
[end of mitmproxy/tools/console/searchable.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/tools/console/searchable.py b/mitmproxy/tools/console/searchable.py
--- a/mitmproxy/tools/console/searchable.py
+++ b/mitmproxy/tools/console/searchable.py
@@ -19,6 +19,7 @@
def __init__(self, contents):
self.walker = urwid.SimpleFocusListWalker(contents)
urwid.ListBox.__init__(self, self.walker)
+ self.set_focus(len(self.walker) - 1)
self.search_offset = 0
self.current_highlight = None
self.search_term = None
| {"golden_diff": "diff --git a/mitmproxy/tools/console/searchable.py b/mitmproxy/tools/console/searchable.py\n--- a/mitmproxy/tools/console/searchable.py\n+++ b/mitmproxy/tools/console/searchable.py\n@@ -19,6 +19,7 @@\n def __init__(self, contents):\n self.walker = urwid.SimpleFocusListWalker(contents)\n urwid.ListBox.__init__(self, self.walker)\n+ self.set_focus(len(self.walker) - 1)\n self.search_offset = 0\n self.current_highlight = None\n self.search_term = None\n", "issue": "WebSocket view jumps to top on new message\nThe WebSocket view keeps jumping to the top every time a new message arrives. This makes it basically impossible to work with while the connection is open. I hold down arrow -> it scrolls a bit -> message arrives -> I'm back at the top\r\n\r\n_Originally posted by @Prinzhorn in https://github.com/mitmproxy/mitmproxy/issues/4486#issuecomment-796578909_\n", "before_files": [{"content": "import urwid\n\nfrom mitmproxy.tools.console import signals\n\n\nclass Highlight(urwid.AttrMap):\n\n def __init__(self, t):\n urwid.AttrMap.__init__(\n self,\n urwid.Text(t.text),\n \"focusfield\",\n )\n self.backup = t\n\n\nclass Searchable(urwid.ListBox):\n\n def __init__(self, contents):\n self.walker = urwid.SimpleFocusListWalker(contents)\n urwid.ListBox.__init__(self, self.walker)\n self.search_offset = 0\n self.current_highlight = None\n self.search_term = None\n self.last_search = None\n\n def keypress(self, size, key):\n if key == \"/\":\n signals.status_prompt.send(\n prompt = \"Search for\",\n text = \"\",\n callback = self.set_search\n )\n elif key == \"n\":\n self.find_next(False)\n elif key == \"N\":\n self.find_next(True)\n elif key == \"m_start\":\n self.set_focus(0)\n self.walker._modified()\n elif key == \"m_end\":\n self.set_focus(len(self.walker) - 1)\n self.walker._modified()\n else:\n return super().keypress(size, key)\n\n def set_search(self, text):\n self.last_search = text\n self.search_term = text or None\n self.find_next(False)\n\n def set_highlight(self, offset):\n if self.current_highlight is not None:\n old = self.body[self.current_highlight]\n self.body[self.current_highlight] = old.backup\n if offset is None:\n self.current_highlight = None\n else:\n self.body[offset] = Highlight(self.body[offset])\n self.current_highlight = offset\n\n def get_text(self, w):\n if isinstance(w, urwid.Text):\n return w.text\n elif isinstance(w, Highlight):\n return w.backup.text\n else:\n return None\n\n def find_next(self, backwards):\n if not self.search_term:\n if self.last_search:\n self.search_term = self.last_search\n else:\n self.set_highlight(None)\n return\n # Start search at focus + 1\n if backwards:\n rng = range(len(self.body) - 1, -1, -1)\n else:\n rng = range(1, len(self.body) + 1)\n for i in rng:\n off = (self.focus_position + i) % len(self.body)\n w = self.body[off]\n txt = self.get_text(w)\n if txt and self.search_term in txt:\n self.set_highlight(off)\n self.set_focus(off, coming_from=\"above\")\n self.body._modified()\n return\n else:\n self.set_highlight(None)\n signals.status_message.send(message=\"Search not found.\", expire=1)\n", "path": "mitmproxy/tools/console/searchable.py"}]} | 1,425 | 127 |
gh_patches_debug_24462 | rasdani/github-patches | git_diff | projectmesa__mesa-1343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UX: Split UserSettableParameter into various classes
What if, instead of
```python
number_option = UserSettableParameter('number', 'My Number', value=123)
boolean_option = UserSettableParameter('checkbox', 'My Boolean', value=True)
slider_option = UserSettableParameter('slider', 'My Slider', value=123, min_value=10, max_value=200, step=0.1)
```
we do
```python
number_option = Number("My Number", 123)
boolean_option = Checkbox("My boolean", True)
slider_option = Slider("My slider", 123, min=10, max=200, step=0.1)
```
In addition to looking simpler, more importantly, this allows users to define their own UserParam class.
</issue>
<code>
[start of mesa/visualization/UserParam.py]
1 NUMBER = "number"
2 CHECKBOX = "checkbox"
3 CHOICE = "choice"
4 SLIDER = "slider"
5 STATIC_TEXT = "static_text"
6
7
8 class UserSettableParameter:
9 """A class for providing options to a visualization for a given parameter.
10
11 UserSettableParameter can be used instead of keyword arguments when specifying model parameters in an
12 instance of a `ModularServer` so that the parameter can be adjusted in the UI without restarting the server.
13
14 Validation of correctly-specified params happens on startup of a `ModularServer`. Each param is handled
15 individually in the UI and sends callback events to the server when an option is updated. That option is then
16 re-validated, in the `value.setter` property method to ensure input is correct from the UI to `reset_model`
17 callback.
18
19 Parameter types include:
20 - 'number' - a simple numerical input
21 - 'checkbox' - boolean checkbox
22 - 'choice' - String-based dropdown input, for selecting choices within a model
23 - 'slider' - A number-based slider input with settable increment
24 - 'static_text' - A non-input textbox for displaying model info.
25
26 Examples:
27
28 # Simple number input
29 number_option = UserSettableParameter('number', 'My Number', value=123)
30
31 # Checkbox input
32 boolean_option = UserSettableParameter('checkbox', 'My Boolean', value=True)
33
34 # Choice input
35 choice_option = UserSettableParameter('choice', 'My Choice', value='Default choice',
36 choices=['Default Choice', 'Alternate Choice'])
37
38 # Slider input
39 slider_option = UserSettableParameter('slider', 'My Slider', value=123, min_value=10, max_value=200, step=0.1)
40
41 # Static text
42 static_text = UserSettableParameter('static_text', value="This is a descriptive textbox")
43 """
44
45 NUMBER = NUMBER
46 CHECKBOX = CHECKBOX
47 CHOICE = CHOICE
48 SLIDER = SLIDER
49 STATIC_TEXT = STATIC_TEXT
50
51 TYPES = (NUMBER, CHECKBOX, CHOICE, SLIDER, STATIC_TEXT)
52
53 _ERROR_MESSAGE = "Missing or malformed inputs for '{}' Option '{}'"
54
55 def __init__(
56 self,
57 param_type=None,
58 name="",
59 value=None,
60 min_value=None,
61 max_value=None,
62 step=1,
63 choices=None,
64 description=None,
65 ):
66 if choices is None:
67 choices = list()
68 if param_type not in self.TYPES:
69 raise ValueError(f"{param_type} is not a valid Option type")
70 self.param_type = param_type
71 self.name = name
72 self._value = value
73 self.min_value = min_value
74 self.max_value = max_value
75 self.step = step
76 self.choices = choices
77 self.description = description
78
79 # Validate option types to make sure values are supplied properly
80 msg = self._ERROR_MESSAGE.format(self.param_type, name)
81 valid = True
82
83 if self.param_type == self.NUMBER:
84 valid = not (self.value is None)
85
86 elif self.param_type == self.SLIDER:
87 valid = not (
88 self.value is None or self.min_value is None or self.max_value is None
89 )
90
91 elif self.param_type == self.CHOICE:
92 valid = not (self.value is None or len(self.choices) == 0)
93
94 elif self.param_type == self.CHECKBOX:
95 valid = isinstance(self.value, bool)
96
97 elif self.param_type == self.STATIC_TEXT:
98 valid = isinstance(self.value, str)
99
100 if not valid:
101 raise ValueError(msg)
102
103 @property
104 def value(self):
105 return self._value
106
107 @value.setter
108 def value(self, value):
109 self._value = value
110 if self.param_type == self.SLIDER:
111 if self._value < self.min_value:
112 self._value = self.min_value
113 elif self._value > self.max_value:
114 self._value = self.max_value
115 elif self.param_type == self.CHOICE:
116 if self._value not in self.choices:
117 print(
118 "Selected choice value not in available choices, selected first choice from 'choices' list"
119 )
120 self._value = self.choices[0]
121
122 @property
123 def json(self):
124 result = self.__dict__.copy()
125 result["value"] = result.pop(
126 "_value"
127 ) # Return _value as value, value is the same
128 return result
129
130
131 class UserParam:
132 _ERROR_MESSAGE = "Missing or malformed inputs for '{}' Option '{}'"
133
134 @property
135 def json(self):
136 result = self.__dict__.copy()
137 result["value"] = result.pop(
138 "_value"
139 ) # Return _value as value, value is the same
140 return result
141
142 def maybe_raise_error(self, valid):
143 if not valid:
144 msg = self._ERROR_MESSAGE.format(self.param_type, self.name)
145 raise ValueError(msg)
146
147 @property
148 def value(self):
149 return self._value
150
151 @value.setter
152 def value(self, value):
153 self._value = value
154
155
156 class Slider(UserParam):
157 """
158 A number-based slider input with settable increment.
159
160 Example:
161
162 slider_option = Slider("My Slider", value=123, min_value=10, max_value=200, step=0.1)
163 """
164
165 def __init__(
166 self,
167 name="",
168 value=None,
169 min_value=None,
170 max_value=None,
171 step=1,
172 description=None,
173 ):
174 self.param_type = SLIDER
175 self.name = name
176 self._value = value
177 self.min_value = min_value
178 self.max_value = max_value
179 self.step = step
180 self.description = description
181
182 # Validate option type to make sure values are supplied properly
183 valid = not (
184 self.value is None or self.min_value is None or self.max_value is None
185 )
186 self.maybe_raise_error(valid)
187
188 @property
189 def value(self):
190 return self._value
191
192 @value.setter
193 def value(self, value):
194 self._value = value
195 if self._value < self.min_value:
196 self._value = self.min_value
197 elif self._value > self.max_value:
198 self._value = self.max_value
199
200
201 class Checkbox(UserParam):
202 """
203 Boolean checkbox.
204
205 Example:
206
207 boolean_option = Checkbox('My Boolean', True)
208 """
209
210 def __init__(self, name="", value=None, description=None):
211 self.param_type = CHECKBOX
212 self.name = name
213 self._value = value
214 self.description = description
215
216 # Validate option type to make sure values are supplied properly
217 valid = isinstance(self.value, bool)
218 self.maybe_raise_error(valid)
219
220
221 class Choice(UserParam):
222 """
223 String-based dropdown input, for selecting choices within a model
224
225 Example:
226 choice_option = Choice(
227 'My Choice',
228 value='Default choice',
229 choices=['Default Choice', 'Alternate Choice']
230 )
231 """
232
233 def __init__(self, name="", value=None, choices=None, description=None):
234 self.param_type = CHOICE
235 self.name = name
236 self._value = value
237 self.choices = choices
238 self.description = description
239
240 # Validate option type to make sure values are supplied properly
241 valid = not (self.value is None or len(self.choices) == 0)
242 self.maybe_raise_error(valid)
243
244 @property
245 def value(self):
246 return self._value
247
248 @value.setter
249 def value(self, value):
250 self._value = value
251 if self._value not in self.choices:
252 print(
253 "Selected choice value not in available choices, selected first choice from 'choices' list"
254 )
255 self._value = self.choices[0]
256
257
258 class StaticText(UserParam):
259 """
260 A non-input textbox for displaying model info.
261
262 Example:
263 static_text = StaticText("This is a descriptive textbox")
264 """
265
266 def __init__(self, value=None):
267 self.param_type = STATIC_TEXT
268 self._value = value
269 valid = isinstance(self.value, str)
270 self.maybe_raise_error(valid)
271
[end of mesa/visualization/UserParam.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesa/visualization/UserParam.py b/mesa/visualization/UserParam.py
--- a/mesa/visualization/UserParam.py
+++ b/mesa/visualization/UserParam.py
@@ -1,3 +1,7 @@
+from numbers import Number
+from warnings import warn
+
+
NUMBER = "number"
CHECKBOX = "checkbox"
CHOICE = "choice"
@@ -63,6 +67,13 @@
choices=None,
description=None,
):
+
+ warn(
+ "UserSettableParameter is deprecated in favor of UserParam objects "
+ "such as Slider, Checkbox, Choice, StaticText, NumberInput. "
+ "See the examples folder for how to use them. "
+ "UserSettableParameter will be removed in the next major release."
+ )
if choices is None:
choices = list()
if param_type not in self.TYPES:
@@ -268,3 +279,19 @@
self._value = value
valid = isinstance(self.value, str)
self.maybe_raise_error(valid)
+
+
+class NumberInput(UserParam):
+ """
+ a simple numerical input
+
+ Example:
+ number_option = NumberInput("My Number", value=123)
+ """
+
+ def __init__(self, name="", value=None, description=None):
+ self.param_type = NUMBER
+ self.name = name
+ self._value = value
+ valid = isinstance(self.value, Number)
+ self.maybe_raise_error(valid)
| {"golden_diff": "diff --git a/mesa/visualization/UserParam.py b/mesa/visualization/UserParam.py\n--- a/mesa/visualization/UserParam.py\n+++ b/mesa/visualization/UserParam.py\n@@ -1,3 +1,7 @@\n+from numbers import Number\n+from warnings import warn\n+\n+\n NUMBER = \"number\"\n CHECKBOX = \"checkbox\"\n CHOICE = \"choice\"\n@@ -63,6 +67,13 @@\n choices=None,\n description=None,\n ):\n+\n+ warn(\n+ \"UserSettableParameter is deprecated in favor of UserParam objects \"\n+ \"such as Slider, Checkbox, Choice, StaticText, NumberInput. \"\n+ \"See the examples folder for how to use them. \"\n+ \"UserSettableParameter will be removed in the next major release.\"\n+ )\n if choices is None:\n choices = list()\n if param_type not in self.TYPES:\n@@ -268,3 +279,19 @@\n self._value = value\n valid = isinstance(self.value, str)\n self.maybe_raise_error(valid)\n+\n+\n+class NumberInput(UserParam):\n+ \"\"\"\n+ a simple numerical input\n+\n+ Example:\n+ number_option = NumberInput(\"My Number\", value=123)\n+ \"\"\"\n+\n+ def __init__(self, name=\"\", value=None, description=None):\n+ self.param_type = NUMBER\n+ self.name = name\n+ self._value = value\n+ valid = isinstance(self.value, Number)\n+ self.maybe_raise_error(valid)\n", "issue": "UX: Split UserSettableParameter into various classes\nWhat if, instead of\r\n```python\r\nnumber_option = UserSettableParameter('number', 'My Number', value=123)\r\nboolean_option = UserSettableParameter('checkbox', 'My Boolean', value=True)\r\nslider_option = UserSettableParameter('slider', 'My Slider', value=123, min_value=10, max_value=200, step=0.1)\r\n```\r\n\r\nwe do\r\n```python\r\nnumber_option = Number(\"My Number\", 123)\r\nboolean_option = Checkbox(\"My boolean\", True)\r\nslider_option = Slider(\"My slider\", 123, min=10, max=200, step=0.1)\r\n```\r\n\r\nIn addition to looking simpler, more importantly, this allows users to define their own UserParam class.\n", "before_files": [{"content": "NUMBER = \"number\"\nCHECKBOX = \"checkbox\"\nCHOICE = \"choice\"\nSLIDER = \"slider\"\nSTATIC_TEXT = \"static_text\"\n\n\nclass UserSettableParameter:\n \"\"\"A class for providing options to a visualization for a given parameter.\n\n UserSettableParameter can be used instead of keyword arguments when specifying model parameters in an\n instance of a `ModularServer` so that the parameter can be adjusted in the UI without restarting the server.\n\n Validation of correctly-specified params happens on startup of a `ModularServer`. Each param is handled\n individually in the UI and sends callback events to the server when an option is updated. 
That option is then\n re-validated, in the `value.setter` property method to ensure input is correct from the UI to `reset_model`\n callback.\n\n Parameter types include:\n - 'number' - a simple numerical input\n - 'checkbox' - boolean checkbox\n - 'choice' - String-based dropdown input, for selecting choices within a model\n - 'slider' - A number-based slider input with settable increment\n - 'static_text' - A non-input textbox for displaying model info.\n\n Examples:\n\n # Simple number input\n number_option = UserSettableParameter('number', 'My Number', value=123)\n\n # Checkbox input\n boolean_option = UserSettableParameter('checkbox', 'My Boolean', value=True)\n\n # Choice input\n choice_option = UserSettableParameter('choice', 'My Choice', value='Default choice',\n choices=['Default Choice', 'Alternate Choice'])\n\n # Slider input\n slider_option = UserSettableParameter('slider', 'My Slider', value=123, min_value=10, max_value=200, step=0.1)\n\n # Static text\n static_text = UserSettableParameter('static_text', value=\"This is a descriptive textbox\")\n \"\"\"\n\n NUMBER = NUMBER\n CHECKBOX = CHECKBOX\n CHOICE = CHOICE\n SLIDER = SLIDER\n STATIC_TEXT = STATIC_TEXT\n\n TYPES = (NUMBER, CHECKBOX, CHOICE, SLIDER, STATIC_TEXT)\n\n _ERROR_MESSAGE = \"Missing or malformed inputs for '{}' Option '{}'\"\n\n def __init__(\n self,\n param_type=None,\n name=\"\",\n value=None,\n min_value=None,\n max_value=None,\n step=1,\n choices=None,\n description=None,\n ):\n if choices is None:\n choices = list()\n if param_type not in self.TYPES:\n raise ValueError(f\"{param_type} is not a valid Option type\")\n self.param_type = param_type\n self.name = name\n self._value = value\n self.min_value = min_value\n self.max_value = max_value\n self.step = step\n self.choices = choices\n self.description = description\n\n # Validate option types to make sure values are supplied properly\n msg = self._ERROR_MESSAGE.format(self.param_type, name)\n valid = True\n\n if self.param_type == self.NUMBER:\n valid = not (self.value is None)\n\n elif self.param_type == self.SLIDER:\n valid = not (\n self.value is None or self.min_value is None or self.max_value is None\n )\n\n elif self.param_type == self.CHOICE:\n valid = not (self.value is None or len(self.choices) == 0)\n\n elif self.param_type == self.CHECKBOX:\n valid = isinstance(self.value, bool)\n\n elif self.param_type == self.STATIC_TEXT:\n valid = isinstance(self.value, str)\n\n if not valid:\n raise ValueError(msg)\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n self._value = value\n if self.param_type == self.SLIDER:\n if self._value < self.min_value:\n self._value = self.min_value\n elif self._value > self.max_value:\n self._value = self.max_value\n elif self.param_type == self.CHOICE:\n if self._value not in self.choices:\n print(\n \"Selected choice value not in available choices, selected first choice from 'choices' list\"\n )\n self._value = self.choices[0]\n\n @property\n def json(self):\n result = self.__dict__.copy()\n result[\"value\"] = result.pop(\n \"_value\"\n ) # Return _value as value, value is the same\n return result\n\n\nclass UserParam:\n _ERROR_MESSAGE = \"Missing or malformed inputs for '{}' Option '{}'\"\n\n @property\n def json(self):\n result = self.__dict__.copy()\n result[\"value\"] = result.pop(\n \"_value\"\n ) # Return _value as value, value is the same\n return result\n\n def maybe_raise_error(self, valid):\n if not valid:\n msg = self._ERROR_MESSAGE.format(self.param_type, 
self.name)\n raise ValueError(msg)\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n self._value = value\n\n\nclass Slider(UserParam):\n \"\"\"\n A number-based slider input with settable increment.\n\n Example:\n\n slider_option = Slider(\"My Slider\", value=123, min_value=10, max_value=200, step=0.1)\n \"\"\"\n\n def __init__(\n self,\n name=\"\",\n value=None,\n min_value=None,\n max_value=None,\n step=1,\n description=None,\n ):\n self.param_type = SLIDER\n self.name = name\n self._value = value\n self.min_value = min_value\n self.max_value = max_value\n self.step = step\n self.description = description\n\n # Validate option type to make sure values are supplied properly\n valid = not (\n self.value is None or self.min_value is None or self.max_value is None\n )\n self.maybe_raise_error(valid)\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n self._value = value\n if self._value < self.min_value:\n self._value = self.min_value\n elif self._value > self.max_value:\n self._value = self.max_value\n\n\nclass Checkbox(UserParam):\n \"\"\"\n Boolean checkbox.\n\n Example:\n\n boolean_option = Checkbox('My Boolean', True)\n \"\"\"\n\n def __init__(self, name=\"\", value=None, description=None):\n self.param_type = CHECKBOX\n self.name = name\n self._value = value\n self.description = description\n\n # Validate option type to make sure values are supplied properly\n valid = isinstance(self.value, bool)\n self.maybe_raise_error(valid)\n\n\nclass Choice(UserParam):\n \"\"\"\n String-based dropdown input, for selecting choices within a model\n\n Example:\n choice_option = Choice(\n 'My Choice',\n value='Default choice',\n choices=['Default Choice', 'Alternate Choice']\n )\n \"\"\"\n\n def __init__(self, name=\"\", value=None, choices=None, description=None):\n self.param_type = CHOICE\n self.name = name\n self._value = value\n self.choices = choices\n self.description = description\n\n # Validate option type to make sure values are supplied properly\n valid = not (self.value is None or len(self.choices) == 0)\n self.maybe_raise_error(valid)\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n self._value = value\n if self._value not in self.choices:\n print(\n \"Selected choice value not in available choices, selected first choice from 'choices' list\"\n )\n self._value = self.choices[0]\n\n\nclass StaticText(UserParam):\n \"\"\"\n A non-input textbox for displaying model info.\n\n Example:\n static_text = StaticText(\"This is a descriptive textbox\")\n \"\"\"\n\n def __init__(self, value=None):\n self.param_type = STATIC_TEXT\n self._value = value\n valid = isinstance(self.value, str)\n self.maybe_raise_error(valid)\n", "path": "mesa/visualization/UserParam.py"}]} | 3,235 | 339 |
gh_patches_debug_51302 | rasdani/github-patches | git_diff | translate__pootle-5929 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TMX export incorrectly names files
The TMX files are getting a `.zip` extension inside the ZIP archive.
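A minimal repro of the naming problem, mirroring how `get_filename()` and `export()` in the module below build one name and reuse it for the member inside the archive; the project, language and revision values here are made up for illustration:
```python
from io import BytesIO
from zipfile import ZipFile

# Same pattern as TPTMXExporter: one name is built for the on-disk archive ...
filename = ".".join(["myproject", "fr", "0123456789", "tmx", "zip"])

buf = BytesIO()
with ZipFile(buf, "w") as zf:
    zf.writestr(filename, b"<tmx/>")   # ... and reused, unchanged, as the member name

with ZipFile(buf) as zf:
    print(zf.namelist())               # ['myproject.fr.0123456789.tmx.zip'] -- the .tmx entry ends in .zip
```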
</issue>
<code>
[start of pootle/apps/import_export/utils.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import logging
10 import os
11
12 from io import BytesIO
13 from zipfile import ZipFile
14
15 from translate.storage.factory import getclass
16 from translate.storage import tmx
17
18 from django.conf import settings
19 from django.utils.functional import cached_property
20
21 from pootle.core.delegate import revision
22 from pootle.core.url_helpers import urljoin
23 from pootle.i18n.gettext import ugettext_lazy as _
24 from pootle_app.models.permissions import check_user_permission
25 from pootle_statistics.models import SubmissionTypes
26 from pootle_store.constants import TRANSLATED
27 from pootle_store.models import Store
28
29 from .exceptions import (FileImportError, MissingPootlePathError,
30 MissingPootleRevError, UnsupportedFiletypeError)
31
32
33 logger = logging.getLogger(__name__)
34
35
36 def import_file(f, user=None):
37 ttk = getclass(f)(f.read())
38 if not hasattr(ttk, "parseheader"):
39 raise UnsupportedFiletypeError(_("Unsupported filetype '%s', only PO "
40 "files are supported at this time\n",
41 f.name))
42 header = ttk.parseheader()
43 pootle_path = header.get("X-Pootle-Path")
44 if not pootle_path:
45 raise MissingPootlePathError(_("File '%s' missing X-Pootle-Path "
46 "header\n", f.name))
47
48 rev = header.get("X-Pootle-Revision")
49 if not rev or not rev.isdigit():
50 raise MissingPootleRevError(_("File '%s' missing or invalid "
51 "X-Pootle-Revision header\n",
52 f.name))
53 rev = int(rev)
54
55 try:
56 store = Store.objects.get(pootle_path=pootle_path)
57 except Store.DoesNotExist as e:
58 raise FileImportError(_("Could not create '%s'. Missing "
59 "Project/Language? (%s)", (f.name, e)))
60
61 tp = store.translation_project
62 allow_add_and_obsolete = ((tp.project.checkstyle == 'terminology'
63 or tp.is_template_project)
64 and check_user_permission(user,
65 'administrate',
66 tp.directory))
67 try:
68 store.update(store=ttk, user=user,
69 submission_type=SubmissionTypes.UPLOAD,
70 store_revision=rev,
71 allow_add_and_obsolete=allow_add_and_obsolete)
72 except Exception as e:
73 # This should not happen!
74 logger.error("Error importing file: %s", str(e))
75 raise FileImportError(_("There was an error uploading your file"))
76
77
78 class TPTMXExporter(object):
79
80 def __init__(self, context):
81 self.context = context
82
83 @property
84 def exported_revision(self):
85 return revision.get(self.context.__class__)(
86 self.context).get(key="pootle.offline.tm")
87
88 @cached_property
89 def revision(self):
90 return revision.get(self.context.__class__)(
91 self.context.directory).get(key="stats")[:10] or "0"
92
93 def get_url(self):
94 if self.exported_revision:
95 relative_path = "offline_tm/%s/%s" % (
96 self.context.language.code,
97 self.get_filename(self.exported_revision)
98 )
99 return urljoin(settings.MEDIA_URL, relative_path)
100 return None
101
102 def update_exported_revision(self):
103 if self.has_changes():
104 revision.get(self.context.__class__)(
105 self.context).set(keys=["pootle.offline.tm"],
106 value=self.revision)
107
108 def has_changes(self):
109 return self.revision != self.exported_revision
110
111 def file_exists(self):
112 return os.path.exists(self.abs_filepath)
113
114 @property
115 def last_exported_file_path(self):
116 if not self.exported_revision:
117 return None
118 exported_filename = self.get_filename(self.exported_revision)
119 return os.path.join(self.directory, exported_filename)
120
121 def exported_file_exists(self):
122 if self.last_exported_file_path is None:
123 return False
124 return os.path.exists(self.last_exported_file_path)
125
126 @property
127 def directory(self):
128 return os.path.join(settings.MEDIA_ROOT,
129 'offline_tm',
130 self.context.language.code)
131
132 def get_filename(self, revision):
133 return ".".join([self.context.project.code,
134 self.context.language.code, revision, 'tmx',
135 'zip'])
136
137 def check_tp(self, filename):
138 """Check if filename relates to the context TP."""
139
140 return filename.startswith(".".join([
141 self.context.project.code,
142 self.context.language.code]))
143
144 @property
145 def filename(self):
146 return self.get_filename(self.revision)
147
148 @property
149 def abs_filepath(self):
150 return os.path.join(self.directory, self.filename)
151
152 def export(self, rotate=False):
153 source_language = self.context.project.source_language.code
154 target_language = self.context.language.code
155
156 if not os.path.exists(self.directory):
157 os.makedirs(self.directory)
158
159 tmxfile = tmx.tmxfile()
160 for store in self.context.stores.live().iterator():
161 for unit in store.units.filter(state=TRANSLATED):
162 tmxfile.addtranslation(unit.source, source_language,
163 unit.target, target_language,
164 unit.developer_comment)
165
166 bs = BytesIO()
167 tmxfile.serialize(bs)
168 with open(self.abs_filepath, "wb") as f:
169 with ZipFile(f, "w") as zf:
170 zf.writestr(self.filename, bs.getvalue())
171
172 last_exported_filepath = self.last_exported_file_path
173 self.update_exported_revision()
174
175 removed = []
176 if rotate:
177 for fn in os.listdir(self.directory):
178 # Skip files from other projects.
179 if not self.check_tp(fn):
180 continue
181 filepath = os.path.join(self.directory, fn)
182 if filepath not in [self.abs_filepath, last_exported_filepath]:
183 removed.append(filepath)
184 os.remove(filepath)
185
186 return self.abs_filepath, removed
187
[end of pootle/apps/import_export/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/import_export/utils.py b/pootle/apps/import_export/utils.py
--- a/pootle/apps/import_export/utils.py
+++ b/pootle/apps/import_export/utils.py
@@ -167,7 +167,7 @@
tmxfile.serialize(bs)
with open(self.abs_filepath, "wb") as f:
with ZipFile(f, "w") as zf:
- zf.writestr(self.filename, bs.getvalue())
+ zf.writestr(self.filename.rstrip('.zip'), bs.getvalue())
last_exported_filepath = self.last_exported_file_path
self.update_exported_revision()
| {"golden_diff": "diff --git a/pootle/apps/import_export/utils.py b/pootle/apps/import_export/utils.py\n--- a/pootle/apps/import_export/utils.py\n+++ b/pootle/apps/import_export/utils.py\n@@ -167,7 +167,7 @@\n tmxfile.serialize(bs)\n with open(self.abs_filepath, \"wb\") as f:\n with ZipFile(f, \"w\") as zf:\n- zf.writestr(self.filename, bs.getvalue())\n+ zf.writestr(self.filename.rstrip('.zip'), bs.getvalue())\n \n last_exported_filepath = self.last_exported_file_path\n self.update_exported_revision()\n", "issue": "TMX export incorrectly names files\nthe tmx files are getting a .zip extension inside the zip archive\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\n\nfrom io import BytesIO\nfrom zipfile import ZipFile\n\nfrom translate.storage.factory import getclass\nfrom translate.storage import tmx\n\nfrom django.conf import settings\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.delegate import revision\nfrom pootle.core.url_helpers import urljoin\nfrom pootle.i18n.gettext import ugettext_lazy as _\nfrom pootle_app.models.permissions import check_user_permission\nfrom pootle_statistics.models import SubmissionTypes\nfrom pootle_store.constants import TRANSLATED\nfrom pootle_store.models import Store\n\nfrom .exceptions import (FileImportError, MissingPootlePathError,\n MissingPootleRevError, UnsupportedFiletypeError)\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef import_file(f, user=None):\n ttk = getclass(f)(f.read())\n if not hasattr(ttk, \"parseheader\"):\n raise UnsupportedFiletypeError(_(\"Unsupported filetype '%s', only PO \"\n \"files are supported at this time\\n\",\n f.name))\n header = ttk.parseheader()\n pootle_path = header.get(\"X-Pootle-Path\")\n if not pootle_path:\n raise MissingPootlePathError(_(\"File '%s' missing X-Pootle-Path \"\n \"header\\n\", f.name))\n\n rev = header.get(\"X-Pootle-Revision\")\n if not rev or not rev.isdigit():\n raise MissingPootleRevError(_(\"File '%s' missing or invalid \"\n \"X-Pootle-Revision header\\n\",\n f.name))\n rev = int(rev)\n\n try:\n store = Store.objects.get(pootle_path=pootle_path)\n except Store.DoesNotExist as e:\n raise FileImportError(_(\"Could not create '%s'. Missing \"\n \"Project/Language? 
(%s)\", (f.name, e)))\n\n tp = store.translation_project\n allow_add_and_obsolete = ((tp.project.checkstyle == 'terminology'\n or tp.is_template_project)\n and check_user_permission(user,\n 'administrate',\n tp.directory))\n try:\n store.update(store=ttk, user=user,\n submission_type=SubmissionTypes.UPLOAD,\n store_revision=rev,\n allow_add_and_obsolete=allow_add_and_obsolete)\n except Exception as e:\n # This should not happen!\n logger.error(\"Error importing file: %s\", str(e))\n raise FileImportError(_(\"There was an error uploading your file\"))\n\n\nclass TPTMXExporter(object):\n\n def __init__(self, context):\n self.context = context\n\n @property\n def exported_revision(self):\n return revision.get(self.context.__class__)(\n self.context).get(key=\"pootle.offline.tm\")\n\n @cached_property\n def revision(self):\n return revision.get(self.context.__class__)(\n self.context.directory).get(key=\"stats\")[:10] or \"0\"\n\n def get_url(self):\n if self.exported_revision:\n relative_path = \"offline_tm/%s/%s\" % (\n self.context.language.code,\n self.get_filename(self.exported_revision)\n )\n return urljoin(settings.MEDIA_URL, relative_path)\n return None\n\n def update_exported_revision(self):\n if self.has_changes():\n revision.get(self.context.__class__)(\n self.context).set(keys=[\"pootle.offline.tm\"],\n value=self.revision)\n\n def has_changes(self):\n return self.revision != self.exported_revision\n\n def file_exists(self):\n return os.path.exists(self.abs_filepath)\n\n @property\n def last_exported_file_path(self):\n if not self.exported_revision:\n return None\n exported_filename = self.get_filename(self.exported_revision)\n return os.path.join(self.directory, exported_filename)\n\n def exported_file_exists(self):\n if self.last_exported_file_path is None:\n return False\n return os.path.exists(self.last_exported_file_path)\n\n @property\n def directory(self):\n return os.path.join(settings.MEDIA_ROOT,\n 'offline_tm',\n self.context.language.code)\n\n def get_filename(self, revision):\n return \".\".join([self.context.project.code,\n self.context.language.code, revision, 'tmx',\n 'zip'])\n\n def check_tp(self, filename):\n \"\"\"Check if filename relates to the context TP.\"\"\"\n\n return filename.startswith(\".\".join([\n self.context.project.code,\n self.context.language.code]))\n\n @property\n def filename(self):\n return self.get_filename(self.revision)\n\n @property\n def abs_filepath(self):\n return os.path.join(self.directory, self.filename)\n\n def export(self, rotate=False):\n source_language = self.context.project.source_language.code\n target_language = self.context.language.code\n\n if not os.path.exists(self.directory):\n os.makedirs(self.directory)\n\n tmxfile = tmx.tmxfile()\n for store in self.context.stores.live().iterator():\n for unit in store.units.filter(state=TRANSLATED):\n tmxfile.addtranslation(unit.source, source_language,\n unit.target, target_language,\n unit.developer_comment)\n\n bs = BytesIO()\n tmxfile.serialize(bs)\n with open(self.abs_filepath, \"wb\") as f:\n with ZipFile(f, \"w\") as zf:\n zf.writestr(self.filename, bs.getvalue())\n\n last_exported_filepath = self.last_exported_file_path\n self.update_exported_revision()\n\n removed = []\n if rotate:\n for fn in os.listdir(self.directory):\n # Skip files from other projects.\n if not self.check_tp(fn):\n continue\n filepath = os.path.join(self.directory, fn)\n if filepath not in [self.abs_filepath, last_exported_filepath]:\n removed.append(filepath)\n os.remove(filepath)\n\n return 
self.abs_filepath, removed\n", "path": "pootle/apps/import_export/utils.py"}]} | 2,350 | 136 |
gh_patches_debug_42691 | rasdani/github-patches | git_diff | Kinto__kinto-1966 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove the delete-collection command
This command was introduced a long time ago to fix a very specific use case: we had created a collection with an invalid ID, so it could not be deleted via the HTTP API.
I think we can remove it.
(from #1953)
</issue>
<code>
[start of kinto/scripts.py]
1 import logging
2
3 import transaction as current_transaction
4 from pyramid.settings import asbool
5
6 from kinto.core.storage import exceptions as storage_exceptions
7 from kinto.plugins.quotas import scripts as quotas
8
9
10 logger = logging.getLogger(__name__)
11
12
13 def delete_collection(env, bucket_id, collection_id):
14 registry = env["registry"]
15 settings = registry.settings
16 readonly_mode = asbool(settings.get("readonly", False))
17
18 if readonly_mode:
19 message = "Cannot delete the collection while in readonly mode."
20 logger.error(message)
21 return 31
22
23 bucket = f"/buckets/{bucket_id}"
24 collection = f"/buckets/{bucket_id}/collections/{collection_id}"
25
26 try:
27 registry.storage.get(resource_name="bucket", parent_id="", object_id=bucket_id)
28 except storage_exceptions.ObjectNotFoundError:
29 logger.error(f"Bucket '{bucket}' does not exist.")
30 return 32
31
32 try:
33 registry.storage.get(resource_name="collection", parent_id=bucket, object_id=collection_id)
34 except storage_exceptions.ObjectNotFoundError:
35 logger.error(f"Collection '{collection}' does not exist.")
36 return 33
37
38 deleted = registry.storage.delete_all(
39 resource_name="record", parent_id=collection, with_deleted=False
40 )
41 if len(deleted) == 0:
42 logger.info(f"No records found for '{collection}'.")
43 else:
44 logger.info(f"{len(deleted)} record(s) were deleted.")
45
46 registry.storage.delete(
47 resource_name="collection", parent_id=bucket, object_id=collection_id, with_deleted=False
48 )
49 logger.info(f"'{collection}' collection object was deleted.")
50
51 obj = "/buckets/{bucket_id}/collections/{collection_id}/records/{object_id}"
52 registry.permission.delete_object_permissions(
53 collection,
54 *[
55 obj.format(bucket_id=bucket_id, collection_id=collection_id, object_id=r["id"])
56 for r in deleted
57 ],
58 )
59 logger.info("Related permissions were deleted.")
60
61 current_transaction.commit()
62
63 return 0
64
65
66 def rebuild_quotas(env, dry_run=False):
67 """Administrative command to rebuild quota usage information.
68
69 This command recomputes the amount of space used by all
70 collections and all buckets and updates the quota objects in the
71 storage backend to their correct values. This can be useful when
72 cleaning up after a bug like e.g.
73 https://github.com/Kinto/kinto/issues/1226.
74 """
75 registry = env["registry"]
76 settings = registry.settings
77 readonly_mode = asbool(settings.get("readonly", False))
78
79 # FIXME: readonly_mode is not meant to be a "maintenance mode" but
80 # rather used with a database user that has read-only permissions.
81 # If we ever introduce a maintenance mode, we should maybe enforce
82 # it here.
83 if readonly_mode:
84 message = "Cannot rebuild quotas while in readonly mode."
85 logger.error(message)
86 return 41
87
88 if "kinto.plugins.quotas" not in settings["includes"]:
89 message = "Cannot rebuild quotas when quotas plugin is not installed."
90 logger.error(message)
91 return 42
92
93 quotas.rebuild_quotas(registry.storage, dry_run=dry_run)
94 current_transaction.commit()
95 return 0
96
[end of kinto/scripts.py]
[start of kinto/__main__.py]
1 import argparse
2 import os
3 import subprocess
4 import sys
5 import logging
6 import logging.config
7
8 from kinto.core import scripts as core_scripts
9 from kinto import scripts as kinto_scripts
10 from kinto.plugins.accounts import scripts as accounts_scripts
11 from pyramid.scripts import pserve
12 from pyramid.paster import bootstrap
13 from kinto import __version__
14 from kinto.config import init
15
16 DEFAULT_CONFIG_FILE = os.getenv("KINTO_INI", "config/kinto.ini")
17 DEFAULT_PORT = 8888
18 DEFAULT_LOG_LEVEL = logging.INFO
19 DEFAULT_LOG_FORMAT = "%(levelname)-5.5s %(message)s"
20
21
22 def main(args=None):
23 """The main routine."""
24 if args is None:
25 args = sys.argv[1:]
26
27 parser = argparse.ArgumentParser(description="Kinto Command-Line " "Interface")
28 commands = (
29 "init",
30 "start",
31 "migrate",
32 "delete-collection",
33 "version",
34 "rebuild-quotas",
35 "create-user",
36 )
37 subparsers = parser.add_subparsers(
38 title="subcommands",
39 description="Main Kinto CLI commands",
40 dest="subcommand",
41 help="Choose and run with --help",
42 )
43 subparsers.required = True
44
45 for command in commands:
46 subparser = subparsers.add_parser(command)
47 subparser.set_defaults(which=command)
48
49 subparser.add_argument(
50 "--ini",
51 help="Application configuration file",
52 dest="ini_file",
53 required=False,
54 default=DEFAULT_CONFIG_FILE,
55 )
56
57 subparser.add_argument(
58 "-q",
59 "--quiet",
60 action="store_const",
61 const=logging.CRITICAL,
62 dest="verbosity",
63 help="Show only critical errors.",
64 )
65
66 subparser.add_argument(
67 "-v",
68 "--debug",
69 action="store_const",
70 const=logging.DEBUG,
71 dest="verbosity",
72 help="Show all messages, including debug messages.",
73 )
74
75 if command == "init":
76 subparser.add_argument(
77 "--backend",
78 help="{memory,redis,postgresql}",
79 dest="backend",
80 required=False,
81 default=None,
82 )
83 subparser.add_argument(
84 "--cache-backend",
85 help="{memory,redis,postgresql,memcached}",
86 dest="cache-backend",
87 required=False,
88 default=None,
89 )
90 subparser.add_argument(
91 "--host",
92 help="Host to listen() on.",
93 dest="host",
94 required=False,
95 default="127.0.0.1",
96 )
97 elif command == "migrate":
98 subparser.add_argument(
99 "--dry-run",
100 action="store_true",
101 help="Simulate the migration operations " "and show information",
102 dest="dry_run",
103 required=False,
104 default=False,
105 )
106 elif command == "delete-collection":
107 subparser.add_argument(
108 "--bucket", help="The bucket where the collection " "belongs to.", required=True
109 )
110 subparser.add_argument("--collection", help="The collection to remove.", required=True)
111
112 elif command == "rebuild-quotas":
113 subparser.add_argument(
114 "--dry-run",
115 action="store_true",
116 help="Simulate the rebuild operation " "and show information",
117 dest="dry_run",
118 required=False,
119 default=False,
120 )
121
122 elif command == "start":
123 subparser.add_argument(
124 "--reload",
125 action="store_true",
126 help="Restart when code or config changes",
127 required=False,
128 default=False,
129 )
130 subparser.add_argument(
131 "--port",
132 type=int,
133 help="Listening port number",
134 required=False,
135 default=DEFAULT_PORT,
136 )
137
138 elif command == "create-user":
139 subparser.add_argument(
140 "-u", "--username", help="Superuser username", required=False, default=None
141 )
142 subparser.add_argument(
143 "-p", "--password", help="Superuser password", required=False, default=None
144 )
145
146 # Parse command-line arguments
147 parsed_args = vars(parser.parse_args(args))
148
149 config_file = parsed_args["ini_file"]
150 which_command = parsed_args["which"]
151
152 # Initialize logging from
153 level = parsed_args.get("verbosity") or DEFAULT_LOG_LEVEL
154 logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)
155
156 if which_command == "init":
157 if os.path.exists(config_file):
158 print(f"{config_file} already exists.", file=sys.stderr)
159 return 1
160
161 backend = parsed_args["backend"]
162 cache_backend = parsed_args["cache-backend"]
163 if not backend:
164 while True:
165 prompt = (
166 "Select the backend you would like to use: "
167 "(1 - postgresql, 2 - redis, default - memory) "
168 )
169 answer = input(prompt).strip()
170 try:
171 backends = {"1": "postgresql", "2": "redis", "": "memory"}
172 backend = backends[answer]
173 break
174 except KeyError:
175 pass
176
177 if not cache_backend:
178 while True:
179 prompt = (
180 "Select the cache backend you would like to use: "
181 "(1 - postgresql, 2 - redis, 3 - memcached, default - memory) "
182 )
183 answer = input(prompt).strip()
184 try:
185 cache_backends = {
186 "1": "postgresql",
187 "2": "redis",
188 "3": "memcached",
189 "": "memory",
190 }
191 cache_backend = cache_backends[answer]
192 break
193 except KeyError:
194 pass
195
196 init(config_file, backend, cache_backend, parsed_args["host"])
197
198 # Install postgresql libraries if necessary
199 if backend == "postgresql" or cache_backend == "postgresql":
200 try:
201 import psycopg2 # NOQA
202 except ImportError:
203 subprocess.check_call(
204 [sys.executable, "-m", "pip", "install", "kinto[postgresql]"]
205 )
206 elif backend == "redis" or cache_backend == "redis":
207 try:
208 import kinto_redis # NOQA
209 except ImportError:
210 subprocess.check_call([sys.executable, "-m", "pip", "install", "kinto[redis]"])
211 elif cache_backend == "memcached":
212 try:
213 import memcache # NOQA
214 except ImportError:
215 subprocess.check_call([sys.executable, "-m", "pip", "install", "kinto[memcached]"])
216
217 elif which_command == "migrate":
218 dry_run = parsed_args["dry_run"]
219 env = bootstrap(config_file, options={"command": "migrate"})
220 core_scripts.migrate(env, dry_run=dry_run)
221
222 elif which_command == "delete-collection":
223 env = bootstrap(config_file, options={"command": "delete-collection"})
224 return kinto_scripts.delete_collection(
225 env, parsed_args["bucket"], parsed_args["collection"]
226 )
227
228 elif which_command == "rebuild-quotas":
229 dry_run = parsed_args["dry_run"]
230 env = bootstrap(config_file, options={"command": "rebuild-quotas"})
231 return kinto_scripts.rebuild_quotas(env, dry_run=dry_run)
232
233 elif which_command == "create-user":
234 username = parsed_args["username"]
235 password = parsed_args["password"]
236 env = bootstrap(config_file, options={"command": "create-user"})
237 return accounts_scripts.create_user(env, username=username, password=password)
238
239 elif which_command == "start":
240 pserve_argv = ["pserve"]
241
242 if parsed_args["reload"]:
243 pserve_argv.append("--reload")
244
245 if level == logging.DEBUG:
246 pserve_argv.append("-v")
247
248 if level == logging.CRITICAL:
249 pserve_argv.append("-q")
250
251 pserve_argv.append(config_file)
252 pserve_argv.append(f"http_port={parsed_args['port']}")
253 pserve.main(argv=pserve_argv)
254
255 else:
256 print(__version__)
257
258 return 0
259
[end of kinto/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/__main__.py b/kinto/__main__.py
--- a/kinto/__main__.py
+++ b/kinto/__main__.py
@@ -25,15 +25,7 @@
args = sys.argv[1:]
parser = argparse.ArgumentParser(description="Kinto Command-Line " "Interface")
- commands = (
- "init",
- "start",
- "migrate",
- "delete-collection",
- "version",
- "rebuild-quotas",
- "create-user",
- )
+ commands = ("init", "start", "migrate", "version", "rebuild-quotas", "create-user")
subparsers = parser.add_subparsers(
title="subcommands",
description="Main Kinto CLI commands",
@@ -103,11 +95,6 @@
required=False,
default=False,
)
- elif command == "delete-collection":
- subparser.add_argument(
- "--bucket", help="The bucket where the collection " "belongs to.", required=True
- )
- subparser.add_argument("--collection", help="The collection to remove.", required=True)
elif command == "rebuild-quotas":
subparser.add_argument(
@@ -219,12 +206,6 @@
env = bootstrap(config_file, options={"command": "migrate"})
core_scripts.migrate(env, dry_run=dry_run)
- elif which_command == "delete-collection":
- env = bootstrap(config_file, options={"command": "delete-collection"})
- return kinto_scripts.delete_collection(
- env, parsed_args["bucket"], parsed_args["collection"]
- )
-
elif which_command == "rebuild-quotas":
dry_run = parsed_args["dry_run"]
env = bootstrap(config_file, options={"command": "rebuild-quotas"})
diff --git a/kinto/scripts.py b/kinto/scripts.py
--- a/kinto/scripts.py
+++ b/kinto/scripts.py
@@ -3,66 +3,12 @@
import transaction as current_transaction
from pyramid.settings import asbool
-from kinto.core.storage import exceptions as storage_exceptions
from kinto.plugins.quotas import scripts as quotas
logger = logging.getLogger(__name__)
-def delete_collection(env, bucket_id, collection_id):
- registry = env["registry"]
- settings = registry.settings
- readonly_mode = asbool(settings.get("readonly", False))
-
- if readonly_mode:
- message = "Cannot delete the collection while in readonly mode."
- logger.error(message)
- return 31
-
- bucket = f"/buckets/{bucket_id}"
- collection = f"/buckets/{bucket_id}/collections/{collection_id}"
-
- try:
- registry.storage.get(resource_name="bucket", parent_id="", object_id=bucket_id)
- except storage_exceptions.ObjectNotFoundError:
- logger.error(f"Bucket '{bucket}' does not exist.")
- return 32
-
- try:
- registry.storage.get(resource_name="collection", parent_id=bucket, object_id=collection_id)
- except storage_exceptions.ObjectNotFoundError:
- logger.error(f"Collection '{collection}' does not exist.")
- return 33
-
- deleted = registry.storage.delete_all(
- resource_name="record", parent_id=collection, with_deleted=False
- )
- if len(deleted) == 0:
- logger.info(f"No records found for '{collection}'.")
- else:
- logger.info(f"{len(deleted)} record(s) were deleted.")
-
- registry.storage.delete(
- resource_name="collection", parent_id=bucket, object_id=collection_id, with_deleted=False
- )
- logger.info(f"'{collection}' collection object was deleted.")
-
- obj = "/buckets/{bucket_id}/collections/{collection_id}/records/{object_id}"
- registry.permission.delete_object_permissions(
- collection,
- *[
- obj.format(bucket_id=bucket_id, collection_id=collection_id, object_id=r["id"])
- for r in deleted
- ],
- )
- logger.info("Related permissions were deleted.")
-
- current_transaction.commit()
-
- return 0
-
-
def rebuild_quotas(env, dry_run=False):
"""Administrative command to rebuild quota usage information.
| {"golden_diff": "diff --git a/kinto/__main__.py b/kinto/__main__.py\n--- a/kinto/__main__.py\n+++ b/kinto/__main__.py\n@@ -25,15 +25,7 @@\n args = sys.argv[1:]\n \n parser = argparse.ArgumentParser(description=\"Kinto Command-Line \" \"Interface\")\n- commands = (\n- \"init\",\n- \"start\",\n- \"migrate\",\n- \"delete-collection\",\n- \"version\",\n- \"rebuild-quotas\",\n- \"create-user\",\n- )\n+ commands = (\"init\", \"start\", \"migrate\", \"version\", \"rebuild-quotas\", \"create-user\")\n subparsers = parser.add_subparsers(\n title=\"subcommands\",\n description=\"Main Kinto CLI commands\",\n@@ -103,11 +95,6 @@\n required=False,\n default=False,\n )\n- elif command == \"delete-collection\":\n- subparser.add_argument(\n- \"--bucket\", help=\"The bucket where the collection \" \"belongs to.\", required=True\n- )\n- subparser.add_argument(\"--collection\", help=\"The collection to remove.\", required=True)\n \n elif command == \"rebuild-quotas\":\n subparser.add_argument(\n@@ -219,12 +206,6 @@\n env = bootstrap(config_file, options={\"command\": \"migrate\"})\n core_scripts.migrate(env, dry_run=dry_run)\n \n- elif which_command == \"delete-collection\":\n- env = bootstrap(config_file, options={\"command\": \"delete-collection\"})\n- return kinto_scripts.delete_collection(\n- env, parsed_args[\"bucket\"], parsed_args[\"collection\"]\n- )\n-\n elif which_command == \"rebuild-quotas\":\n dry_run = parsed_args[\"dry_run\"]\n env = bootstrap(config_file, options={\"command\": \"rebuild-quotas\"})\ndiff --git a/kinto/scripts.py b/kinto/scripts.py\n--- a/kinto/scripts.py\n+++ b/kinto/scripts.py\n@@ -3,66 +3,12 @@\n import transaction as current_transaction\n from pyramid.settings import asbool\n \n-from kinto.core.storage import exceptions as storage_exceptions\n from kinto.plugins.quotas import scripts as quotas\n \n \n logger = logging.getLogger(__name__)\n \n \n-def delete_collection(env, bucket_id, collection_id):\n- registry = env[\"registry\"]\n- settings = registry.settings\n- readonly_mode = asbool(settings.get(\"readonly\", False))\n-\n- if readonly_mode:\n- message = \"Cannot delete the collection while in readonly mode.\"\n- logger.error(message)\n- return 31\n-\n- bucket = f\"/buckets/{bucket_id}\"\n- collection = f\"/buckets/{bucket_id}/collections/{collection_id}\"\n-\n- try:\n- registry.storage.get(resource_name=\"bucket\", parent_id=\"\", object_id=bucket_id)\n- except storage_exceptions.ObjectNotFoundError:\n- logger.error(f\"Bucket '{bucket}' does not exist.\")\n- return 32\n-\n- try:\n- registry.storage.get(resource_name=\"collection\", parent_id=bucket, object_id=collection_id)\n- except storage_exceptions.ObjectNotFoundError:\n- logger.error(f\"Collection '{collection}' does not exist.\")\n- return 33\n-\n- deleted = registry.storage.delete_all(\n- resource_name=\"record\", parent_id=collection, with_deleted=False\n- )\n- if len(deleted) == 0:\n- logger.info(f\"No records found for '{collection}'.\")\n- else:\n- logger.info(f\"{len(deleted)} record(s) were deleted.\")\n-\n- registry.storage.delete(\n- resource_name=\"collection\", parent_id=bucket, object_id=collection_id, with_deleted=False\n- )\n- logger.info(f\"'{collection}' collection object was deleted.\")\n-\n- obj = \"/buckets/{bucket_id}/collections/{collection_id}/records/{object_id}\"\n- registry.permission.delete_object_permissions(\n- collection,\n- *[\n- obj.format(bucket_id=bucket_id, collection_id=collection_id, object_id=r[\"id\"])\n- for r in deleted\n- ],\n- )\n- logger.info(\"Related permissions were 
deleted.\")\n-\n- current_transaction.commit()\n-\n- return 0\n-\n-\n def rebuild_quotas(env, dry_run=False):\n \"\"\"Administrative command to rebuild quota usage information.\n", "issue": "Remove the delete-collection command \nThis command was introduced to fix a very specific use-case a long time ago where we had created a collection with an invalid ID. It could thus not be deleted via the HTTP API. \r\n\r\nI think we can remove it.\r\n\r\n(from #1953)\n", "before_files": [{"content": "import logging\n\nimport transaction as current_transaction\nfrom pyramid.settings import asbool\n\nfrom kinto.core.storage import exceptions as storage_exceptions\nfrom kinto.plugins.quotas import scripts as quotas\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef delete_collection(env, bucket_id, collection_id):\n registry = env[\"registry\"]\n settings = registry.settings\n readonly_mode = asbool(settings.get(\"readonly\", False))\n\n if readonly_mode:\n message = \"Cannot delete the collection while in readonly mode.\"\n logger.error(message)\n return 31\n\n bucket = f\"/buckets/{bucket_id}\"\n collection = f\"/buckets/{bucket_id}/collections/{collection_id}\"\n\n try:\n registry.storage.get(resource_name=\"bucket\", parent_id=\"\", object_id=bucket_id)\n except storage_exceptions.ObjectNotFoundError:\n logger.error(f\"Bucket '{bucket}' does not exist.\")\n return 32\n\n try:\n registry.storage.get(resource_name=\"collection\", parent_id=bucket, object_id=collection_id)\n except storage_exceptions.ObjectNotFoundError:\n logger.error(f\"Collection '{collection}' does not exist.\")\n return 33\n\n deleted = registry.storage.delete_all(\n resource_name=\"record\", parent_id=collection, with_deleted=False\n )\n if len(deleted) == 0:\n logger.info(f\"No records found for '{collection}'.\")\n else:\n logger.info(f\"{len(deleted)} record(s) were deleted.\")\n\n registry.storage.delete(\n resource_name=\"collection\", parent_id=bucket, object_id=collection_id, with_deleted=False\n )\n logger.info(f\"'{collection}' collection object was deleted.\")\n\n obj = \"/buckets/{bucket_id}/collections/{collection_id}/records/{object_id}\"\n registry.permission.delete_object_permissions(\n collection,\n *[\n obj.format(bucket_id=bucket_id, collection_id=collection_id, object_id=r[\"id\"])\n for r in deleted\n ],\n )\n logger.info(\"Related permissions were deleted.\")\n\n current_transaction.commit()\n\n return 0\n\n\ndef rebuild_quotas(env, dry_run=False):\n \"\"\"Administrative command to rebuild quota usage information.\n\n This command recomputes the amount of space used by all\n collections and all buckets and updates the quota objects in the\n storage backend to their correct values. 
This can be useful when\n cleaning up after a bug like e.g.\n https://github.com/Kinto/kinto/issues/1226.\n \"\"\"\n registry = env[\"registry\"]\n settings = registry.settings\n readonly_mode = asbool(settings.get(\"readonly\", False))\n\n # FIXME: readonly_mode is not meant to be a \"maintenance mode\" but\n # rather used with a database user that has read-only permissions.\n # If we ever introduce a maintenance mode, we should maybe enforce\n # it here.\n if readonly_mode:\n message = \"Cannot rebuild quotas while in readonly mode.\"\n logger.error(message)\n return 41\n\n if \"kinto.plugins.quotas\" not in settings[\"includes\"]:\n message = \"Cannot rebuild quotas when quotas plugin is not installed.\"\n logger.error(message)\n return 42\n\n quotas.rebuild_quotas(registry.storage, dry_run=dry_run)\n current_transaction.commit()\n return 0\n", "path": "kinto/scripts.py"}, {"content": "import argparse\nimport os\nimport subprocess\nimport sys\nimport logging\nimport logging.config\n\nfrom kinto.core import scripts as core_scripts\nfrom kinto import scripts as kinto_scripts\nfrom kinto.plugins.accounts import scripts as accounts_scripts\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\nfrom kinto import __version__\nfrom kinto.config import init\n\nDEFAULT_CONFIG_FILE = os.getenv(\"KINTO_INI\", \"config/kinto.ini\")\nDEFAULT_PORT = 8888\nDEFAULT_LOG_LEVEL = logging.INFO\nDEFAULT_LOG_FORMAT = \"%(levelname)-5.5s %(message)s\"\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description=\"Kinto Command-Line \" \"Interface\")\n commands = (\n \"init\",\n \"start\",\n \"migrate\",\n \"delete-collection\",\n \"version\",\n \"rebuild-quotas\",\n \"create-user\",\n )\n subparsers = parser.add_subparsers(\n title=\"subcommands\",\n description=\"Main Kinto CLI commands\",\n dest=\"subcommand\",\n help=\"Choose and run with --help\",\n )\n subparsers.required = True\n\n for command in commands:\n subparser = subparsers.add_parser(command)\n subparser.set_defaults(which=command)\n\n subparser.add_argument(\n \"--ini\",\n help=\"Application configuration file\",\n dest=\"ini_file\",\n required=False,\n default=DEFAULT_CONFIG_FILE,\n )\n\n subparser.add_argument(\n \"-q\",\n \"--quiet\",\n action=\"store_const\",\n const=logging.CRITICAL,\n dest=\"verbosity\",\n help=\"Show only critical errors.\",\n )\n\n subparser.add_argument(\n \"-v\",\n \"--debug\",\n action=\"store_const\",\n const=logging.DEBUG,\n dest=\"verbosity\",\n help=\"Show all messages, including debug messages.\",\n )\n\n if command == \"init\":\n subparser.add_argument(\n \"--backend\",\n help=\"{memory,redis,postgresql}\",\n dest=\"backend\",\n required=False,\n default=None,\n )\n subparser.add_argument(\n \"--cache-backend\",\n help=\"{memory,redis,postgresql,memcached}\",\n dest=\"cache-backend\",\n required=False,\n default=None,\n )\n subparser.add_argument(\n \"--host\",\n help=\"Host to listen() on.\",\n dest=\"host\",\n required=False,\n default=\"127.0.0.1\",\n )\n elif command == \"migrate\":\n subparser.add_argument(\n \"--dry-run\",\n action=\"store_true\",\n help=\"Simulate the migration operations \" \"and show information\",\n dest=\"dry_run\",\n required=False,\n default=False,\n )\n elif command == \"delete-collection\":\n subparser.add_argument(\n \"--bucket\", help=\"The bucket where the collection \" \"belongs to.\", required=True\n )\n subparser.add_argument(\"--collection\", help=\"The collection to 
remove.\", required=True)\n\n elif command == \"rebuild-quotas\":\n subparser.add_argument(\n \"--dry-run\",\n action=\"store_true\",\n help=\"Simulate the rebuild operation \" \"and show information\",\n dest=\"dry_run\",\n required=False,\n default=False,\n )\n\n elif command == \"start\":\n subparser.add_argument(\n \"--reload\",\n action=\"store_true\",\n help=\"Restart when code or config changes\",\n required=False,\n default=False,\n )\n subparser.add_argument(\n \"--port\",\n type=int,\n help=\"Listening port number\",\n required=False,\n default=DEFAULT_PORT,\n )\n\n elif command == \"create-user\":\n subparser.add_argument(\n \"-u\", \"--username\", help=\"Superuser username\", required=False, default=None\n )\n subparser.add_argument(\n \"-p\", \"--password\", help=\"Superuser password\", required=False, default=None\n )\n\n # Parse command-line arguments\n parsed_args = vars(parser.parse_args(args))\n\n config_file = parsed_args[\"ini_file\"]\n which_command = parsed_args[\"which\"]\n\n # Initialize logging from\n level = parsed_args.get(\"verbosity\") or DEFAULT_LOG_LEVEL\n logging.basicConfig(level=level, format=DEFAULT_LOG_FORMAT)\n\n if which_command == \"init\":\n if os.path.exists(config_file):\n print(f\"{config_file} already exists.\", file=sys.stderr)\n return 1\n\n backend = parsed_args[\"backend\"]\n cache_backend = parsed_args[\"cache-backend\"]\n if not backend:\n while True:\n prompt = (\n \"Select the backend you would like to use: \"\n \"(1 - postgresql, 2 - redis, default - memory) \"\n )\n answer = input(prompt).strip()\n try:\n backends = {\"1\": \"postgresql\", \"2\": \"redis\", \"\": \"memory\"}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n if not cache_backend:\n while True:\n prompt = (\n \"Select the cache backend you would like to use: \"\n \"(1 - postgresql, 2 - redis, 3 - memcached, default - memory) \"\n )\n answer = input(prompt).strip()\n try:\n cache_backends = {\n \"1\": \"postgresql\",\n \"2\": \"redis\",\n \"3\": \"memcached\",\n \"\": \"memory\",\n }\n cache_backend = cache_backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend, cache_backend, parsed_args[\"host\"])\n\n # Install postgresql libraries if necessary\n if backend == \"postgresql\" or cache_backend == \"postgresql\":\n try:\n import psycopg2 # NOQA\n except ImportError:\n subprocess.check_call(\n [sys.executable, \"-m\", \"pip\", \"install\", \"kinto[postgresql]\"]\n )\n elif backend == \"redis\" or cache_backend == \"redis\":\n try:\n import kinto_redis # NOQA\n except ImportError:\n subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"kinto[redis]\"])\n elif cache_backend == \"memcached\":\n try:\n import memcache # NOQA\n except ImportError:\n subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", \"kinto[memcached]\"])\n\n elif which_command == \"migrate\":\n dry_run = parsed_args[\"dry_run\"]\n env = bootstrap(config_file, options={\"command\": \"migrate\"})\n core_scripts.migrate(env, dry_run=dry_run)\n\n elif which_command == \"delete-collection\":\n env = bootstrap(config_file, options={\"command\": \"delete-collection\"})\n return kinto_scripts.delete_collection(\n env, parsed_args[\"bucket\"], parsed_args[\"collection\"]\n )\n\n elif which_command == \"rebuild-quotas\":\n dry_run = parsed_args[\"dry_run\"]\n env = bootstrap(config_file, options={\"command\": \"rebuild-quotas\"})\n return kinto_scripts.rebuild_quotas(env, dry_run=dry_run)\n\n elif which_command == \"create-user\":\n username = 
parsed_args[\"username\"]\n password = parsed_args[\"password\"]\n env = bootstrap(config_file, options={\"command\": \"create-user\"})\n return accounts_scripts.create_user(env, username=username, password=password)\n\n elif which_command == \"start\":\n pserve_argv = [\"pserve\"]\n\n if parsed_args[\"reload\"]:\n pserve_argv.append(\"--reload\")\n\n if level == logging.DEBUG:\n pserve_argv.append(\"-v\")\n\n if level == logging.CRITICAL:\n pserve_argv.append(\"-q\")\n\n pserve_argv.append(config_file)\n pserve_argv.append(f\"http_port={parsed_args['port']}\")\n pserve.main(argv=pserve_argv)\n\n else:\n print(__version__)\n\n return 0\n", "path": "kinto/__main__.py"}]} | 3,870 | 945 |
gh_patches_debug_18865 | rasdani/github-patches | git_diff | dask__dask-5332 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
read_sql_table creates delayed objects without provided sqlalchemy connection arguments
I am trying to load a table from Presto using Dask `read_sql_table`. The Presto server runs on `https`. As per the [pyhive docs](https://github.com/dropbox/PyHive#passing-session-configuration), if the server uses SSL you have to pass `{'engine_kwargs': {'connect_args': {'protocol': 'https'}}}`.
Code:
```python
import dask.dataframe as dd
df = dd.read_sql_table('test_extension', 'presto://server_url/hive',
index_col='col_name', divisions=[0, 30000, 60000], head_rows=100,
engine_kwargs={'connect_args': {'protocol': 'https'}})
# up to here everything is fine: it extracts metadata and creates the correct number of partitions.
df.head()
```
`df.head()` fails, saying an http request was made to an https endpoint. I checked this with a debugger as well: the request is made over the http protocol.
I suspect the problem is at https://github.com/dask/dask/blob/6fd2eb57158dbf260e29311e84f6a389904f0ef2/dask/dataframe/io/sql.py#L191, which does not pass the engine kwargs to https://github.com/dask/dask/blob/6fd2eb57158dbf260e29311e84f6a389904f0ef2/dask/dataframe/io/sql.py#L196
If the server is on http and I change the protocol param to http, then everything works.
</issue>
<code>
[start of dask/dataframe/io/sql.py]
1 import numpy as np
2 import pandas as pd
3
4 from ... import delayed
5 from ...compatibility import string_types
6 from .io import from_delayed, from_pandas
7
8
9 def read_sql_table(
10 table,
11 uri,
12 index_col,
13 divisions=None,
14 npartitions=None,
15 limits=None,
16 columns=None,
17 bytes_per_chunk=256 * 2 ** 20,
18 head_rows=5,
19 schema=None,
20 meta=None,
21 engine_kwargs=None,
22 **kwargs
23 ):
24 """
25 Create dataframe from an SQL table.
26
27 If neither divisions or npartitions is given, the memory footprint of the
28 first few rows will be determined, and partitions of size ~256MB will
29 be used.
30
31 Parameters
32 ----------
33 table : string or sqlalchemy expression
34 Select columns from here.
35 uri : string
36 Full sqlalchemy URI for the database connection
37 index_col : string
38 Column which becomes the index, and defines the partitioning. Should
39 be an indexed column in the SQL server, and any orderable type. If the
40 type is number or time, then partition boundaries can be inferred from
41 npartitions or bytes_per_chunk; otherwise must supply explicit
42 ``divisions=``.
43 ``index_col`` could be a function to return a value, e.g.,
44 ``sql.func.abs(sql.column('value')).label('abs(value)')``.
45 Labeling columns created by functions or arithmetic operations is
46 required.
47 divisions: sequence
48 Values of the index column to split the table by. If given, this will
49 override npartitions and bytes_per_chunk. The divisions are the value
50 boundaries of the index column used to define the partitions. For
51 example, ``divisions=list('acegikmoqsuwz')`` could be used to partition
52 a string column lexicographically into 12 partitions, with the implicit
53 assumption that each partition contains similar numbers of records.
54 npartitions : int
55 Number of partitions, if divisions is not given. Will split the values
56 of the index column linearly between limits, if given, or the column
57 max/min. The index column must be numeric or time for this to work
58 limits: 2-tuple or None
59 Manually give upper and lower range of values for use with npartitions;
60 if None, first fetches max/min from the DB. Upper limit, if
61 given, is inclusive.
62 columns : list of strings or None
63 Which columns to select; if None, gets all; can include sqlalchemy
64 functions, e.g.,
65 ``sql.func.abs(sql.column('value')).label('abs(value)')``.
66 Labeling columns created by functions or arithmetic operations is
67 recommended.
68 bytes_per_chunk : int
69 If both divisions and npartitions is None, this is the target size of
70 each partition, in bytes
71 head_rows : int
72 How many rows to load for inferring the data-types, unless passing meta
73 meta : empty DataFrame or None
74 If provided, do not attempt to infer dtypes, but use these, coercing
75 all chunks on load
76 schema : str or None
77 If using a table name, pass this to sqlalchemy to select which DB
78 schema to use within the URI connection
79 engine_kwargs : dict or None
80 Specific db engine parameters for sqlalchemy
81 kwargs : dict
82 Additional parameters to pass to `pd.read_sql()`
83
84 Returns
85 -------
86 dask.dataframe
87
88 Examples
89 --------
90 >>> df = dd.read_sql_table('accounts', 'sqlite:///path/to/bank.db',
91 ... npartitions=10, index_col='id') # doctest: +SKIP
92 """
93 import sqlalchemy as sa
94 from sqlalchemy import sql
95 from sqlalchemy.sql import elements
96
97 if index_col is None:
98 raise ValueError("Must specify index column to partition on")
99 engine_kwargs = {} if engine_kwargs is None else engine_kwargs
100 engine = sa.create_engine(uri, **engine_kwargs)
101 m = sa.MetaData()
102 if isinstance(table, string_types):
103 table = sa.Table(table, m, autoload=True, autoload_with=engine, schema=schema)
104
105 index = (
106 table.columns[index_col] if isinstance(index_col, string_types) else index_col
107 )
108 if not isinstance(index_col, string_types + (elements.Label,)):
109 raise ValueError(
110 "Use label when passing an SQLAlchemy instance as the index (%s)" % index
111 )
112 if divisions and npartitions:
113 raise TypeError("Must supply either divisions or npartitions, not both")
114
115 columns = (
116 [(table.columns[c] if isinstance(c, string_types) else c) for c in columns]
117 if columns
118 else list(table.columns)
119 )
120 if index_col not in columns:
121 columns.append(
122 table.columns[index_col]
123 if isinstance(index_col, string_types)
124 else index_col
125 )
126
127 if isinstance(index_col, string_types):
128 kwargs["index_col"] = index_col
129 else:
130 # function names get pandas auto-named
131 kwargs["index_col"] = index_col.name
132
133 if meta is None:
134 # derive metadata from the first few rows
135 q = sql.select(columns).limit(head_rows).select_from(table)
136 head = pd.read_sql(q, engine, **kwargs)
137
138 if head.empty:
139 # no results at all
140 name = table.name
141 schema = table.schema
142 head = pd.read_sql_table(name, uri, schema=schema, index_col=index_col)
143 return from_pandas(head, npartitions=1)
144
145 bytes_per_row = (head.memory_usage(deep=True, index=True)).sum() / head_rows
146 meta = head[:0]
147 else:
148 if divisions is None and npartitions is None:
149 raise ValueError(
150 "Must provide divisions or npartitions when using explicit meta."
151 )
152
153 if divisions is None:
154 if limits is None:
155 # calculate max and min for given index
156 q = sql.select([sql.func.max(index), sql.func.min(index)]).select_from(
157 table
158 )
159 minmax = pd.read_sql(q, engine)
160 maxi, mini = minmax.iloc[0]
161 dtype = minmax.dtypes["max_1"]
162 else:
163 mini, maxi = limits
164 dtype = pd.Series(limits).dtype
165
166 if npartitions is None:
167 q = sql.select([sql.func.count(index)]).select_from(table)
168 count = pd.read_sql(q, engine)["count_1"][0]
169 npartitions = int(round(count * bytes_per_row / bytes_per_chunk)) or 1
170 if dtype.kind == "M":
171 divisions = pd.date_range(
172 start=mini,
173 end=maxi,
174 freq="%iS" % ((maxi - mini).total_seconds() / npartitions),
175 ).tolist()
176 divisions[0] = mini
177 divisions[-1] = maxi
178 elif dtype.kind in ["i", "u", "f"]:
179 divisions = np.linspace(mini, maxi, npartitions + 1).tolist()
180 else:
181 raise TypeError(
182 'Provided index column is of type "{}". If divisions is not provided the '
183 "index column type must be numeric or datetime.".format(dtype)
184 )
185
186 parts = []
187 lowers, uppers = divisions[:-1], divisions[1:]
188 for i, (lower, upper) in enumerate(zip(lowers, uppers)):
189 cond = index <= upper if i == len(lowers) - 1 else index < upper
190 q = sql.select(columns).where(sql.and_(index >= lower, cond)).select_from(table)
191 parts.append(delayed(_read_sql_chunk)(q, uri, meta, **kwargs))
192
193 return from_delayed(parts, meta, divisions=divisions)
194
195
196 def _read_sql_chunk(q, uri, meta, **kwargs):
197 df = pd.read_sql(q, uri, **kwargs)
198 if df.empty:
199 return meta
200 else:
201 return df.astype(meta.dtypes.to_dict(), copy=False)
202
[end of dask/dataframe/io/sql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dask/dataframe/io/sql.py b/dask/dataframe/io/sql.py
--- a/dask/dataframe/io/sql.py
+++ b/dask/dataframe/io/sql.py
@@ -188,13 +188,21 @@
for i, (lower, upper) in enumerate(zip(lowers, uppers)):
cond = index <= upper if i == len(lowers) - 1 else index < upper
q = sql.select(columns).where(sql.and_(index >= lower, cond)).select_from(table)
- parts.append(delayed(_read_sql_chunk)(q, uri, meta, **kwargs))
+ parts.append(
+ delayed(_read_sql_chunk)(
+ q, uri, meta, engine_kwargs=engine_kwargs, **kwargs
+ )
+ )
return from_delayed(parts, meta, divisions=divisions)
-def _read_sql_chunk(q, uri, meta, **kwargs):
- df = pd.read_sql(q, uri, **kwargs)
+def _read_sql_chunk(q, uri, meta, engine_kwargs=None, **kwargs):
+ import sqlalchemy as sa
+
+ engine_kwargs = engine_kwargs or {}
+ conn = sa.create_engine(uri, **engine_kwargs)
+ df = pd.read_sql(q, conn, **kwargs)
if df.empty:
return meta
else:
| {"golden_diff": "diff --git a/dask/dataframe/io/sql.py b/dask/dataframe/io/sql.py\n--- a/dask/dataframe/io/sql.py\n+++ b/dask/dataframe/io/sql.py\n@@ -188,13 +188,21 @@\n for i, (lower, upper) in enumerate(zip(lowers, uppers)):\n cond = index <= upper if i == len(lowers) - 1 else index < upper\n q = sql.select(columns).where(sql.and_(index >= lower, cond)).select_from(table)\n- parts.append(delayed(_read_sql_chunk)(q, uri, meta, **kwargs))\n+ parts.append(\n+ delayed(_read_sql_chunk)(\n+ q, uri, meta, engine_kwargs=engine_kwargs, **kwargs\n+ )\n+ )\n \n return from_delayed(parts, meta, divisions=divisions)\n \n \n-def _read_sql_chunk(q, uri, meta, **kwargs):\n- df = pd.read_sql(q, uri, **kwargs)\n+def _read_sql_chunk(q, uri, meta, engine_kwargs=None, **kwargs):\n+ import sqlalchemy as sa\n+\n+ engine_kwargs = engine_kwargs or {}\n+ conn = sa.create_engine(uri, **engine_kwargs)\n+ df = pd.read_sql(q, conn, **kwargs)\n if df.empty:\n return meta\n else:\n", "issue": "read_sql_table creates delayed objects without provided sqlalchemy connection arguments\nI am trying to load a table from presto using Dask `read_sql_table`. The presto server runs on `https`. As per the [pyhive docs](https://github.com/dropbox/PyHive#passing-session-configuration) if server is on SSL you have to pass `{'engine_kwargs': {'connect_args': {'protocol': 'https'}}}`. \r\n\r\nCode:\r\n```python\r\nimport dask.dataframe as dd\r\ndf = read_sql_table('test_extension', 'presto://server_url/hive',\r\n index_col='col_name', divisions=[0, 30000, 60000], head_rows=100,\r\n engine_kwargs={'connect_args': {'protocol': 'https'}})\r\n# till here everything is fine it extracts metadata , creates correct number of partitions.\r\ndf.head()\r\n```\r\n\r\n`df.head` fails saying an http request was made to an https endpoint. I checked this using a debugger as well and the request is made with http protocol. \r\n\r\nI suspect the problem is at https://github.com/dask/dask/blob/6fd2eb57158dbf260e29311e84f6a389904f0ef2/dask/dataframe/io/sql.py#L191, this does not pass engine kwargs to https://github.com/dask/dask/blob/6fd2eb57158dbf260e29311e84f6a389904f0ef2/dask/dataframe/io/sql.py#L196\r\n\r\nIf the server is on http and I change protocol param to http then everything works.\n", "before_files": [{"content": "import numpy as np\nimport pandas as pd\n\nfrom ... import delayed\nfrom ...compatibility import string_types\nfrom .io import from_delayed, from_pandas\n\n\ndef read_sql_table(\n table,\n uri,\n index_col,\n divisions=None,\n npartitions=None,\n limits=None,\n columns=None,\n bytes_per_chunk=256 * 2 ** 20,\n head_rows=5,\n schema=None,\n meta=None,\n engine_kwargs=None,\n **kwargs\n):\n \"\"\"\n Create dataframe from an SQL table.\n\n If neither divisions or npartitions is given, the memory footprint of the\n first few rows will be determined, and partitions of size ~256MB will\n be used.\n\n Parameters\n ----------\n table : string or sqlalchemy expression\n Select columns from here.\n uri : string\n Full sqlalchemy URI for the database connection\n index_col : string\n Column which becomes the index, and defines the partitioning. Should\n be a indexed column in the SQL server, and any orderable type. 
If the\n type is number or time, then partition boundaries can be inferred from\n npartitions or bytes_per_chunk; otherwide must supply explicit\n ``divisions=``.\n ``index_col`` could be a function to return a value, e.g.,\n ``sql.func.abs(sql.column('value')).label('abs(value)')``.\n Labeling columns created by functions or arithmetic operations is\n required.\n divisions: sequence\n Values of the index column to split the table by. If given, this will\n override npartitions and bytes_per_chunk. The divisions are the value\n boundaries of the index column used to define the partitions. For\n example, ``divisions=list('acegikmoqsuwz')`` could be used to partition\n a string column lexographically into 12 partitions, with the implicit\n assumption that each partition contains similar numbers of records.\n npartitions : int\n Number of partitions, if divisions is not given. Will split the values\n of the index column linearly between limits, if given, or the column\n max/min. The index column must be numeric or time for this to work\n limits: 2-tuple or None\n Manually give upper and lower range of values for use with npartitions;\n if None, first fetches max/min from the DB. Upper limit, if\n given, is inclusive.\n columns : list of strings or None\n Which columns to select; if None, gets all; can include sqlalchemy\n functions, e.g.,\n ``sql.func.abs(sql.column('value')).label('abs(value)')``.\n Labeling columns created by functions or arithmetic operations is\n recommended.\n bytes_per_chunk : int\n If both divisions and npartitions is None, this is the target size of\n each partition, in bytes\n head_rows : int\n How many rows to load for inferring the data-types, unless passing meta\n meta : empty DataFrame or None\n If provided, do not attempt to infer dtypes, but use these, coercing\n all chunks on load\n schema : str or None\n If using a table name, pass this to sqlalchemy to select which DB\n schema to use within the URI connection\n engine_kwargs : dict or None\n Specific db engine parameters for sqlalchemy\n kwargs : dict\n Additional parameters to pass to `pd.read_sql()`\n\n Returns\n -------\n dask.dataframe\n\n Examples\n --------\n >>> df = dd.read_sql_table('accounts', 'sqlite:///path/to/bank.db',\n ... 
npartitions=10, index_col='id') # doctest: +SKIP\n \"\"\"\n import sqlalchemy as sa\n from sqlalchemy import sql\n from sqlalchemy.sql import elements\n\n if index_col is None:\n raise ValueError(\"Must specify index column to partition on\")\n engine_kwargs = {} if engine_kwargs is None else engine_kwargs\n engine = sa.create_engine(uri, **engine_kwargs)\n m = sa.MetaData()\n if isinstance(table, string_types):\n table = sa.Table(table, m, autoload=True, autoload_with=engine, schema=schema)\n\n index = (\n table.columns[index_col] if isinstance(index_col, string_types) else index_col\n )\n if not isinstance(index_col, string_types + (elements.Label,)):\n raise ValueError(\n \"Use label when passing an SQLAlchemy instance as the index (%s)\" % index\n )\n if divisions and npartitions:\n raise TypeError(\"Must supply either divisions or npartitions, not both\")\n\n columns = (\n [(table.columns[c] if isinstance(c, string_types) else c) for c in columns]\n if columns\n else list(table.columns)\n )\n if index_col not in columns:\n columns.append(\n table.columns[index_col]\n if isinstance(index_col, string_types)\n else index_col\n )\n\n if isinstance(index_col, string_types):\n kwargs[\"index_col\"] = index_col\n else:\n # function names get pandas auto-named\n kwargs[\"index_col\"] = index_col.name\n\n if meta is None:\n # derrive metadata from first few rows\n q = sql.select(columns).limit(head_rows).select_from(table)\n head = pd.read_sql(q, engine, **kwargs)\n\n if head.empty:\n # no results at all\n name = table.name\n schema = table.schema\n head = pd.read_sql_table(name, uri, schema=schema, index_col=index_col)\n return from_pandas(head, npartitions=1)\n\n bytes_per_row = (head.memory_usage(deep=True, index=True)).sum() / head_rows\n meta = head[:0]\n else:\n if divisions is None and npartitions is None:\n raise ValueError(\n \"Must provide divisions or npartitions when using explicit meta.\"\n )\n\n if divisions is None:\n if limits is None:\n # calculate max and min for given index\n q = sql.select([sql.func.max(index), sql.func.min(index)]).select_from(\n table\n )\n minmax = pd.read_sql(q, engine)\n maxi, mini = minmax.iloc[0]\n dtype = minmax.dtypes[\"max_1\"]\n else:\n mini, maxi = limits\n dtype = pd.Series(limits).dtype\n\n if npartitions is None:\n q = sql.select([sql.func.count(index)]).select_from(table)\n count = pd.read_sql(q, engine)[\"count_1\"][0]\n npartitions = int(round(count * bytes_per_row / bytes_per_chunk)) or 1\n if dtype.kind == \"M\":\n divisions = pd.date_range(\n start=mini,\n end=maxi,\n freq=\"%iS\" % ((maxi - mini).total_seconds() / npartitions),\n ).tolist()\n divisions[0] = mini\n divisions[-1] = maxi\n elif dtype.kind in [\"i\", \"u\", \"f\"]:\n divisions = np.linspace(mini, maxi, npartitions + 1).tolist()\n else:\n raise TypeError(\n 'Provided index column is of type \"{}\". 
If divisions is not provided the '\n \"index column type must be numeric or datetime.\".format(dtype)\n )\n\n parts = []\n lowers, uppers = divisions[:-1], divisions[1:]\n for i, (lower, upper) in enumerate(zip(lowers, uppers)):\n cond = index <= upper if i == len(lowers) - 1 else index < upper\n q = sql.select(columns).where(sql.and_(index >= lower, cond)).select_from(table)\n parts.append(delayed(_read_sql_chunk)(q, uri, meta, **kwargs))\n\n return from_delayed(parts, meta, divisions=divisions)\n\n\ndef _read_sql_chunk(q, uri, meta, **kwargs):\n df = pd.read_sql(q, uri, **kwargs)\n if df.empty:\n return meta\n else:\n return df.astype(meta.dtypes.to_dict(), copy=False)\n", "path": "dask/dataframe/io/sql.py"}]} | 3,169 | 291 |
gh_patches_debug_58219 | rasdani/github-patches | git_diff | opsdroid__opsdroid-169 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
arrow dep missing
Fresh install of Ubuntu 16.04
```
$ sudo apt update && sudo apt install python3-pip
...
$ pip3 install opsdroid
...
$ opsdroid
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/opsdroid", line 7, in <module>
from opsdroid.__main__ import main
File "/home/ubuntu/.local/lib/python3.5/site-packages/opsdroid/__main__.py", line 8, in <module>
from opsdroid.core import OpsDroid
File "/home/ubuntu/.local/lib/python3.5/site-packages/opsdroid/core.py", line 15, in <module>
from opsdroid.parsers.crontab import parse_crontab
File "/home/ubuntu/.local/lib/python3.5/site-packages/opsdroid/parsers/crontab.py", line 6, in <module>
import arrow
ImportError: No module named 'arrow'
```
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2 import os
3 from setuptools import setup, find_packages
4 from opsdroid.const import __version__
5
6 PACKAGE_NAME = 'opsdroid'
7 HERE = os.path.abspath(os.path.dirname(__file__))
8
9 PACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',
10 'modules.*', 'docs', 'docs.*'])
11
12 REQUIRES = [
13 'pyyaml>=3.11,<4',
14 'aiohttp>=1.2.0,<2',
15 'pycron>=0.40',
16 ]
17
18 setup(
19 name=PACKAGE_NAME,
20 version=__version__,
21 license='GNU GENERAL PUBLIC LICENSE V3',
22 url='',
23 download_url='',
24 author='Jacob Tomlinson',
25 author_email='[email protected]',
26 description='An open source chat-ops bot.',
27 packages=PACKAGES,
28 include_package_data=True,
29 zip_safe=False,
30 platforms='any',
31 install_requires=REQUIRES,
32 test_suite='tests',
33 keywords=['bot', 'chatops'],
34 entry_points={
35 'console_scripts': [
36 'opsdroid = opsdroid.__main__:main'
37 ]
38 },
39 )
40
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,9 +10,10 @@
'modules.*', 'docs', 'docs.*'])
REQUIRES = [
- 'pyyaml>=3.11,<4',
- 'aiohttp>=1.2.0,<2',
- 'pycron>=0.40',
+ 'arrow==0.10.0',
+ 'aiohttp==2.1.0',
+ 'pycron==0.40',
+ 'pyyaml==3.12'
]
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,9 +10,10 @@\n 'modules.*', 'docs', 'docs.*'])\n \n REQUIRES = [\n- 'pyyaml>=3.11,<4',\n- 'aiohttp>=1.2.0,<2',\n- 'pycron>=0.40',\n+ 'arrow==0.10.0',\n+ 'aiohttp==2.1.0',\n+ 'pycron==0.40',\n+ 'pyyaml==3.12'\n ]\n \n setup(\n", "issue": "arrow dep missing\nFresh install of ubuntu 16.04\r\n\r\n```\r\n$ sudo apt update && sudo apt install python3-pip\r\n...\r\n$ pip3 install opsdroid\r\n...\r\n$ opsdroid\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.local/bin/opsdroid\", line 7, in <module>\r\n from opsdroid.__main__ import main\r\n File \"/home/ubuntu/.local/lib/python3.5/site-packages/opsdroid/__main__.py\", line 8, in <module>\r\n from opsdroid.core import OpsDroid\r\n File \"/home/ubuntu/.local/lib/python3.5/site-packages/opsdroid/core.py\", line 15, in <module>\r\n from opsdroid.parsers.crontab import parse_crontab\r\n File \"/home/ubuntu/.local/lib/python3.5/site-packages/opsdroid/parsers/crontab.py\", line 6, in <module>\r\n import arrow\r\nImportError: No module named 'arrow'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport os\nfrom setuptools import setup, find_packages\nfrom opsdroid.const import __version__\n\nPACKAGE_NAME = 'opsdroid'\nHERE = os.path.abspath(os.path.dirname(__file__))\n\nPACKAGES = find_packages(exclude=['tests', 'tests.*', 'modules',\n 'modules.*', 'docs', 'docs.*'])\n\nREQUIRES = [\n 'pyyaml>=3.11,<4',\n 'aiohttp>=1.2.0,<2',\n 'pycron>=0.40',\n]\n\nsetup(\n name=PACKAGE_NAME,\n version=__version__,\n license='GNU GENERAL PUBLIC LICENSE V3',\n url='',\n download_url='',\n author='Jacob Tomlinson',\n author_email='[email protected]',\n description='An open source chat-ops bot.',\n packages=PACKAGES,\n include_package_data=True,\n zip_safe=False,\n platforms='any',\n install_requires=REQUIRES,\n test_suite='tests',\n keywords=['bot', 'chatops'],\n entry_points={\n 'console_scripts': [\n 'opsdroid = opsdroid.__main__:main'\n ]\n },\n)\n", "path": "setup.py"}]} | 1,075 | 140 |
gh_patches_debug_22644 | rasdani/github-patches | git_diff | optuna__optuna-3085 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ZeroDivisionError` when plotting parameter importance
Plotting parameter importance of a multi-objective study causes a `ZeroDivisionError: Weights sum to zero, can't be normalized`. The error happens with a two-objective study, which otherwise runs nicely and produces a Pareto plot without any problem.
## Expected behavior
Generate a plot of parameter importance based on a multi-objective study, as described in the documentation.
## Environment
- Optuna version: 2.6.0
- Python version: 3.8
- OS: Ubuntu 18.04
## Error messages, stack traces, or logs
```
ERROR Traceback (most recent call last):
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/visualization/_param_importances.py", line 126, in plot_param_importances
importances = optuna.importance.get_param_importances(
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/__init__.py", line 91, in get_param_importances
return evaluator.evaluate(study, params=params, target=target)
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_evaluator.py", line 121, in evaluate
evaluator.fit(
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py", line 74, in fit
self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py", line 74, in <listcomp>
self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py", line 23, in __init__
statistics = self._precompute_statistics()
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py", line 185, in _precompute_statistics
value = numpy.average(child_values, weights=child_weights)
File "<__array_function__ internals>", line 5, in average
File "/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/numpy/lib/function_base.py", line 409, in average
raise ZeroDivisionError(
ZeroDivisionError: Weights sum to zero, can't be normalized
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-10-380c9d9887b3> in <module>
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/... in create_param_importance_plot(self)
168
169 def create_param_importance_plot(self):
--> 170 param_importance_fig = optuna.visualization.plot_param_importances(
171 self.study, target_name=Q_METRIC_NAME, target=lambda t: t.values[0]
172 )
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/visualization/_param_importances.py in plot_param_importances(study, evaluator, params, target, target_name)
124 return go.Figure(data=[], layout=layout)
125
--> 126 importances = optuna.importance.get_param_importances(
127 study, evaluator=evaluator, params=params, target=target
128 )
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/__init__.py in get_param_importances(study, evaluator, params, target)
89 raise TypeError("Evaluator must be a subclass of BaseImportanceEvaluator.")
90
---> 91 return evaluator.evaluate(study, params=params, target=target)
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_evaluator.py in evaluate(self, study, params, target)
119
120 evaluator = self._evaluator
--> 121 evaluator.fit(
122 X=trans_params,
123 y=trans_values,
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py in fit(self, X, y, search_spaces, column_to_encoded_columns)
72 self._forest.fit(X, y)
73
---> 74 self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]
75 self._column_to_encoded_columns = column_to_encoded_columns
76 self._variances = {}
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py in <listcomp>(.0)
72 self._forest.fit(X, y)
73
---> 74 self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]
75 self._column_to_encoded_columns = column_to_encoded_columns
76 self._variances = {}
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py in __init__(self, tree, search_spaces)
21 self._search_spaces = search_spaces
22
---> 23 statistics = self._precompute_statistics()
24 split_midpoints, split_sizes = self._precompute_split_midpoints_and_sizes()
25 subtree_active_features = self._precompute_subtree_active_features()
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py in _precompute_statistics(self)
183 child_values.append(statistics[child_node_index, 0])
184 child_weights.append(statistics[child_node_index, 1])
--> 185 value = numpy.average(child_values, weights=child_weights)
186 weight = numpy.sum(child_weights)
187 statistics[node_index] = [value, weight]
<__array_function__ internals> in average(*args, **kwargs)
/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/numpy/lib/function_base.py in average(a, axis, weights, returned)
407 scl = wgt.sum(axis=axis, dtype=result_dtype)
408 if np.any(scl == 0.0):
--> 409 raise ZeroDivisionError(
410 "Weights sum to zero, can't be normalized")
411
ZeroDivisionError: Weights sum to zero, can't be normalized
```
## Steps to reproduce
1. Configure and run a default study with two objectives (maximize, maximize), with the objective function returning a rounded float for one objective and the result of an integer division for the second.
2. Generate a parameter importance plot with:
```python
fig = optuna.visualization.plot_param_importances(study, target_name=METRIC_NAME, target=lambda t: t.values[0])
```
## Additional context (optional)
Apart from the param importances not being generated, the study runs smoothly and I can generate a Pareto front plot without any problem.
</issue>
<code>
[start of optuna/importance/_fanova/_evaluator.py]
1 from collections import OrderedDict
2 from typing import Callable
3 from typing import Dict
4 from typing import List
5 from typing import Optional
6
7 import numpy
8
9 from optuna._transform import _SearchSpaceTransform
10 from optuna.importance._base import _get_distributions
11 from optuna.importance._base import BaseImportanceEvaluator
12 from optuna.importance._fanova._fanova import _Fanova
13 from optuna.study import Study
14 from optuna.trial import FrozenTrial
15 from optuna.trial import TrialState
16
17
18 class FanovaImportanceEvaluator(BaseImportanceEvaluator):
19 """fANOVA importance evaluator.
20
21 Implements the fANOVA hyperparameter importance evaluation algorithm in
22 `An Efficient Approach for Assessing Hyperparameter Importance
23 <http://proceedings.mlr.press/v32/hutter14.html>`_.
24
25 Given a study, fANOVA fits a random forest regression model that predicts the objective value
26 given a parameter configuration. The more accurate this model is, the more reliable the
27 importances assessed by this class are.
28
29 .. note::
30
31 Requires the `sklearn <https://github.com/scikit-learn/scikit-learn>`_ Python package.
32
33 .. note::
34
35 Pairwise and higher order importances are not supported through this class. They can be
36 computed using :class:`~optuna.importance._fanova._fanova._Fanova` directly, but this is not
37 recommended, as interfaces may change without prior notice.
38
39 .. note::
40
41 The performance of fANOVA depends on the prediction performance of the underlying
42 random forest model. In order to obtain high prediction performance, it is necessary to
43 cover a wide range of the hyperparameter search space. It is recommended to use an
44 exploration-oriented sampler such as :class:`~optuna.samplers.RandomSampler`.
45
46 .. note::
47
48 For how to cite the original work, please refer to
49 https://automl.github.io/fanova/cite.html.
50
51 Args:
52 n_trees:
53 The number of trees in the forest.
54 max_depth:
55 The maximum depth of the trees in the forest.
56 seed:
57 Controls the randomness of the forest. For deterministic behavior, specify a value
58 other than :obj:`None`.
59
60 """
61
62 def __init__(
63 self, *, n_trees: int = 64, max_depth: int = 64, seed: Optional[int] = None
64 ) -> None:
65 self._evaluator = _Fanova(
66 n_trees=n_trees,
67 max_depth=max_depth,
68 min_samples_split=2,
69 min_samples_leaf=1,
70 seed=seed,
71 )
72
73 def evaluate(
74 self,
75 study: Study,
76 params: Optional[List[str]] = None,
77 *,
78 target: Optional[Callable[[FrozenTrial], float]] = None,
79 ) -> Dict[str, float]:
80 if target is None and study._is_multi_objective():
81 raise ValueError(
82 "If the `study` is being used for multi-objective optimization, "
83 "please specify the `target`. For example, use "
84 "`target=lambda t: t.values[0]` for the first objective value."
85 )
86
87 distributions = _get_distributions(study, params)
88 if len(distributions) == 0:
89 return OrderedDict()
90
91 trials = []
92 for trial in study.trials:
93 if trial.state != TrialState.COMPLETE:
94 continue
95 if any(name not in trial.params for name in distributions.keys()):
96 continue
97 trials.append(trial)
98
99 trans = _SearchSpaceTransform(distributions, transform_log=False, transform_step=False)
100
101 n_trials = len(trials)
102 trans_params = numpy.empty((n_trials, trans.bounds.shape[0]), dtype=numpy.float64)
103 trans_values = numpy.empty(n_trials, dtype=numpy.float64)
104
105 for trial_idx, trial in enumerate(trials):
106 trans_params[trial_idx] = trans.transform(trial.params)
107 trans_values[trial_idx] = trial.value if target is None else target(trial)
108
109 trans_bounds = trans.bounds
110 column_to_encoded_columns = trans.column_to_encoded_columns
111
112 if trans_params.size == 0: # `params` were given but as an empty list.
113 return OrderedDict()
114
115 # Many (deep) copies of the search spaces are required during the tree traversal and using
116 # Optuna distributions will create a bottleneck.
117 # Therefore, search spaces (parameter distributions) are represented by a single
118 # `numpy.ndarray`, coupled with a list of flags that indicate whether they are categorical
119 # or not.
120
121 evaluator = self._evaluator
122 evaluator.fit(
123 X=trans_params,
124 y=trans_values,
125 search_spaces=trans_bounds,
126 column_to_encoded_columns=column_to_encoded_columns,
127 )
128
129 importances = {}
130 for i, name in enumerate(distributions.keys()):
131 importance, _ = evaluator.get_importance((i,))
132 importances[name] = importance
133
134 total_importance = sum(importances.values())
135 for name in importances:
136 importances[name] /= total_importance
137
138 sorted_importances = OrderedDict(
139 reversed(
140 sorted(importances.items(), key=lambda name_and_importance: name_and_importance[1])
141 )
142 )
143 return sorted_importances
144
[end of optuna/importance/_fanova/_evaluator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/optuna/importance/_fanova/_evaluator.py b/optuna/importance/_fanova/_evaluator.py
--- a/optuna/importance/_fanova/_evaluator.py
+++ b/optuna/importance/_fanova/_evaluator.py
@@ -88,6 +88,12 @@
if len(distributions) == 0:
return OrderedDict()
+ # fANOVA does not support parameter distributions with a single value.
+ # However, there is no reason to calculate parameter importance in such case anyway,
+ # since it will always be 0 as the parameter is constant in the objective function.
+ zero_importances = {name: 0.0 for name, dist in distributions.items() if dist.single()}
+ distributions = {name: dist for name, dist in distributions.items() if not dist.single()}
+
trials = []
for trial in study.trials:
if trial.state != TrialState.COMPLETE:
@@ -131,6 +137,7 @@
importance, _ = evaluator.get_importance((i,))
importances[name] = importance
+ importances = {**importances, **zero_importances}
total_importance = sum(importances.values())
for name in importances:
importances[name] /= total_importance
| {"golden_diff": "diff --git a/optuna/importance/_fanova/_evaluator.py b/optuna/importance/_fanova/_evaluator.py\n--- a/optuna/importance/_fanova/_evaluator.py\n+++ b/optuna/importance/_fanova/_evaluator.py\n@@ -88,6 +88,12 @@\n if len(distributions) == 0:\n return OrderedDict()\n \n+ # fANOVA does not support parameter distributions with a single value.\n+ # However, there is no reason to calculate parameter importance in such case anyway,\n+ # since it will always be 0 as the parameter is constant in the objective function.\n+ zero_importances = {name: 0.0 for name, dist in distributions.items() if dist.single()}\n+ distributions = {name: dist for name, dist in distributions.items() if not dist.single()}\n+\n trials = []\n for trial in study.trials:\n if trial.state != TrialState.COMPLETE:\n@@ -131,6 +137,7 @@\n importance, _ = evaluator.get_importance((i,))\n importances[name] = importance\n \n+ importances = {**importances, **zero_importances}\n total_importance = sum(importances.values())\n for name in importances:\n importances[name] /= total_importance\n", "issue": "`ZeroDivisionError` when plotting parameter importance\nPlotting parameter importance of a multi-objective study causes a `ZeroDivisionError: Weights sum to zero, can't be normalized.` The error happens with a with two-objective study, which otherwise runs nicely and produces a Pareto plot without any problem.\r\n\r\n## Expected behavior\r\n\r\nGenerate a plot of parameter importance based on a multi-objective study, as described in the documentation.\r\n\r\n## Environment\r\n\r\n- Optuna version: 2.6.0\r\n- Python version: 3.8\r\n- OS: Ubuntu 18.04\r\n\r\n## Error messages, stack traces, or logs\r\n\r\n```\r\nERROR Traceback (most recent call last):\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/visualization/_param_importances.py\", line 126, in plot_param_importances\r\n importances = optuna.importance.get_param_importances(\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/__init__.py\", line 91, in get_param_importances\r\n return evaluator.evaluate(study, params=params, target=target)\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_evaluator.py\", line 121, in evaluate\r\n evaluator.fit(\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py\", line 74, in fit\r\n self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py\", line 74, in <listcomp>\r\n self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py\", line 23, in __init__\r\n statistics = self._precompute_statistics()\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py\", line 185, in _precompute_statistics\r\n value = numpy.average(child_values, weights=child_weights)\r\n File \"<__array_function__ internals>\", line 5, in average\r\n File \"/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/numpy/lib/function_base.py\", line 409, in average\r\n raise ZeroDivisionError(\r\nZeroDivisionError: Weights sum to zero, can't be normalized\r\n\r\n---------------------------------------------------------------------------\r\nZeroDivisionError Traceback (most recent call 
last)\r\n<ipython-input-10-380c9d9887b3> in <module>\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/... in create_param_importance_plot(self)\r\n 168 \r\n 169 def create_param_importance_plot(self):\r\n--> 170 param_importance_fig = optuna.visualization.plot_param_importances(\r\n 171 self.study, target_name=Q_METRIC_NAME, target=lambda t: t.values[0]\r\n 172 )\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/visualization/_param_importances.py in plot_param_importances(study, evaluator, params, target, target_name)\r\n 124 return go.Figure(data=[], layout=layout)\r\n 125 \r\n--> 126 importances = optuna.importance.get_param_importances(\r\n 127 study, evaluator=evaluator, params=params, target=target\r\n 128 )\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/__init__.py in get_param_importances(study, evaluator, params, target)\r\n 89 raise TypeError(\"Evaluator must be a subclass of BaseImportanceEvaluator.\")\r\n 90 \r\n---> 91 return evaluator.evaluate(study, params=params, target=target)\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_evaluator.py in evaluate(self, study, params, target)\r\n 119 \r\n 120 evaluator = self._evaluator\r\n--> 121 evaluator.fit(\r\n 122 X=trans_params,\r\n 123 y=trans_values,\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py in fit(self, X, y, search_spaces, column_to_encoded_columns)\r\n 72 self._forest.fit(X, y)\r\n 73 \r\n---> 74 self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]\r\n 75 self._column_to_encoded_columns = column_to_encoded_columns\r\n 76 self._variances = {}\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_fanova.py in <listcomp>(.0)\r\n 72 self._forest.fit(X, y)\r\n 73 \r\n---> 74 self._trees = [_FanovaTree(e.tree_, search_spaces) for e in self._forest.estimators_]\r\n 75 self._column_to_encoded_columns = column_to_encoded_columns\r\n 76 self._variances = {}\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py in __init__(self, tree, search_spaces)\r\n 21 self._search_spaces = search_spaces\r\n 22 \r\n---> 23 statistics = self._precompute_statistics()\r\n 24 split_midpoints, split_sizes = self._precompute_split_midpoints_and_sizes()\r\n 25 subtree_active_features = self._precompute_subtree_active_features()\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/optuna/importance/_fanova/_tree.py in _precompute_statistics(self)\r\n 183 child_values.append(statistics[child_node_index, 0])\r\n 184 child_weights.append(statistics[child_node_index, 1])\r\n--> 185 value = numpy.average(child_values, weights=child_weights)\r\n 186 weight = numpy.sum(child_weights)\r\n 187 statistics[node_index] = [value, weight]\r\n\r\n<__array_function__ internals> in average(*args, **kwargs)\r\n\r\n/opt/mt/anaconda3/envs/develop/lib/python3.8/site-packages/numpy/lib/function_base.py in average(a, axis, weights, returned)\r\n 407 scl = wgt.sum(axis=axis, dtype=result_dtype)\r\n 408 if np.any(scl == 0.0):\r\n--> 409 raise ZeroDivisionError(\r\n 410 \"Weights sum to zero, can't be normalized\")\r\n 411 \r\n\r\nZeroDivisionError: Weights sum to zero, can't be normalized\r\n```\r\n\r\n## Steps to reproduce\r\n\r\n1. 
Configure and run a default study with two objectives (maximize, maximize), the main function returning the result of a rounded float for one objective and an integer division for the second objective.\r\n2. Generate a parameter importance plot with:\r\n```python\r\nfig = optuna.visualization.plot_param_importances(study, target_name=METRIC_NAME, target=lambda t: t.values[0]) \r\n```\r\n\r\n## Additional context (optional)\r\n\r\nApart from the param importances not being generated, the study runs smoothly and I can generate a Pareto front plot without any problem.\r\n\n", "before_files": [{"content": "from collections import OrderedDict\nfrom typing import Callable\nfrom typing import Dict\nfrom typing import List\nfrom typing import Optional\n\nimport numpy\n\nfrom optuna._transform import _SearchSpaceTransform\nfrom optuna.importance._base import _get_distributions\nfrom optuna.importance._base import BaseImportanceEvaluator\nfrom optuna.importance._fanova._fanova import _Fanova\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\n\n\nclass FanovaImportanceEvaluator(BaseImportanceEvaluator):\n \"\"\"fANOVA importance evaluator.\n\n Implements the fANOVA hyperparameter importance evaluation algorithm in\n `An Efficient Approach for Assessing Hyperparameter Importance\n <http://proceedings.mlr.press/v32/hutter14.html>`_.\n\n Given a study, fANOVA fits a random forest regression model that predicts the objective value\n given a parameter configuration. The more accurate this model is, the more reliable the\n importances assessed by this class are.\n\n .. note::\n\n Requires the `sklearn <https://github.com/scikit-learn/scikit-learn>`_ Python package.\n\n .. note::\n\n Pairwise and higher order importances are not supported through this class. They can be\n computed using :class:`~optuna.importance._fanova._fanova._Fanova` directly but is not\n recommended as interfaces may change without prior notice.\n\n .. note::\n\n The performance of fANOVA depends on the prediction performance of the underlying\n random forest model. In order to obtain high prediction performance, it is necessary to\n cover a wide range of the hyperparameter search space. It is recommended to use an\n exploration-oriented sampler such as :class:`~optuna.samplers.RandomSampler`.\n\n .. note::\n\n For how to cite the original work, please refer to\n https://automl.github.io/fanova/cite.html.\n\n Args:\n n_trees:\n The number of trees in the forest.\n max_depth:\n The maximum depth of the trees in the forest.\n seed:\n Controls the randomness of the forest. For deterministic behavior, specify a value\n other than :obj:`None`.\n\n \"\"\"\n\n def __init__(\n self, *, n_trees: int = 64, max_depth: int = 64, seed: Optional[int] = None\n ) -> None:\n self._evaluator = _Fanova(\n n_trees=n_trees,\n max_depth=max_depth,\n min_samples_split=2,\n min_samples_leaf=1,\n seed=seed,\n )\n\n def evaluate(\n self,\n study: Study,\n params: Optional[List[str]] = None,\n *,\n target: Optional[Callable[[FrozenTrial], float]] = None,\n ) -> Dict[str, float]:\n if target is None and study._is_multi_objective():\n raise ValueError(\n \"If the `study` is being used for multi-objective optimization, \"\n \"please specify the `target`. 
For example, use \"\n \"`target=lambda t: t.values[0]` for the first objective value.\"\n )\n\n distributions = _get_distributions(study, params)\n if len(distributions) == 0:\n return OrderedDict()\n\n trials = []\n for trial in study.trials:\n if trial.state != TrialState.COMPLETE:\n continue\n if any(name not in trial.params for name in distributions.keys()):\n continue\n trials.append(trial)\n\n trans = _SearchSpaceTransform(distributions, transform_log=False, transform_step=False)\n\n n_trials = len(trials)\n trans_params = numpy.empty((n_trials, trans.bounds.shape[0]), dtype=numpy.float64)\n trans_values = numpy.empty(n_trials, dtype=numpy.float64)\n\n for trial_idx, trial in enumerate(trials):\n trans_params[trial_idx] = trans.transform(trial.params)\n trans_values[trial_idx] = trial.value if target is None else target(trial)\n\n trans_bounds = trans.bounds\n column_to_encoded_columns = trans.column_to_encoded_columns\n\n if trans_params.size == 0: # `params` were given but as an empty list.\n return OrderedDict()\n\n # Many (deep) copies of the search spaces are required during the tree traversal and using\n # Optuna distributions will create a bottleneck.\n # Therefore, search spaces (parameter distributions) are represented by a single\n # `numpy.ndarray`, coupled with a list of flags that indicate whether they are categorical\n # or not.\n\n evaluator = self._evaluator\n evaluator.fit(\n X=trans_params,\n y=trans_values,\n search_spaces=trans_bounds,\n column_to_encoded_columns=column_to_encoded_columns,\n )\n\n importances = {}\n for i, name in enumerate(distributions.keys()):\n importance, _ = evaluator.get_importance((i,))\n importances[name] = importance\n\n total_importance = sum(importances.values())\n for name in importances:\n importances[name] /= total_importance\n\n sorted_importances = OrderedDict(\n reversed(\n sorted(importances.items(), key=lambda name_and_importance: name_and_importance[1])\n )\n )\n return sorted_importances\n", "path": "optuna/importance/_fanova/_evaluator.py"}]} | 3,765 | 283 |
gh_patches_debug_31455 | rasdani/github-patches | git_diff | pypa__pipenv-3186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Quote command if parentheses exist
Thank you for contributing to Pipenv!
### The issue
Fixes #3168
### The fix
Quote the command if it contains `()`.
### The checklist
* [x] Associated issue
* [x] A news fragment in the `news/` directory to describe this fix with the extension `.bugfix`, `.feature`, `.behavior`, `.doc`, `.vendor`, or `.trivial` (this will appear in the release changelog). Use semantic line breaks and name the file after the issue number or the PR #.
<!--
### If this is a patch to the `vendor` directory…
Please try to refrain from submitting patches directly to `vendor` or `patched`, but raise your issue to the upstream project instead, and inform Pipenv to upgrade when the upstream project accepts the fix.
A pull request to upgrade vendor packages is strongly discouraged, unless there is a very good reason (e.g. you need to test Pipenv’s integration to a new vendor feature). Pipenv audits and performs vendor upgrades regularly, generally before a new release is about to drop.
If your patch is not or cannot be accepted by upstream, but is essential to Pipenv (make sure to discuss this with maintainers!), please remember to attach a patch file in `tasks/vendoring/patched`, so this divergence from upstream can be recorded and replayed afterwards.
-->
</issue>
<code>
[start of pipenv/cmdparse.py]
1 import re
2 import shlex
3
4 import six
5
6
7 class ScriptEmptyError(ValueError):
8 pass
9
10
11 class Script(object):
12 """Parse a script line (in Pipfile's [scripts] section).
13
14 This always works in POSIX mode, even on Windows.
15 """
16
17 def __init__(self, command, args=None):
18 self._parts = [command]
19 if args:
20 self._parts.extend(args)
21
22 @classmethod
23 def parse(cls, value):
24 if isinstance(value, six.string_types):
25 value = shlex.split(value)
26 if not value:
27 raise ScriptEmptyError(value)
28 return cls(value[0], value[1:])
29
30 def __repr__(self):
31 return "Script({0!r})".format(self._parts)
32
33 @property
34 def command(self):
35 return self._parts[0]
36
37 @property
38 def args(self):
39 return self._parts[1:]
40
41 def extend(self, extra_args):
42 self._parts.extend(extra_args)
43
44 def cmdify(self):
45 """Encode into a cmd-executable string.
46
47 This re-implements CreateProcess's quoting logic to turn a list of
48 arguments into one single string for the shell to interpret.
49
50 * All double quotes are escaped with a backslash.
51 * Existing backslashes before a quote are doubled, so they are all
52 escaped properly.
53 * Backslashes elsewhere are left as-is; cmd will interpret them
54 literally.
55
56 The result is then quoted into a pair of double quotes to be grouped.
57
58 An argument is intentionally not quoted if it does not contain
59 whitespaces. This is done to be compatible with Windows built-in
60 commands that don't work well with quotes, e.g. everything with `echo`,
61 and DOS-style (forward slash) switches.
62
63 The intended use of this function is to pre-process an argument list
64 before passing it into ``subprocess.Popen(..., shell=True)``.
65
66 See also: https://docs.python.org/3/library/subprocess.html#converting-argument-sequence
67 """
68 return " ".join(
69 arg if not next(re.finditer(r'\s', arg), None)
70 else '"{0}"'.format(re.sub(r'(\\*)"', r'\1\1\\"', arg))
71 for arg in self._parts
72 )
73
[end of pipenv/cmdparse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pipenv/cmdparse.py b/pipenv/cmdparse.py
--- a/pipenv/cmdparse.py
+++ b/pipenv/cmdparse.py
@@ -1,3 +1,4 @@
+import itertools
import re
import shlex
@@ -8,6 +9,12 @@
pass
+def _quote_if_contains(value, pattern):
+ if next(re.finditer(pattern, value), None):
+ return '"{0}"'.format(re.sub(r'(\\*)"', r'\1\1\\"', value))
+ return value
+
+
class Script(object):
"""Parse a script line (in Pipfile's [scripts] section).
@@ -56,17 +63,21 @@
The result is then quoted into a pair of double quotes to be grouped.
An argument is intentionally not quoted if it does not contain
- whitespaces. This is done to be compatible with Windows built-in
+ foul characters. This is done to be compatible with Windows built-in
commands that don't work well with quotes, e.g. everything with `echo`,
and DOS-style (forward slash) switches.
+ Foul characters include:
+
+ * Whitespaces.
+ * Parentheses in the command. (pypa/pipenv#3168)
+
The intended use of this function is to pre-process an argument list
before passing it into ``subprocess.Popen(..., shell=True)``.
See also: https://docs.python.org/3/library/subprocess.html#converting-argument-sequence
"""
- return " ".join(
- arg if not next(re.finditer(r'\s', arg), None)
- else '"{0}"'.format(re.sub(r'(\\*)"', r'\1\1\\"', arg))
- for arg in self._parts
- )
+ return " ".join(itertools.chain(
+ [_quote_if_contains(self.command, r'[\s()]')],
+ (_quote_if_contains(arg, r'\s') for arg in self.args),
+ ))
| {"golden_diff": "diff --git a/pipenv/cmdparse.py b/pipenv/cmdparse.py\n--- a/pipenv/cmdparse.py\n+++ b/pipenv/cmdparse.py\n@@ -1,3 +1,4 @@\n+import itertools\n import re\n import shlex\n \n@@ -8,6 +9,12 @@\n pass\n \n \n+def _quote_if_contains(value, pattern):\n+ if next(re.finditer(pattern, value), None):\n+ return '\"{0}\"'.format(re.sub(r'(\\\\*)\"', r'\\1\\1\\\\\"', value))\n+ return value\n+\n+\n class Script(object):\n \"\"\"Parse a script line (in Pipfile's [scripts] section).\n \n@@ -56,17 +63,21 @@\n The result is then quoted into a pair of double quotes to be grouped.\n \n An argument is intentionally not quoted if it does not contain\n- whitespaces. This is done to be compatible with Windows built-in\n+ foul characters. This is done to be compatible with Windows built-in\n commands that don't work well with quotes, e.g. everything with `echo`,\n and DOS-style (forward slash) switches.\n \n+ Foul characters include:\n+\n+ * Whitespaces.\n+ * Parentheses in the command. (pypa/pipenv#3168)\n+\n The intended use of this function is to pre-process an argument list\n before passing it into ``subprocess.Popen(..., shell=True)``.\n \n See also: https://docs.python.org/3/library/subprocess.html#converting-argument-sequence\n \"\"\"\n- return \" \".join(\n- arg if not next(re.finditer(r'\\s', arg), None)\n- else '\"{0}\"'.format(re.sub(r'(\\\\*)\"', r'\\1\\1\\\\\"', arg))\n- for arg in self._parts\n- )\n+ return \" \".join(itertools.chain(\n+ [_quote_if_contains(self.command, r'[\\s()]')],\n+ (_quote_if_contains(arg, r'\\s') for arg in self.args),\n+ ))\n", "issue": "Quote command if parentheses exist\nThank you for contributing to Pipenv!\r\n\r\n\r\n### The issue\r\n\r\nFixes #3168 \r\n\r\n### The fix\r\n\r\nQuote the command if it contains `()`.\r\n\r\n### The checklist\r\n\r\n* [x] Associated issue\r\n* [x] A news fragment in the `news/` directory to describe this fix with the extension `.bugfix`, `.feature`, `.behavior`, `.doc`. `.vendor`. or `.trivial` (this will appear in the release changelog). Use semantic line breaks and name the file after the issue number or the PR #.\r\n\r\n<!--\r\n### If this is a patch to the `vendor` directory\u2026\r\n\r\nPlease try to refrain from submitting patches directly to `vendor` or `patched`, but raise your issue to the upstream project instead, and inform Pipenv to upgrade when the upstream project accepts the fix.\r\n\r\nA pull request to upgrade vendor packages is strongly discouraged, unless there is a very good reason (e.g. you need to test Pipenv\u2019s integration to a new vendor feature). 
Pipenv audits and performs vendor upgrades regularly, generally before a new release is about to drop.\r\n\r\nIf your patch is not or cannot be accepted by upstream, but is essential to Pipenv (make sure to discuss this with maintainers!), please remember to attach a patch file in `tasks/vendoring/patched`, so this divergence from upstream can be recorded and replayed afterwards.\r\n-->\r\n\n", "before_files": [{"content": "import re\nimport shlex\n\nimport six\n\n\nclass ScriptEmptyError(ValueError):\n pass\n\n\nclass Script(object):\n \"\"\"Parse a script line (in Pipfile's [scripts] section).\n\n This always works in POSIX mode, even on Windows.\n \"\"\"\n\n def __init__(self, command, args=None):\n self._parts = [command]\n if args:\n self._parts.extend(args)\n\n @classmethod\n def parse(cls, value):\n if isinstance(value, six.string_types):\n value = shlex.split(value)\n if not value:\n raise ScriptEmptyError(value)\n return cls(value[0], value[1:])\n\n def __repr__(self):\n return \"Script({0!r})\".format(self._parts)\n\n @property\n def command(self):\n return self._parts[0]\n\n @property\n def args(self):\n return self._parts[1:]\n\n def extend(self, extra_args):\n self._parts.extend(extra_args)\n\n def cmdify(self):\n \"\"\"Encode into a cmd-executable string.\n\n This re-implements CreateProcess's quoting logic to turn a list of\n arguments into one single string for the shell to interpret.\n\n * All double quotes are escaped with a backslash.\n * Existing backslashes before a quote are doubled, so they are all\n escaped properly.\n * Backslashes elsewhere are left as-is; cmd will interpret them\n literally.\n\n The result is then quoted into a pair of double quotes to be grouped.\n\n An argument is intentionally not quoted if it does not contain\n whitespaces. This is done to be compatible with Windows built-in\n commands that don't work well with quotes, e.g. everything with `echo`,\n and DOS-style (forward slash) switches.\n\n The intended use of this function is to pre-process an argument list\n before passing it into ``subprocess.Popen(..., shell=True)``.\n\n See also: https://docs.python.org/3/library/subprocess.html#converting-argument-sequence\n \"\"\"\n return \" \".join(\n arg if not next(re.finditer(r'\\s', arg), None)\n else '\"{0}\"'.format(re.sub(r'(\\\\*)\"', r'\\1\\1\\\\\"', arg))\n for arg in self._parts\n )\n", "path": "pipenv/cmdparse.py"}]} | 1,469 | 454 |
gh_patches_debug_10456 | rasdani/github-patches | git_diff | conan-io__conan-8728 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[feature] Consider using CMake's `include_guard()` in `conan_toolchain.cmake`
Right now at the beginning of `conan_toolchain.cmake` `CMakeToolchain` generates:
```cmake
# Avoid including toolchain file several times (bad if appending to variables like
# CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323
if(CONAN_TOOLCHAIN_INCLUDED)
return()
endif()
set(CONAN_TOOLCHAIN_INCLUDED TRUE)
```
When minimum version of CMake (3.10) will not be an issue anymore please consider using: https://cmake.org/cmake/help/latest/command/include_guard.html
- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).
</issue>
<code>
[start of conan/tools/cmake/base.py]
1 import textwrap
2 from collections import OrderedDict
3
4 import six
5 from jinja2 import DictLoader, Environment
6
7 from conans.util.files import save
8
9
10 class Variables(OrderedDict):
11 _configuration_types = None # Needed for py27 to avoid infinite recursion
12
13 def __init__(self):
14 super(Variables, self).__init__()
15 self._configuration_types = {}
16
17 def __getattribute__(self, config):
18 try:
19 return super(Variables, self).__getattribute__(config)
20 except AttributeError:
21 return self._configuration_types.setdefault(config, OrderedDict())
22
23 @property
24 def configuration_types(self):
25 # Reverse index for the configuration_types variables
26 ret = OrderedDict()
27 for conf, definitions in self._configuration_types.items():
28 for k, v in definitions.items():
29 ret.setdefault(k, []).append((conf, v))
30 return ret
31
32 def quote_preprocessor_strings(self):
33 for key, var in self.items():
34 if isinstance(var, six.string_types):
35 self[key] = '"{}"'.format(var)
36 for config, data in self._configuration_types.items():
37 for key, var in data.items():
38 if isinstance(var, six.string_types):
39 data[key] = '"{}"'.format(var)
40
41
42 class CMakeToolchainBase(object):
43 filename = "conan_toolchain.cmake"
44
45 _toolchain_macros_tpl = textwrap.dedent("""
46 {% macro iterate_configs(var_config, action) -%}
47 {% for it, values in var_config.items() -%}
48 {%- set genexpr = namespace(str='') %}
49 {%- for conf, value in values -%}
50 {%- set genexpr.str = genexpr.str +
51 '$<IF:$<CONFIG:' + conf + '>,' + value|string + ',' %}
52 {%- if loop.last %}{% set genexpr.str = genexpr.str + '""' -%}{%- endif -%}
53 {%- endfor -%}
54 {% for i in range(values|count) %}{%- set genexpr.str = genexpr.str + '>' %}
55 {%- endfor -%}
56 {% if action=='set' %}
57 set({{ it }} {{ genexpr.str }} CACHE STRING
58 "Variable {{ it }} conan-toolchain defined")
59 {% elif action=='add_definitions' %}
60 add_definitions(-D{{ it }}={{ genexpr.str }})
61 {% endif %}
62 {%- endfor %}
63 {% endmacro %}
64 """)
65
66 _base_toolchain_tpl = textwrap.dedent("""
67 {% import 'toolchain_macros' as toolchain_macros %}
68
69 # Conan automatically generated toolchain file
70 # DO NOT EDIT MANUALLY, it will be overwritten
71
72 # Avoid including toolchain file several times (bad if appending to variables like
73 # CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323
74 if(CONAN_TOOLCHAIN_INCLUDED)
75 return()
76 endif()
77 set(CONAN_TOOLCHAIN_INCLUDED TRUE)
78
79 {% block before_try_compile %}
80 {# build_type (Release, Debug, etc) is only defined for single-config generators #}
81 {%- if build_type %}
82 set(CMAKE_BUILD_TYPE "{{ build_type }}" CACHE STRING "Choose the type of build." FORCE)
83 {%- endif %}
84 {% endblock %}
85
86 get_property( _CMAKE_IN_TRY_COMPILE GLOBAL PROPERTY IN_TRY_COMPILE )
87 if(_CMAKE_IN_TRY_COMPILE)
88 message(STATUS "Running toolchain IN_TRY_COMPILE")
89 return()
90 endif()
91
92 message("Using Conan toolchain through ${CMAKE_TOOLCHAIN_FILE}.")
93
94 {% block main %}
95 # We are going to adjust automagically many things as requested by Conan
96 # these are the things done by 'conan_basic_setup()'
97 set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)
98 {%- if find_package_prefer_config %}
99 set(CMAKE_FIND_PACKAGE_PREFER_CONFIG {{ find_package_prefer_config }})
100 {%- endif %}
101 # To support the cmake_find_package generators
102 {% if cmake_module_path -%}
103 set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})
104 {%- endif %}
105 {% if cmake_prefix_path -%}
106 set(CMAKE_PREFIX_PATH {{ cmake_prefix_path }} ${CMAKE_PREFIX_PATH})
107 {%- endif %}
108 {% endblock %}
109
110 {% if shared_libs -%}
111 message(STATUS "Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}")
112 set(BUILD_SHARED_LIBS {{ shared_libs }})
113 {%- endif %}
114
115 {% if parallel -%}
116 set(CONAN_CXX_FLAGS "${CONAN_CXX_FLAGS} {{ parallel }}")
117 set(CONAN_C_FLAGS "${CONAN_C_FLAGS} {{ parallel }}")
118 {%- endif %}
119
120 {% if cppstd -%}
121 message(STATUS "Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}")
122 set(CMAKE_CXX_STANDARD {{ cppstd }})
123 set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})
124 {%- endif %}
125
126 set(CMAKE_CXX_FLAGS_INIT "${CONAN_CXX_FLAGS}" CACHE STRING "" FORCE)
127 set(CMAKE_C_FLAGS_INIT "${CONAN_C_FLAGS}" CACHE STRING "" FORCE)
128 set(CMAKE_SHARED_LINKER_FLAGS_INIT "${CONAN_SHARED_LINKER_FLAGS}" CACHE STRING "" FORCE)
129 set(CMAKE_EXE_LINKER_FLAGS_INIT "${CONAN_EXE_LINKER_FLAGS}" CACHE STRING "" FORCE)
130
131 # Variables
132 {% for it, value in variables.items() %}
133 set({{ it }} "{{ value }}" CACHE STRING "Variable {{ it }} conan-toolchain defined")
134 {%- endfor %}
135 # Variables per configuration
136 {{ toolchain_macros.iterate_configs(variables_config, action='set') }}
137
138 # Preprocessor definitions
139 {% for it, value in preprocessor_definitions.items() -%}
140 # add_compile_definitions only works in cmake >= 3.12
141 add_definitions(-D{{ it }}={{ value }})
142 {%- endfor %}
143 # Preprocessor definitions per configuration
144 {{ toolchain_macros.iterate_configs(preprocessor_definitions_config,
145 action='add_definitions') }}
146 """)
147
148 def __init__(self, conanfile, **kwargs):
149 self._conanfile = conanfile
150 self.variables = Variables()
151 self.preprocessor_definitions = Variables()
152
153 # To find the generated cmake_find_package finders
154 self.cmake_prefix_path = "${CMAKE_BINARY_DIR}"
155 self.cmake_module_path = "${CMAKE_BINARY_DIR}"
156
157 self.build_type = None
158
159 self.find_package_prefer_config = "ON" # assume ON by default if not specified in conf
160 prefer_config = conanfile.conf["tools.cmake.cmaketoolchain"].find_package_prefer_config
161 if prefer_config is not None and prefer_config.lower() in ("false", "0", "off"):
162 self.find_package_prefer_config = "OFF"
163
164 def _get_templates(self):
165 return {
166 'toolchain_macros': self._toolchain_macros_tpl,
167 'base_toolchain': self._base_toolchain_tpl
168 }
169
170 def _get_template_context_data(self):
171 """ Returns dict, the context for the '_template_toolchain'
172 """
173 self.preprocessor_definitions.quote_preprocessor_strings()
174
175 ctxt_toolchain = {
176 "variables": self.variables,
177 "variables_config": self.variables.configuration_types,
178 "preprocessor_definitions": self.preprocessor_definitions,
179 "preprocessor_definitions_config": self.preprocessor_definitions.configuration_types,
180 "cmake_prefix_path": self.cmake_prefix_path,
181 "cmake_module_path": self.cmake_module_path,
182 "build_type": self.build_type,
183 "find_package_prefer_config": self.find_package_prefer_config,
184 }
185 return ctxt_toolchain
186
187 def generate(self):
188 # Prepare templates to be loaded
189 dict_loader = DictLoader(self._get_templates())
190 env = Environment(loader=dict_loader)
191
192 ctxt_toolchain = self._get_template_context_data()
193 t = env.get_template(self.filename)
194 content = t.render(**ctxt_toolchain)
195 save(self.filename, content)
196
[end of conan/tools/cmake/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conan/tools/cmake/base.py b/conan/tools/cmake/base.py
--- a/conan/tools/cmake/base.py
+++ b/conan/tools/cmake/base.py
@@ -71,10 +71,8 @@
# Avoid including toolchain file several times (bad if appending to variables like
# CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323
- if(CONAN_TOOLCHAIN_INCLUDED)
- return()
- endif()
- set(CONAN_TOOLCHAIN_INCLUDED TRUE)
+ include_guard()
+
{% block before_try_compile %}
{# build_type (Release, Debug, etc) is only defined for single-config generators #}
| {"golden_diff": "diff --git a/conan/tools/cmake/base.py b/conan/tools/cmake/base.py\n--- a/conan/tools/cmake/base.py\n+++ b/conan/tools/cmake/base.py\n@@ -71,10 +71,8 @@\n \n # Avoid including toolchain file several times (bad if appending to variables like\n # CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323\n- if(CONAN_TOOLCHAIN_INCLUDED)\n- return()\n- endif()\n- set(CONAN_TOOLCHAIN_INCLUDED TRUE)\n+ include_guard()\n+\n \n {% block before_try_compile %}\n {# build_type (Release, Debug, etc) is only defined for single-config generators #}\n", "issue": "[feature] Consider using CMake's `include_guard()` in `conan_toolchain.cmake`\nRight now at the beginning of `conan_toolchain.cmake` `CMakeToolchain` generates:\r\n\r\n```cmake\r\n# Avoid including toolchain file several times (bad if appending to variables like\r\n# CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323\r\nif(CONAN_TOOLCHAIN_INCLUDED)\r\n return()\r\nendif()\r\nset(CONAN_TOOLCHAIN_INCLUDED TRUE)\r\n```\r\n\r\nWhen minimum version of CMake (3.10) will not be an issue anymore please consider using: https://cmake.org/cmake/help/latest/command/include_guard.html\r\n\r\n- [x] I've read the [CONTRIBUTING guide](https://github.com/conan-io/conan/blob/develop/.github/CONTRIBUTING.md).\r\n\n", "before_files": [{"content": "import textwrap\nfrom collections import OrderedDict\n\nimport six\nfrom jinja2 import DictLoader, Environment\n\nfrom conans.util.files import save\n\n\nclass Variables(OrderedDict):\n _configuration_types = None # Needed for py27 to avoid infinite recursion\n\n def __init__(self):\n super(Variables, self).__init__()\n self._configuration_types = {}\n\n def __getattribute__(self, config):\n try:\n return super(Variables, self).__getattribute__(config)\n except AttributeError:\n return self._configuration_types.setdefault(config, OrderedDict())\n\n @property\n def configuration_types(self):\n # Reverse index for the configuration_types variables\n ret = OrderedDict()\n for conf, definitions in self._configuration_types.items():\n for k, v in definitions.items():\n ret.setdefault(k, []).append((conf, v))\n return ret\n\n def quote_preprocessor_strings(self):\n for key, var in self.items():\n if isinstance(var, six.string_types):\n self[key] = '\"{}\"'.format(var)\n for config, data in self._configuration_types.items():\n for key, var in data.items():\n if isinstance(var, six.string_types):\n data[key] = '\"{}\"'.format(var)\n\n\nclass CMakeToolchainBase(object):\n filename = \"conan_toolchain.cmake\"\n\n _toolchain_macros_tpl = textwrap.dedent(\"\"\"\n {% macro iterate_configs(var_config, action) -%}\n {% for it, values in var_config.items() -%}\n {%- set genexpr = namespace(str='') %}\n {%- for conf, value in values -%}\n {%- set genexpr.str = genexpr.str +\n '$<IF:$<CONFIG:' + conf + '>,' + value|string + ',' %}\n {%- if loop.last %}{% set genexpr.str = genexpr.str + '\"\"' -%}{%- endif -%}\n {%- endfor -%}\n {% for i in range(values|count) %}{%- set genexpr.str = genexpr.str + '>' %}\n {%- endfor -%}\n {% if action=='set' %}\n set({{ it }} {{ genexpr.str }} CACHE STRING\n \"Variable {{ it }} conan-toolchain defined\")\n {% elif action=='add_definitions' %}\n add_definitions(-D{{ it }}={{ genexpr.str }})\n {% endif %}\n {%- endfor %}\n {% endmacro %}\n \"\"\")\n\n _base_toolchain_tpl = textwrap.dedent(\"\"\"\n {% import 'toolchain_macros' as toolchain_macros %}\n\n # Conan automatically generated toolchain file\n # DO NOT EDIT MANUALLY, it will be overwritten\n\n # Avoid including 
toolchain file several times (bad if appending to variables like\n # CMAKE_CXX_FLAGS. See https://github.com/android/ndk/issues/323\n if(CONAN_TOOLCHAIN_INCLUDED)\n return()\n endif()\n set(CONAN_TOOLCHAIN_INCLUDED TRUE)\n\n {% block before_try_compile %}\n {# build_type (Release, Debug, etc) is only defined for single-config generators #}\n {%- if build_type %}\n set(CMAKE_BUILD_TYPE \"{{ build_type }}\" CACHE STRING \"Choose the type of build.\" FORCE)\n {%- endif %}\n {% endblock %}\n\n get_property( _CMAKE_IN_TRY_COMPILE GLOBAL PROPERTY IN_TRY_COMPILE )\n if(_CMAKE_IN_TRY_COMPILE)\n message(STATUS \"Running toolchain IN_TRY_COMPILE\")\n return()\n endif()\n\n message(\"Using Conan toolchain through ${CMAKE_TOOLCHAIN_FILE}.\")\n\n {% block main %}\n # We are going to adjust automagically many things as requested by Conan\n # these are the things done by 'conan_basic_setup()'\n set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)\n {%- if find_package_prefer_config %}\n set(CMAKE_FIND_PACKAGE_PREFER_CONFIG {{ find_package_prefer_config }})\n {%- endif %}\n # To support the cmake_find_package generators\n {% if cmake_module_path -%}\n set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})\n {%- endif %}\n {% if cmake_prefix_path -%}\n set(CMAKE_PREFIX_PATH {{ cmake_prefix_path }} ${CMAKE_PREFIX_PATH})\n {%- endif %}\n {% endblock %}\n\n {% if shared_libs -%}\n message(STATUS \"Conan toolchain: Setting BUILD_SHARED_LIBS= {{ shared_libs }}\")\n set(BUILD_SHARED_LIBS {{ shared_libs }})\n {%- endif %}\n\n {% if parallel -%}\n set(CONAN_CXX_FLAGS \"${CONAN_CXX_FLAGS} {{ parallel }}\")\n set(CONAN_C_FLAGS \"${CONAN_C_FLAGS} {{ parallel }}\")\n {%- endif %}\n\n {% if cppstd -%}\n message(STATUS \"Conan C++ Standard {{ cppstd }} with extensions {{ cppstd_extensions }}}\")\n set(CMAKE_CXX_STANDARD {{ cppstd }})\n set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})\n {%- endif %}\n\n set(CMAKE_CXX_FLAGS_INIT \"${CONAN_CXX_FLAGS}\" CACHE STRING \"\" FORCE)\n set(CMAKE_C_FLAGS_INIT \"${CONAN_C_FLAGS}\" CACHE STRING \"\" FORCE)\n set(CMAKE_SHARED_LINKER_FLAGS_INIT \"${CONAN_SHARED_LINKER_FLAGS}\" CACHE STRING \"\" FORCE)\n set(CMAKE_EXE_LINKER_FLAGS_INIT \"${CONAN_EXE_LINKER_FLAGS}\" CACHE STRING \"\" FORCE)\n\n # Variables\n {% for it, value in variables.items() %}\n set({{ it }} \"{{ value }}\" CACHE STRING \"Variable {{ it }} conan-toolchain defined\")\n {%- endfor %}\n # Variables per configuration\n {{ toolchain_macros.iterate_configs(variables_config, action='set') }}\n\n # Preprocessor definitions\n {% for it, value in preprocessor_definitions.items() -%}\n # add_compile_definitions only works in cmake >= 3.12\n add_definitions(-D{{ it }}={{ value }})\n {%- endfor %}\n # Preprocessor definitions per configuration\n {{ toolchain_macros.iterate_configs(preprocessor_definitions_config,\n action='add_definitions') }}\n \"\"\")\n\n def __init__(self, conanfile, **kwargs):\n self._conanfile = conanfile\n self.variables = Variables()\n self.preprocessor_definitions = Variables()\n\n # To find the generated cmake_find_package finders\n self.cmake_prefix_path = \"${CMAKE_BINARY_DIR}\"\n self.cmake_module_path = \"${CMAKE_BINARY_DIR}\"\n\n self.build_type = None\n\n self.find_package_prefer_config = \"ON\" # assume ON by default if not specified in conf\n prefer_config = conanfile.conf[\"tools.cmake.cmaketoolchain\"].find_package_prefer_config\n if prefer_config is not None and prefer_config.lower() in (\"false\", \"0\", \"off\"):\n self.find_package_prefer_config = \"OFF\"\n\n def _get_templates(self):\n 
return {\n 'toolchain_macros': self._toolchain_macros_tpl,\n 'base_toolchain': self._base_toolchain_tpl\n }\n\n def _get_template_context_data(self):\n \"\"\" Returns dict, the context for the '_template_toolchain'\n \"\"\"\n self.preprocessor_definitions.quote_preprocessor_strings()\n\n ctxt_toolchain = {\n \"variables\": self.variables,\n \"variables_config\": self.variables.configuration_types,\n \"preprocessor_definitions\": self.preprocessor_definitions,\n \"preprocessor_definitions_config\": self.preprocessor_definitions.configuration_types,\n \"cmake_prefix_path\": self.cmake_prefix_path,\n \"cmake_module_path\": self.cmake_module_path,\n \"build_type\": self.build_type,\n \"find_package_prefer_config\": self.find_package_prefer_config,\n }\n return ctxt_toolchain\n\n def generate(self):\n # Prepare templates to be loaded\n dict_loader = DictLoader(self._get_templates())\n env = Environment(loader=dict_loader)\n\n ctxt_toolchain = self._get_template_context_data()\n t = env.get_template(self.filename)\n content = t.render(**ctxt_toolchain)\n save(self.filename, content)\n", "path": "conan/tools/cmake/base.py"}]} | 2,951 | 157 |
gh_patches_debug_35752 | rasdani/github-patches | git_diff | feast-dev__feast-2845 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect projects-list.json generated by feast ui when using Postgres as a data source.
## Expected Behavior
Correct generation of the projects-list.json when running feast ui.
## Current Behavior
The generated projects-list.json does not contain a name in the dataSources field, causing the parser to fail.
## Steps to reproduce
Setup feast with PostgreSQL as a data source.
### Specifications
- Version:
- Platform:
- Subsystem:
## Possible Solution
Adding name=self.name to to_proto() in postgres_source.py. And in general making the postgres_source.py file more similar to e.g., file_source.py.
</issue>
<code>
[start of sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py]
1 import json
2 from typing import Callable, Dict, Iterable, Optional, Tuple
3
4 from feast.data_source import DataSource
5 from feast.infra.utils.postgres.connection_utils import _get_conn
6 from feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto
7 from feast.repo_config import RepoConfig
8 from feast.type_map import pg_type_code_to_pg_type, pg_type_to_feast_value_type
9 from feast.value_type import ValueType
10
11
12 class PostgreSQLSource(DataSource):
13 def __init__(
14 self,
15 name: str,
16 query: str,
17 timestamp_field: Optional[str] = "",
18 created_timestamp_column: Optional[str] = "",
19 field_mapping: Optional[Dict[str, str]] = None,
20 date_partition_column: Optional[str] = "",
21 ):
22 self._postgres_options = PostgreSQLOptions(name=name, query=query)
23
24 super().__init__(
25 name=name,
26 timestamp_field=timestamp_field,
27 created_timestamp_column=created_timestamp_column,
28 field_mapping=field_mapping,
29 date_partition_column=date_partition_column,
30 )
31
32 def __hash__(self):
33 return super().__hash__()
34
35 def __eq__(self, other):
36 if not isinstance(other, PostgreSQLSource):
37 raise TypeError(
38 "Comparisons should only involve PostgreSQLSource class objects."
39 )
40
41 return (
42 self._postgres_options._query == other._postgres_options._query
43 and self.timestamp_field == other.timestamp_field
44 and self.created_timestamp_column == other.created_timestamp_column
45 and self.field_mapping == other.field_mapping
46 )
47
48 @staticmethod
49 def from_proto(data_source: DataSourceProto):
50 assert data_source.HasField("custom_options")
51
52 postgres_options = json.loads(data_source.custom_options.configuration)
53 return PostgreSQLSource(
54 name=postgres_options["name"],
55 query=postgres_options["query"],
56 field_mapping=dict(data_source.field_mapping),
57 timestamp_field=data_source.timestamp_field,
58 created_timestamp_column=data_source.created_timestamp_column,
59 date_partition_column=data_source.date_partition_column,
60 )
61
62 def to_proto(self) -> DataSourceProto:
63 data_source_proto = DataSourceProto(
64 type=DataSourceProto.CUSTOM_SOURCE,
65 data_source_class_type="feast.infra.offline_stores.contrib.postgres_offline_store.postgres_source.PostgreSQLSource",
66 field_mapping=self.field_mapping,
67 custom_options=self._postgres_options.to_proto(),
68 )
69
70 data_source_proto.timestamp_field = self.timestamp_field
71 data_source_proto.created_timestamp_column = self.created_timestamp_column
72 data_source_proto.date_partition_column = self.date_partition_column
73
74 return data_source_proto
75
76 def validate(self, config: RepoConfig):
77 pass
78
79 @staticmethod
80 def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:
81 return pg_type_to_feast_value_type
82
83 def get_table_column_names_and_types(
84 self, config: RepoConfig
85 ) -> Iterable[Tuple[str, str]]:
86 with _get_conn(config.offline_store) as conn, conn.cursor() as cur:
87 cur.execute(
88 f"SELECT * FROM ({self.get_table_query_string()}) AS sub LIMIT 0"
89 )
90 return (
91 (c.name, pg_type_code_to_pg_type(c.type_code)) for c in cur.description
92 )
93
94 def get_table_query_string(self) -> str:
95 return f"({self._postgres_options._query})"
96
97
98 class PostgreSQLOptions:
99 def __init__(self, name: str, query: Optional[str]):
100 self._name = name
101 self._query = query
102
103 @classmethod
104 def from_proto(cls, postgres_options_proto: DataSourceProto.CustomSourceOptions):
105 config = json.loads(postgres_options_proto.configuration.decode("utf8"))
106 postgres_options = cls(name=config["name"], query=config["query"])
107
108 return postgres_options
109
110 def to_proto(self) -> DataSourceProto.CustomSourceOptions:
111 postgres_options_proto = DataSourceProto.CustomSourceOptions(
112 configuration=json.dumps(
113 {"name": self._name, "query": self._query}
114 ).encode()
115 )
116
117 return postgres_options_proto
118
[end of sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py b/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py
--- a/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py
+++ b/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py
@@ -18,6 +18,9 @@
created_timestamp_column: Optional[str] = "",
field_mapping: Optional[Dict[str, str]] = None,
date_partition_column: Optional[str] = "",
+ description: Optional[str] = "",
+ tags: Optional[Dict[str, str]] = None,
+ owner: Optional[str] = "",
):
self._postgres_options = PostgreSQLOptions(name=name, query=query)
@@ -27,6 +30,9 @@
created_timestamp_column=created_timestamp_column,
field_mapping=field_mapping,
date_partition_column=date_partition_column,
+ description=description,
+ tags=tags,
+ owner=owner,
)
def __hash__(self):
@@ -57,14 +63,21 @@
timestamp_field=data_source.timestamp_field,
created_timestamp_column=data_source.created_timestamp_column,
date_partition_column=data_source.date_partition_column,
+ description=data_source.description,
+ tags=dict(data_source.tags),
+ owner=data_source.owner,
)
def to_proto(self) -> DataSourceProto:
data_source_proto = DataSourceProto(
+ name=self.name,
type=DataSourceProto.CUSTOM_SOURCE,
data_source_class_type="feast.infra.offline_stores.contrib.postgres_offline_store.postgres_source.PostgreSQLSource",
field_mapping=self.field_mapping,
custom_options=self._postgres_options.to_proto(),
+ description=self.description,
+ tags=self.tags,
+ owner=self.owner,
)
data_source_proto.timestamp_field = self.timestamp_field
| {"golden_diff": "diff --git a/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py b/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py\n--- a/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py\n+++ b/sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py\n@@ -18,6 +18,9 @@\n created_timestamp_column: Optional[str] = \"\",\n field_mapping: Optional[Dict[str, str]] = None,\n date_partition_column: Optional[str] = \"\",\n+ description: Optional[str] = \"\",\n+ tags: Optional[Dict[str, str]] = None,\n+ owner: Optional[str] = \"\",\n ):\n self._postgres_options = PostgreSQLOptions(name=name, query=query)\n \n@@ -27,6 +30,9 @@\n created_timestamp_column=created_timestamp_column,\n field_mapping=field_mapping,\n date_partition_column=date_partition_column,\n+ description=description,\n+ tags=tags,\n+ owner=owner,\n )\n \n def __hash__(self):\n@@ -57,14 +63,21 @@\n timestamp_field=data_source.timestamp_field,\n created_timestamp_column=data_source.created_timestamp_column,\n date_partition_column=data_source.date_partition_column,\n+ description=data_source.description,\n+ tags=dict(data_source.tags),\n+ owner=data_source.owner,\n )\n \n def to_proto(self) -> DataSourceProto:\n data_source_proto = DataSourceProto(\n+ name=self.name,\n type=DataSourceProto.CUSTOM_SOURCE,\n data_source_class_type=\"feast.infra.offline_stores.contrib.postgres_offline_store.postgres_source.PostgreSQLSource\",\n field_mapping=self.field_mapping,\n custom_options=self._postgres_options.to_proto(),\n+ description=self.description,\n+ tags=self.tags,\n+ owner=self.owner,\n )\n \n data_source_proto.timestamp_field = self.timestamp_field\n", "issue": "Incorrect projects-list.json generated by feast ui when using Postgres as a data source.\n## Expected Behavior \r\nCorrect generation of the projects-list.json when running feast ui. \r\n## Current Behavior\r\nThe generated projects-list.json does not contain a name in the dataSources field, causing the parser to fail.\r\n## Steps to reproduce\r\nSetup feast with PostgreSQL as a data source.\r\n### Specifications\r\n\r\n- Version:\r\n- Platform:\r\n- Subsystem:\r\n\r\n## Possible Solution\r\nAdding name=self.name to to_proto() in postgres_source.py. 
And in general making the postgres_source.py file more similar to e.g., file_source.py.\n", "before_files": [{"content": "import json\nfrom typing import Callable, Dict, Iterable, Optional, Tuple\n\nfrom feast.data_source import DataSource\nfrom feast.infra.utils.postgres.connection_utils import _get_conn\nfrom feast.protos.feast.core.DataSource_pb2 import DataSource as DataSourceProto\nfrom feast.repo_config import RepoConfig\nfrom feast.type_map import pg_type_code_to_pg_type, pg_type_to_feast_value_type\nfrom feast.value_type import ValueType\n\n\nclass PostgreSQLSource(DataSource):\n def __init__(\n self,\n name: str,\n query: str,\n timestamp_field: Optional[str] = \"\",\n created_timestamp_column: Optional[str] = \"\",\n field_mapping: Optional[Dict[str, str]] = None,\n date_partition_column: Optional[str] = \"\",\n ):\n self._postgres_options = PostgreSQLOptions(name=name, query=query)\n\n super().__init__(\n name=name,\n timestamp_field=timestamp_field,\n created_timestamp_column=created_timestamp_column,\n field_mapping=field_mapping,\n date_partition_column=date_partition_column,\n )\n\n def __hash__(self):\n return super().__hash__()\n\n def __eq__(self, other):\n if not isinstance(other, PostgreSQLSource):\n raise TypeError(\n \"Comparisons should only involve PostgreSQLSource class objects.\"\n )\n\n return (\n self._postgres_options._query == other._postgres_options._query\n and self.timestamp_field == other.timestamp_field\n and self.created_timestamp_column == other.created_timestamp_column\n and self.field_mapping == other.field_mapping\n )\n\n @staticmethod\n def from_proto(data_source: DataSourceProto):\n assert data_source.HasField(\"custom_options\")\n\n postgres_options = json.loads(data_source.custom_options.configuration)\n return PostgreSQLSource(\n name=postgres_options[\"name\"],\n query=postgres_options[\"query\"],\n field_mapping=dict(data_source.field_mapping),\n timestamp_field=data_source.timestamp_field,\n created_timestamp_column=data_source.created_timestamp_column,\n date_partition_column=data_source.date_partition_column,\n )\n\n def to_proto(self) -> DataSourceProto:\n data_source_proto = DataSourceProto(\n type=DataSourceProto.CUSTOM_SOURCE,\n data_source_class_type=\"feast.infra.offline_stores.contrib.postgres_offline_store.postgres_source.PostgreSQLSource\",\n field_mapping=self.field_mapping,\n custom_options=self._postgres_options.to_proto(),\n )\n\n data_source_proto.timestamp_field = self.timestamp_field\n data_source_proto.created_timestamp_column = self.created_timestamp_column\n data_source_proto.date_partition_column = self.date_partition_column\n\n return data_source_proto\n\n def validate(self, config: RepoConfig):\n pass\n\n @staticmethod\n def source_datatype_to_feast_value_type() -> Callable[[str], ValueType]:\n return pg_type_to_feast_value_type\n\n def get_table_column_names_and_types(\n self, config: RepoConfig\n ) -> Iterable[Tuple[str, str]]:\n with _get_conn(config.offline_store) as conn, conn.cursor() as cur:\n cur.execute(\n f\"SELECT * FROM ({self.get_table_query_string()}) AS sub LIMIT 0\"\n )\n return (\n (c.name, pg_type_code_to_pg_type(c.type_code)) for c in cur.description\n )\n\n def get_table_query_string(self) -> str:\n return f\"({self._postgres_options._query})\"\n\n\nclass PostgreSQLOptions:\n def __init__(self, name: str, query: Optional[str]):\n self._name = name\n self._query = query\n\n @classmethod\n def from_proto(cls, postgres_options_proto: DataSourceProto.CustomSourceOptions):\n config = 
json.loads(postgres_options_proto.configuration.decode(\"utf8\"))\n postgres_options = cls(name=config[\"name\"], query=config[\"query\"])\n\n return postgres_options\n\n def to_proto(self) -> DataSourceProto.CustomSourceOptions:\n postgres_options_proto = DataSourceProto.CustomSourceOptions(\n configuration=json.dumps(\n {\"name\": self._name, \"query\": self._query}\n ).encode()\n )\n\n return postgres_options_proto\n", "path": "sdk/python/feast/infra/offline_stores/contrib/postgres_offline_store/postgres_source.py"}]} | 1,803 | 438 |
gh_patches_debug_1634 | rasdani/github-patches | git_diff | coala__coala-4980 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Compatibility.py: Add comment explaining JSONDecodeError is missing in Python 3.4
difficulty/newcomer
Opened via [gitter](https://gitter.im/coala/coala/?at=59098da68fcce56b205cd7e0) by @jayvdb
</issue>
<code>
[start of coalib/misc/Compatibility.py]
1 import json
2 try:
3 JSONDecodeError = json.decoder.JSONDecodeError
4 except AttributeError: # pragma Python 3.5,3.6: no cover
5 JSONDecodeError = ValueError
6
[end of coalib/misc/Compatibility.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/coalib/misc/Compatibility.py b/coalib/misc/Compatibility.py
--- a/coalib/misc/Compatibility.py
+++ b/coalib/misc/Compatibility.py
@@ -1,5 +1,6 @@
import json
try:
+ # JSONDecodeError class is available since Python 3.5.x.
JSONDecodeError = json.decoder.JSONDecodeError
except AttributeError: # pragma Python 3.5,3.6: no cover
JSONDecodeError = ValueError
| {"golden_diff": "diff --git a/coalib/misc/Compatibility.py b/coalib/misc/Compatibility.py\n--- a/coalib/misc/Compatibility.py\n+++ b/coalib/misc/Compatibility.py\n@@ -1,5 +1,6 @@\n import json\n try:\n+ # JSONDecodeError class is available since Python 3.5.x.\n JSONDecodeError = json.decoder.JSONDecodeError\n except AttributeError: # pragma Python 3.5,3.6: no cover\n JSONDecodeError = ValueError\n", "issue": "Compatibility.py: Add comment explaining JSONDecodeError is missing in Python 3.4\ndifficulty/newcomer\n\nOpened via [gitter](https://gitter.im/coala/coala/?at=59098da68fcce56b205cd7e0) by @jayvdb\n", "before_files": [{"content": "import json\ntry:\n JSONDecodeError = json.decoder.JSONDecodeError\nexcept AttributeError: # pragma Python 3.5,3.6: no cover\n JSONDecodeError = ValueError\n", "path": "coalib/misc/Compatibility.py"}]} | 653 | 109 |
gh_patches_debug_727 | rasdani/github-patches | git_diff | comic__grand-challenge.org-2027 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Integrate Forums into challenges
Navigating to the forum of a challenge currently takes the participant outside of the challenge environment. Navigating back to the challenge is not possible through the breadcrumbs on the forum page and instead requires going via the Challenge tab and searching for the respective Challenge again. It would be nicer if the forums were visually integrated into the challenge page layout and if the breadcrumbs reflected their nesting in the challenge rather than their nesting under all forum on GC.
See here: https://github.com/DIAGNijmegen/rse-roadmap/issues/83#issuecomment-919250835
</issue>
<code>
[start of app/grandchallenge/forum_conversation/templatetags/forum_extras.py]
1 from actstream.models import Follow
2 from django import template
3 from django.contrib.contenttypes.models import ContentType
4
5 from grandchallenge.notifications.forms import FollowForm
6
7 register = template.Library()
8
9
10 @register.simple_tag
11 def get_follow_object_pk(user, follow_object):
12 object_follows_for_user = Follow.objects.filter(
13 user=user,
14 content_type=ContentType.objects.get(
15 app_label=follow_object._meta.app_label,
16 model=follow_object._meta.model_name,
17 ),
18 ).all()
19
20 if not object_follows_for_user:
21 current_follow_object = []
22 else:
23 current_follow_object = []
24 for obj in object_follows_for_user:
25 if not obj.follow_object:
26 continue
27 elif obj.follow_object.id == follow_object.id:
28 current_follow_object = obj.pk
29 return current_follow_object
30
31
32 @register.simple_tag
33 def follow_form(*, user, object_id, content_type):
34 return FollowForm(
35 user=user,
36 initial={
37 "object_id": object_id,
38 "content_type": content_type,
39 "actor_only": False,
40 },
41 )
42
43
44 @register.simple_tag()
45 def get_content_type(follow_object):
46 try:
47 ct = ContentType.objects.get(
48 app_label=follow_object._meta.app_label,
49 model=follow_object._meta.model_name,
50 )
51 except AttributeError:
52 ct = None
53 return ct
54
[end of app/grandchallenge/forum_conversation/templatetags/forum_extras.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/grandchallenge/forum_conversation/templatetags/forum_extras.py b/app/grandchallenge/forum_conversation/templatetags/forum_extras.py
--- a/app/grandchallenge/forum_conversation/templatetags/forum_extras.py
+++ b/app/grandchallenge/forum_conversation/templatetags/forum_extras.py
@@ -51,3 +51,9 @@
except AttributeError:
ct = None
return ct
+
+
[email protected]_tag()
+def is_participant(user, challenge):
+ if challenge.is_participant(user):
+ return True
| {"golden_diff": "diff --git a/app/grandchallenge/forum_conversation/templatetags/forum_extras.py b/app/grandchallenge/forum_conversation/templatetags/forum_extras.py\n--- a/app/grandchallenge/forum_conversation/templatetags/forum_extras.py\n+++ b/app/grandchallenge/forum_conversation/templatetags/forum_extras.py\n@@ -51,3 +51,9 @@\n except AttributeError:\r\n ct = None\r\n return ct\r\n+\r\n+\r\[email protected]_tag()\r\n+def is_participant(user, challenge):\r\n+ if challenge.is_participant(user):\r\n+ return True\n", "issue": "Integrate Forums into challenges \nNavigating to the forum of a challenge currently takes the participant outside of the challenge environment. Navigating back to the challenge is not possible through the breadcrumbs on the forum page and instead requires going via the Challenge tab and searching for the respective Challenge again. It would be nicer if the forums were visually integrated into the challenge page layout and if the breadcrumbs reflected their nesting in the challenge rather than their nesting under all forum on GC. \r\n\r\nSee here: https://github.com/DIAGNijmegen/rse-roadmap/issues/83#issuecomment-919250835\r\n\n", "before_files": [{"content": "from actstream.models import Follow\r\nfrom django import template\r\nfrom django.contrib.contenttypes.models import ContentType\r\n\r\nfrom grandchallenge.notifications.forms import FollowForm\r\n\r\nregister = template.Library()\r\n\r\n\r\[email protected]_tag\r\ndef get_follow_object_pk(user, follow_object):\r\n object_follows_for_user = Follow.objects.filter(\r\n user=user,\r\n content_type=ContentType.objects.get(\r\n app_label=follow_object._meta.app_label,\r\n model=follow_object._meta.model_name,\r\n ),\r\n ).all()\r\n\r\n if not object_follows_for_user:\r\n current_follow_object = []\r\n else:\r\n current_follow_object = []\r\n for obj in object_follows_for_user:\r\n if not obj.follow_object:\r\n continue\r\n elif obj.follow_object.id == follow_object.id:\r\n current_follow_object = obj.pk\r\n return current_follow_object\r\n\r\n\r\[email protected]_tag\r\ndef follow_form(*, user, object_id, content_type):\r\n return FollowForm(\r\n user=user,\r\n initial={\r\n \"object_id\": object_id,\r\n \"content_type\": content_type,\r\n \"actor_only\": False,\r\n },\r\n )\r\n\r\n\r\[email protected]_tag()\r\ndef get_content_type(follow_object):\r\n try:\r\n ct = ContentType.objects.get(\r\n app_label=follow_object._meta.app_label,\r\n model=follow_object._meta.model_name,\r\n )\r\n except AttributeError:\r\n ct = None\r\n return ct\r\n", "path": "app/grandchallenge/forum_conversation/templatetags/forum_extras.py"}]} | 1,081 | 134 |
gh_patches_debug_10509 | rasdani/github-patches | git_diff | openfun__richie-2035 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cookiecutter bootstrap failure
## Bug Report
**Problematic Behavior**
The `nightly round` job warns us that there is a problem with cookiecutter template.
**Additional context/Screenshots**
[Add any other context about the problem here. If applicable, add screenshots to help explain.](https://app.circleci.com/pipelines/github/openfun/richie/6840/workflows/7b6bd5f9-e2d4-4ef1-8e54-4562a521d50d/jobs/183180)
</issue>
<code>
[start of cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py]
1 """
2 {{cookiecutter.site}} urls
3 """
4 from django.conf import settings
5 from django.conf.urls.i18n import i18n_patterns
6 from django.contrib import admin
7 from django.contrib.sitemaps.views import sitemap
8 from django.contrib.staticfiles.urls import staticfiles_urlpatterns
9 from django.urls import include, path, re_path
10 from django.views.generic import TemplateView
11 from django.views.static import serve
12
13 from cms.sitemaps import CMSSitemap
14 from richie.apps.courses.urls import (
15 redirects_urlpatterns as courses_redirects_urlpatterns,
16 urlpatterns as courses_urlpatterns,
17 )
18 from richie.apps.search.urls import urlpatterns as search_urlpatterns
19 from richie.plugins.urls import urlpatterns as plugins_urlpatterns
20
21 # For now, we use URLPathVersioning to be consistent with fonzie. Fonzie uses it
22 # because DRF OpenAPI only supports URLPathVersioning for now. See fonzie
23 # API_PREFIX config for more information.
24 API_PREFIX = r"v(?P<version>[0-9]+\.[0-9]+)"
25
26 admin.autodiscover()
27 admin.site.enable_nav_sidebar = False
28
29 urlpatterns = [
30 path(r"sitemap.xml", sitemap, {"sitemaps": {"cmspages": CMSSitemap}}),
31 re_path(
32 rf"api/{API_PREFIX:s}/",
33 include([*courses_urlpatterns, *search_urlpatterns, *plugins_urlpatterns]),
34 ),
35 path(r"", include("filer.server.urls")),
36 path(r"django-check-seo/", include("django_check_seo.urls")),
37 ]
38
39 urlpatterns += i18n_patterns(
40 path(r"admin/", admin.site.urls),
41 path(r"accounts/", include("django.contrib.auth.urls")),
42 path(r"", include("cms.urls")), # NOQA
43 )
44
45 # This is only needed when using runserver.
46 if settings.DEBUG:
47 urlpatterns = (
48 [
49 path(
50 r"styleguide/",
51 TemplateView.as_view(
52 template_name="richie/styleguide/index.html",
53 extra_context={"STYLEGUIDE": settings.STYLEGUIDE},
54 ),
55 name="styleguide",
56 ),
57 path(
58 r"media/<path:path>",
59 serve,
60 {"document_root": settings.MEDIA_ROOT, "show_indexes": True},
61 ),
62 ]
63 + staticfiles_urlpatterns()
64 + urlpatterns
65 )
66
67 handler400 = "richie.apps.core.views.error.error_400_view_handler"
68 handler403 = "richie.apps.core.views.error.error_403_view_handler"
69 handler404 = "richie.apps.core.views.error.error_404_view_handler"
70 handler500 = "richie.apps.core.views.error.error_500_view_handler"
71
[end of cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py b/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py
--- a/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py
+++ b/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py
@@ -11,10 +11,7 @@
from django.views.static import serve
from cms.sitemaps import CMSSitemap
-from richie.apps.courses.urls import (
- redirects_urlpatterns as courses_redirects_urlpatterns,
- urlpatterns as courses_urlpatterns,
-)
+from richie.apps.courses.urls import urlpatterns as courses_urlpatterns
from richie.apps.search.urls import urlpatterns as search_urlpatterns
from richie.plugins.urls import urlpatterns as plugins_urlpatterns
| {"golden_diff": "diff --git a/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py b/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py\n--- a/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py\n+++ b/cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py\n@@ -11,10 +11,7 @@\n from django.views.static import serve\n \n from cms.sitemaps import CMSSitemap\n-from richie.apps.courses.urls import (\n- redirects_urlpatterns as courses_redirects_urlpatterns,\n- urlpatterns as courses_urlpatterns,\n-)\n+from richie.apps.courses.urls import urlpatterns as courses_urlpatterns\n from richie.apps.search.urls import urlpatterns as search_urlpatterns\n from richie.plugins.urls import urlpatterns as plugins_urlpatterns\n", "issue": "Cookiecutter bootstrap failure\n## Bug Report\r\n\r\n**Problematic Behavior**\r\nThe `nightly round` job warns us that there is a problem with cookiecutter template.\r\n\r\n**Additional context/Screenshots**\r\n[Add any other context about the problem here. If applicable, add screenshots to help explain.](https://app.circleci.com/pipelines/github/openfun/richie/6840/workflows/7b6bd5f9-e2d4-4ef1-8e54-4562a521d50d/jobs/183180)\r\n\n", "before_files": [{"content": "\"\"\"\n{{cookiecutter.site}} urls\n\"\"\"\nfrom django.conf import settings\nfrom django.conf.urls.i18n import i18n_patterns\nfrom django.contrib import admin\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.contrib.staticfiles.urls import staticfiles_urlpatterns\nfrom django.urls import include, path, re_path\nfrom django.views.generic import TemplateView\nfrom django.views.static import serve\n\nfrom cms.sitemaps import CMSSitemap\nfrom richie.apps.courses.urls import (\n redirects_urlpatterns as courses_redirects_urlpatterns,\n urlpatterns as courses_urlpatterns,\n)\nfrom richie.apps.search.urls import urlpatterns as search_urlpatterns\nfrom richie.plugins.urls import urlpatterns as plugins_urlpatterns\n\n# For now, we use URLPathVersioning to be consistent with fonzie. Fonzie uses it\n# because DRF OpenAPI only supports URLPathVersioning for now. 
See fonzie\n# API_PREFIX config for more information.\nAPI_PREFIX = r\"v(?P<version>[0-9]+\\.[0-9]+)\"\n\nadmin.autodiscover()\nadmin.site.enable_nav_sidebar = False\n\nurlpatterns = [\n path(r\"sitemap.xml\", sitemap, {\"sitemaps\": {\"cmspages\": CMSSitemap}}),\n re_path(\n rf\"api/{API_PREFIX:s}/\",\n include([*courses_urlpatterns, *search_urlpatterns, *plugins_urlpatterns]),\n ),\n path(r\"\", include(\"filer.server.urls\")),\n path(r\"django-check-seo/\", include(\"django_check_seo.urls\")),\n]\n\nurlpatterns += i18n_patterns(\n path(r\"admin/\", admin.site.urls),\n path(r\"accounts/\", include(\"django.contrib.auth.urls\")),\n path(r\"\", include(\"cms.urls\")), # NOQA\n)\n\n# This is only needed when using runserver.\nif settings.DEBUG:\n urlpatterns = (\n [\n path(\n r\"styleguide/\",\n TemplateView.as_view(\n template_name=\"richie/styleguide/index.html\",\n extra_context={\"STYLEGUIDE\": settings.STYLEGUIDE},\n ),\n name=\"styleguide\",\n ),\n path(\n r\"media/<path:path>\",\n serve,\n {\"document_root\": settings.MEDIA_ROOT, \"show_indexes\": True},\n ),\n ]\n + staticfiles_urlpatterns()\n + urlpatterns\n )\n\nhandler400 = \"richie.apps.core.views.error.error_400_view_handler\"\nhandler403 = \"richie.apps.core.views.error.error_403_view_handler\"\nhandler404 = \"richie.apps.core.views.error.error_404_view_handler\"\nhandler500 = \"richie.apps.core.views.error.error_500_view_handler\"\n", "path": "cookiecutter/{{cookiecutter.organization}}-richie-site-factory/template/{{cookiecutter.site}}/src/backend/{{cookiecutter.site}}/urls.py"}]} | 1,426 | 253 |
gh_patches_debug_24441 | rasdani/github-patches | git_diff | openfun__richie-1584 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Send web analytics event when user enrolls on a course run
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
To better monitor witch marketing campaigns have better performance, we need to generate events when an user enrolls to a course.
**Describe the solution you'd like**
I would like to the add support to the web analytics module, the some react events, at least when the user enrolls to a course.
**Describe alternatives you've considered**
I don't have any.
**Discovery, Documentation, Adoption, Migration Strategy**
It is a new feature and it's compatible with the current implementation.
Add information to the web analytics docs, in the Google Analytics sub-section, stating that Richie will send events, and list them.
**Do you want to work on it through a Pull Request?**
Yes. Probably create a util to send web analytics events.
How to configure Google Analytics to receive the Google Tag Manager custom events:
https://www.analyticsmania.com/post/google-tag-manager-custom-event-trigger/
</issue>
<code>
[start of src/richie/apps/core/context_processors.py]
1 """
2 Template context processors
3 """
4 import json
5 from collections import OrderedDict
6
7 from django.conf import settings
8 from django.contrib.sites.models import Site
9 from django.core.files.storage import get_storage_class
10 from django.http.request import HttpRequest
11 from django.middleware.csrf import get_token
12 from django.utils.translation import get_language_from_request
13
14 from richie.apps.core.templatetags.joanie import is_joanie_enabled
15 from richie.apps.courses.defaults import RICHIE_MAX_ARCHIVED_COURSE_RUNS
16 from richie.apps.courses.models import Organization
17
18 from . import defaults
19
20
21 def site_metas(request: HttpRequest):
22 """
23 Context processor to add all information required by Richie CMS templates and frontend.
24
25 If `CDN_DOMAIN` settings is defined we add it in the context. It allows
26 to load statics js on a CDN like cloudfront.
27 """
28 site_current = Site.objects.get_current()
29 protocol = "https" if request.is_secure() else "http"
30
31 context = {
32 **{
33 f"GLIMPSE_PAGINATION_{k.upper()}": v
34 for k, v in {
35 **defaults.GLIMPSE_PAGINATION,
36 **getattr(settings, "RICHIE_GLIMPSE_PAGINATION", {}),
37 }.items()
38 },
39 "SITE": {
40 "name": site_current.name,
41 "domain": site_current.domain,
42 "web_url": f"{protocol:s}://{site_current.domain:s}",
43 },
44 "FRONTEND_CONTEXT": {
45 "context": {
46 "csrftoken": get_token(request),
47 "environment": getattr(settings, "ENVIRONMENT", ""),
48 "release": getattr(settings, "RELEASE", ""),
49 "sentry_dsn": getattr(settings, "SENTRY_DSN", ""),
50 }
51 },
52 **WebAnalyticsContextProcessor().context_processor(request),
53 }
54
55 if getattr(settings, "CDN_DOMAIN", None):
56 context["CDN_DOMAIN"] = settings.CDN_DOMAIN
57
58 storage_url = get_storage_class()().url("any-page")
59 # Add a MEDIA_URL_PREFIX to context to prefix the media url files to have an absolute URL
60 if storage_url.startswith("//"):
61 # Eg. //my-cdn-user.cdn-provider.com/media/
62 context["MEDIA_URL_PREFIX"] = f"{request.scheme:s}:"
63 elif storage_url.startswith("/"):
64 # Eg. /media/
65 context["MEDIA_URL_PREFIX"] = f"{protocol:s}://{site_current.domain:s}"
66 else:
67 # Eg. https://my-cdn-user.cdn-provider.com/media/
68 context["MEDIA_URL_PREFIX"] = ""
69
70 authentication_delegation = getattr(
71 settings, "RICHIE_AUTHENTICATION_DELEGATION", None
72 )
73 if authentication_delegation:
74
75 context["AUTHENTICATION"] = {
76 "profile_urls": json.dumps(
77 {
78 key: {
79 "label": str(url["label"]),
80 "action": str(
81 url["href"].format(
82 base_url=authentication_delegation["BASE_URL"]
83 )
84 ),
85 }
86 for key, url in authentication_delegation.get(
87 "PROFILE_URLS", {}
88 ).items()
89 }
90 ),
91 }
92
93 context["FRONTEND_CONTEXT"]["context"]["authentication"] = {
94 "endpoint": authentication_delegation["BASE_URL"],
95 "backend": authentication_delegation["BACKEND"],
96 }
97
98 if is_joanie_enabled():
99 context["FRONTEND_CONTEXT"]["context"]["joanie_backend"] = {
100 "endpoint": settings.JOANIE["BASE_URL"],
101 }
102
103 if getattr(settings, "RICHIE_LMS_BACKENDS", None):
104 context["FRONTEND_CONTEXT"]["context"]["lms_backends"] = [
105 {
106 "endpoint": lms["BASE_URL"],
107 "backend": lms["JS_BACKEND"],
108 "course_regexp": lms["JS_COURSE_REGEX"],
109 }
110 for lms in getattr(settings, "RICHIE_LMS_BACKENDS", [])
111 ]
112
113 context["FRONTEND_CONTEXT"] = json.dumps(context["FRONTEND_CONTEXT"])
114
115 if getattr(settings, "RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT", None):
116 context[
117 "RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT"
118 ] = settings.RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT
119
120 context["RICHIE_MAX_ARCHIVED_COURSE_RUNS"] = getattr(
121 settings, "RICHIE_MAX_ARCHIVED_COURSE_RUNS", RICHIE_MAX_ARCHIVED_COURSE_RUNS
122 )
123
124 return context
125
126
127 class WebAnalyticsContextProcessor:
128 """
129 Context processor to add Web Analytics tracking information to Richie CMS templates and
130 frontend.
131 """
132
133 def context_processor(self, request: HttpRequest) -> dict:
134 """
135 Real implementation of the context processor for the Web Analytics core app sub-module
136 """
137 context = {}
138 if hasattr(request, "current_page"):
139 # load web analytics settings to the context
140 if getattr(settings, "WEB_ANALYTICS_ID", None):
141 context["WEB_ANALYTICS_ID"] = settings.WEB_ANALYTICS_ID
142 context["WEB_ANALYTICS_DIMENSIONS"] = self.get_dimensions(request)
143
144 context["WEB_ANALYTICS_LOCATION"] = getattr(
145 settings, "WEB_ANALYTICS_LOCATION", "head"
146 )
147
148 context["WEB_ANALYTICS_PROVIDER"] = getattr(
149 settings, "WEB_ANALYTICS_PROVIDER", "google_analytics"
150 )
151 return context
152
153 # pylint: disable=no-self-use
154 def get_dimensions(self, request: HttpRequest) -> dict:
155 """
156 Compute the web analytics dimensions (dict) that would be added to the Django context
157 They are a dictionary like:
158 ```
159 {
160 "organizations_codes": ["UNIV_LISBON", "UNIV_PORTO"],
161 "course_code": ["COURSE_XPTO"],
162 "course_runs_titles": [
163 "Summer edition",
164 "Winter edition"
165 ],
166 "course_runs_resource_links": [
167 "http://example.edx:8073/courses/course-v1:edX+DemoX+Demo_Course/info",
168 "http://example.edx:8073/courses/course-v1:edX+DemoX+Demo_Course_2/info"
169 ],
170 "page_title": ["Introduction to Programming"],
171 }
172 ```
173
174 Args:
175 request (HttpRequest): The Http request
176 web_analytics_context (dict): the context relevant for the web analytics sub module
177
178 Returns:
179 dict: a dict with the dimensions, where each value is a list
180 """
181 # Warn do not change the order of evaluation of this methods, because on Google Analytics
182 # the key is a 'dimension1' instead of a more generic key like 'organizations'.
183 # That's why we are using an OrderedDict instead of a normal Python dict (don't support
184 # ordering)
185 dimensions = OrderedDict()
186
187 page = request.current_page or None
188 language = get_language_from_request(request, check_path=True)
189
190 organizations_codes = []
191 if page and not page.is_home:
192 organizations_codes = Organization.get_organizations_codes(page, language)
193 dimensions["organizations_codes"] = organizations_codes
194
195 course = getattr(page, "course", None)
196 dimensions["course_code"] = [getattr(course, "code", "")]
197
198 course_runs = course.course_runs if course else []
199 dimensions["course_runs_titles"] = [
200 course_run.title
201 for course_run in course_runs
202 if course_run is not None and course_run.safe_title is not None
203 ]
204
205 dimensions["course_runs_resource_links"] = map(
206 lambda course_run: course_run.resource_link, course_runs
207 )
208
209 dimensions["page_title"] = [page.get_title() if page else ""]
210 return dimensions
211
[end of src/richie/apps/core/context_processors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/richie/apps/core/context_processors.py b/src/richie/apps/core/context_processors.py
--- a/src/richie/apps/core/context_processors.py
+++ b/src/richie/apps/core/context_processors.py
@@ -47,6 +47,7 @@
"environment": getattr(settings, "ENVIRONMENT", ""),
"release": getattr(settings, "RELEASE", ""),
"sentry_dsn": getattr(settings, "SENTRY_DSN", ""),
+ **WebAnalyticsContextProcessor().frontend_context_processor(request),
}
},
**WebAnalyticsContextProcessor().context_processor(request),
@@ -130,6 +131,18 @@
frontend.
"""
+ # pylint: disable=no-self-use
+ def frontend_context_processor(self, request: HttpRequest) -> dict:
+ """
+ Additional web analytics information for the frontend react
+ """
+ context = {}
+ if getattr(settings, "WEB_ANALYTICS_ID"):
+ context["web_analytics_provider"] = getattr(
+ settings, "WEB_ANALYTICS_PROVIDER", "google_analytics"
+ )
+ return context
+
def context_processor(self, request: HttpRequest) -> dict:
"""
Real implementation of the context processor for the Web Analytics core app sub-module
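
For illustration, a minimal pytest-style sketch of how the added `frontend_context_processor` could be exercised. The test name, the placeholder settings values and the use of `RequestFactory` are assumptions made for this sketch, not part of the richie test suite:

```python
# Sketch only: assumes a configured Django settings module (e.g. via pytest-django).
from django.test import RequestFactory, override_settings

from richie.apps.core.context_processors import WebAnalyticsContextProcessor


@override_settings(
    WEB_ANALYTICS_ID="UA-000000-0",  # placeholder id, assumed for the sketch
    WEB_ANALYTICS_PROVIDER="google_tag_manager",
)
def test_frontend_context_exposes_provider():
    request = RequestFactory().get("/")
    context = WebAnalyticsContextProcessor().frontend_context_processor(request)
    # The React frontend can read this key to decide which tracker receives
    # the enrollment events (e.g. a Google Tag Manager dataLayer push).
    assert context == {"web_analytics_provider": "google_tag_manager"}
```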
| {"golden_diff": "diff --git a/src/richie/apps/core/context_processors.py b/src/richie/apps/core/context_processors.py\n--- a/src/richie/apps/core/context_processors.py\n+++ b/src/richie/apps/core/context_processors.py\n@@ -47,6 +47,7 @@\n \"environment\": getattr(settings, \"ENVIRONMENT\", \"\"),\n \"release\": getattr(settings, \"RELEASE\", \"\"),\n \"sentry_dsn\": getattr(settings, \"SENTRY_DSN\", \"\"),\n+ **WebAnalyticsContextProcessor().frontend_context_processor(request),\n }\n },\n **WebAnalyticsContextProcessor().context_processor(request),\n@@ -130,6 +131,18 @@\n frontend.\n \"\"\"\n \n+ # pylint: disable=no-self-use\n+ def frontend_context_processor(self, request: HttpRequest) -> dict:\n+ \"\"\"\n+ Additional web analytics information for the frontend react\n+ \"\"\"\n+ context = {}\n+ if getattr(settings, \"WEB_ANALYTICS_ID\"):\n+ context[\"web_analytics_provider\"] = getattr(\n+ settings, \"WEB_ANALYTICS_PROVIDER\", \"google_analytics\"\n+ )\n+ return context\n+\n def context_processor(self, request: HttpRequest) -> dict:\n \"\"\"\n Real implementation of the context processor for the Web Analytics core app sub-module\n", "issue": "Send web analytics event when user enrolls on a course run\n## Feature Request\r\n\r\n**Is your feature request related to a problem or unsupported use case? Please describe.**\r\nTo better monitor witch marketing campaigns have better performance, we need to generate events when an user enrolls to a course. \r\n\r\n**Describe the solution you'd like**\r\nI would like to the add support to the web analytics module, the some react events, at least when the user enrolls to a course.\r\n\r\n**Describe alternatives you've considered**\r\nI don't any.\r\n\r\n**Discovery, Documentation, Adoption, Migration Strategy**\r\nIt is a new feature and it's compatible with current implementation. \r\nAdd information on web analytics docs on google analytics sub section that richie would send events, and list them.\r\n\r\n**Do you want to work on it through a Pull Request?**\r\nYes. Probably create an util to send web analytics events.\r\n\r\nHow to configure Google Analytics to receive the Google Tag Manager custom events:\r\nhttps://www.analyticsmania.com/post/google-tag-manager-custom-event-trigger/\n", "before_files": [{"content": "\"\"\"\nTemplate context processors\n\"\"\"\nimport json\nfrom collections import OrderedDict\n\nfrom django.conf import settings\nfrom django.contrib.sites.models import Site\nfrom django.core.files.storage import get_storage_class\nfrom django.http.request import HttpRequest\nfrom django.middleware.csrf import get_token\nfrom django.utils.translation import get_language_from_request\n\nfrom richie.apps.core.templatetags.joanie import is_joanie_enabled\nfrom richie.apps.courses.defaults import RICHIE_MAX_ARCHIVED_COURSE_RUNS\nfrom richie.apps.courses.models import Organization\n\nfrom . import defaults\n\n\ndef site_metas(request: HttpRequest):\n \"\"\"\n Context processor to add all information required by Richie CMS templates and frontend.\n\n If `CDN_DOMAIN` settings is defined we add it in the context. 
It allows\n to load statics js on a CDN like cloudfront.\n \"\"\"\n site_current = Site.objects.get_current()\n protocol = \"https\" if request.is_secure() else \"http\"\n\n context = {\n **{\n f\"GLIMPSE_PAGINATION_{k.upper()}\": v\n for k, v in {\n **defaults.GLIMPSE_PAGINATION,\n **getattr(settings, \"RICHIE_GLIMPSE_PAGINATION\", {}),\n }.items()\n },\n \"SITE\": {\n \"name\": site_current.name,\n \"domain\": site_current.domain,\n \"web_url\": f\"{protocol:s}://{site_current.domain:s}\",\n },\n \"FRONTEND_CONTEXT\": {\n \"context\": {\n \"csrftoken\": get_token(request),\n \"environment\": getattr(settings, \"ENVIRONMENT\", \"\"),\n \"release\": getattr(settings, \"RELEASE\", \"\"),\n \"sentry_dsn\": getattr(settings, \"SENTRY_DSN\", \"\"),\n }\n },\n **WebAnalyticsContextProcessor().context_processor(request),\n }\n\n if getattr(settings, \"CDN_DOMAIN\", None):\n context[\"CDN_DOMAIN\"] = settings.CDN_DOMAIN\n\n storage_url = get_storage_class()().url(\"any-page\")\n # Add a MEDIA_URL_PREFIX to context to prefix the media url files to have an absolute URL\n if storage_url.startswith(\"//\"):\n # Eg. //my-cdn-user.cdn-provider.com/media/\n context[\"MEDIA_URL_PREFIX\"] = f\"{request.scheme:s}:\"\n elif storage_url.startswith(\"/\"):\n # Eg. /media/\n context[\"MEDIA_URL_PREFIX\"] = f\"{protocol:s}://{site_current.domain:s}\"\n else:\n # Eg. https://my-cdn-user.cdn-provider.com/media/\n context[\"MEDIA_URL_PREFIX\"] = \"\"\n\n authentication_delegation = getattr(\n settings, \"RICHIE_AUTHENTICATION_DELEGATION\", None\n )\n if authentication_delegation:\n\n context[\"AUTHENTICATION\"] = {\n \"profile_urls\": json.dumps(\n {\n key: {\n \"label\": str(url[\"label\"]),\n \"action\": str(\n url[\"href\"].format(\n base_url=authentication_delegation[\"BASE_URL\"]\n )\n ),\n }\n for key, url in authentication_delegation.get(\n \"PROFILE_URLS\", {}\n ).items()\n }\n ),\n }\n\n context[\"FRONTEND_CONTEXT\"][\"context\"][\"authentication\"] = {\n \"endpoint\": authentication_delegation[\"BASE_URL\"],\n \"backend\": authentication_delegation[\"BACKEND\"],\n }\n\n if is_joanie_enabled():\n context[\"FRONTEND_CONTEXT\"][\"context\"][\"joanie_backend\"] = {\n \"endpoint\": settings.JOANIE[\"BASE_URL\"],\n }\n\n if getattr(settings, \"RICHIE_LMS_BACKENDS\", None):\n context[\"FRONTEND_CONTEXT\"][\"context\"][\"lms_backends\"] = [\n {\n \"endpoint\": lms[\"BASE_URL\"],\n \"backend\": lms[\"JS_BACKEND\"],\n \"course_regexp\": lms[\"JS_COURSE_REGEX\"],\n }\n for lms in getattr(settings, \"RICHIE_LMS_BACKENDS\", [])\n ]\n\n context[\"FRONTEND_CONTEXT\"] = json.dumps(context[\"FRONTEND_CONTEXT\"])\n\n if getattr(settings, \"RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT\", None):\n context[\n \"RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT\"\n ] = settings.RICHIE_MINIMUM_COURSE_RUNS_ENROLLMENT_COUNT\n\n context[\"RICHIE_MAX_ARCHIVED_COURSE_RUNS\"] = getattr(\n settings, \"RICHIE_MAX_ARCHIVED_COURSE_RUNS\", RICHIE_MAX_ARCHIVED_COURSE_RUNS\n )\n\n return context\n\n\nclass WebAnalyticsContextProcessor:\n \"\"\"\n Context processor to add Web Analytics tracking information to Richie CMS templates and\n frontend.\n \"\"\"\n\n def context_processor(self, request: HttpRequest) -> dict:\n \"\"\"\n Real implementation of the context processor for the Web Analytics core app sub-module\n \"\"\"\n context = {}\n if hasattr(request, \"current_page\"):\n # load web analytics settings to the context\n if getattr(settings, \"WEB_ANALYTICS_ID\", None):\n context[\"WEB_ANALYTICS_ID\"] = settings.WEB_ANALYTICS_ID\n 
context[\"WEB_ANALYTICS_DIMENSIONS\"] = self.get_dimensions(request)\n\n context[\"WEB_ANALYTICS_LOCATION\"] = getattr(\n settings, \"WEB_ANALYTICS_LOCATION\", \"head\"\n )\n\n context[\"WEB_ANALYTICS_PROVIDER\"] = getattr(\n settings, \"WEB_ANALYTICS_PROVIDER\", \"google_analytics\"\n )\n return context\n\n # pylint: disable=no-self-use\n def get_dimensions(self, request: HttpRequest) -> dict:\n \"\"\"\n Compute the web analytics dimensions (dict) that would be added to the Django context\n They are a dictionary like:\n ```\n {\n \"organizations_codes\": [\"UNIV_LISBON\", \"UNIV_PORTO\"],\n \"course_code\": [\"COURSE_XPTO\"],\n \"course_runs_titles\": [\n \"Summer edition\",\n \"Winter edition\"\n ],\n \"course_runs_resource_links\": [\n \"http://example.edx:8073/courses/course-v1:edX+DemoX+Demo_Course/info\",\n \"http://example.edx:8073/courses/course-v1:edX+DemoX+Demo_Course_2/info\"\n ],\n \"page_title\": [\"Introduction to Programming\"],\n }\n ```\n\n Args:\n request (HttpRequest): The Http request\n web_analytics_context (dict): the context relevant for the web analytics sub module\n\n Returns:\n dict: a dict with the dimensions, where each value is a list\n \"\"\"\n # Warn do not change the order of evaluation of this methods, because on Google Analytics\n # the key is a 'dimension1' instead of a more generic key like 'organizations'.\n # That's why we are using an OrderedDict instead of a normal Python dict (don't support\n # ordering)\n dimensions = OrderedDict()\n\n page = request.current_page or None\n language = get_language_from_request(request, check_path=True)\n\n organizations_codes = []\n if page and not page.is_home:\n organizations_codes = Organization.get_organizations_codes(page, language)\n dimensions[\"organizations_codes\"] = organizations_codes\n\n course = getattr(page, \"course\", None)\n dimensions[\"course_code\"] = [getattr(course, \"code\", \"\")]\n\n course_runs = course.course_runs if course else []\n dimensions[\"course_runs_titles\"] = [\n course_run.title\n for course_run in course_runs\n if course_run is not None and course_run.safe_title is not None\n ]\n\n dimensions[\"course_runs_resource_links\"] = map(\n lambda course_run: course_run.resource_link, course_runs\n )\n\n dimensions[\"page_title\"] = [page.get_title() if page else \"\"]\n return dimensions\n", "path": "src/richie/apps/core/context_processors.py"}]} | 2,947 | 277 |
gh_patches_debug_21738 | rasdani/github-patches | git_diff | ddionrails__ddionrails-1362 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Concepts should not be imported implicitly.
https://github.com/ddionrails/ddionrails/blob/68139838febdba24fdffaf4ef72e040801e94970/ddionrails/instruments/imports.py#L227-L232
---
###### This issue was generated by [todo](https://todo.jasonet.co) based on a `TODO` comment in 68139838febdba24fdffaf4ef72e040801e94970 when #1324 was merged. cc @ddionrails.
</issue>
<code>
[start of ddionrails/instruments/imports.py]
1 # -*- coding: utf-8 -*-
2
3 """ Importer classes for ddionrails.instruments app """
4
5 import json
6 import logging
7 from collections import OrderedDict
8 from pathlib import Path
9 from typing import Dict, Optional
10
11 from django.db.transaction import atomic
12
13 from ddionrails.concepts.models import AnalysisUnit, Concept, Period
14 from ddionrails.data.models import Variable
15 from ddionrails.imports import imports
16 from ddionrails.imports.helpers import download_image, store_image
17
18 from .models import ConceptQuestion, Instrument, Question, QuestionImage, QuestionVariable
19
20 logging.config.fileConfig("logging.conf") # type: ignore
21 logger = logging.getLogger(__name__)
22
23
24 class InstrumentImport(imports.Import):
25
26 content: str
27
28 def execute_import(self):
29 self.content = json.JSONDecoder(object_pairs_hook=OrderedDict).decode(
30 self.content
31 )
32 self._import_instrument(self.name, self.content)
33
34 def _import_instrument(self, name, content):
35 instrument, _ = Instrument.objects.get_or_create(study=self.study, name=name)
36
37 # add period relation to instrument
38 period_name = content.get("period", "none")
39 # Workaround for two ways to name period: period, period_name
40 # => period_name
41 if period_name == "none":
42 period_name = content.get("period_name", "none")
43
44 # Workaround for periods coming in as float, w.g. 2001.0
45 try:
46 period_name = str(int(period_name))
47 except ValueError:
48 period_name = str(period_name)
49 period = Period.objects.get_or_create(name=period_name, study=self.study)[0]
50 instrument.period = period
51
52 # add analysis_unit relation to instrument
53 analysis_unit_name = content.get("analysis_unit", "none")
54 if analysis_unit_name == "none":
55 analysis_unit_name = content.get("analysis_unit_name", "none")
56 analysis_unit = AnalysisUnit.objects.get_or_create(
57 name=analysis_unit_name, study=self.study
58 )[0]
59 instrument.analysis_unit = analysis_unit
60
61 for _name, _question in content["questions"].items():
62 question, _ = Question.objects.get_or_create(
63 name=_question["question"], instrument=instrument
64 )
65 question.sort_id = int(_question.get("sn", 0))
66 question.label = _question.get("label", _question.get("text", _name))
67 question.label_de = _question.get("label_de", _question.get("text_de", ""))
68 question.description = _question.get("description", "")
69 question.description_de = _question.get("description_de", "")
70 question.items = _question.get("items", list)
71 question.save()
72 image_data = _question.get("image", None)
73 if image_data:
74 self.question_image_import(question, image_data)
75
76 instrument.label = content.get("label", "")
77 instrument.label_de = content.get("label_de", "")
78 instrument.description = content.get("description", "")
79 instrument.description_de = content.get("description_de", "")
80 instrument.save()
81
82 def question_image_import(self, question: Question, image_data: Dict[str, str]):
83 """Load and save the images of a Question
84
85 Loads and saves up to two image files given their location by url.
86
87 Args:
88 question: The associated Question Object
89 image_data: Contains "url" and/or "url_de" keys to locate image files.
90 Optionally contains "label" and/or "label_de" to add a label
91 to the corresponding images.
92 """
93
94 images = list()
95
96 if "url" in image_data:
97 images.append(
98 self._question_image_import_helper(
99 question,
100 url=image_data["url"],
101 label=image_data.get("label", None),
102 language="en",
103 )
104 )
105 if "url_de" in image_data:
106 images.append(
107 self._question_image_import_helper(
108 question,
109 url=image_data["url_de"],
110 label=image_data.get("label_de", None),
111 language="de",
112 )
113 )
114
115 for image in images:
116 if image is not None:
117 image.save()
118
119 @staticmethod
120 def _question_image_import_helper(
121 question: Question, url: str, label: Optional[str], language: str
122 ) -> Optional[QuestionImage]:
123 """Create a QuestionImage Object, load and store Image file inside it
124
125 Args:
126 question: The associated Question Object
127 url: Location of the image to be loaded
128 label: Is stored in the QuestionImage label field
129 language: Is stored in the QuestionImage language field
130 Returns:
131 The QuestionImage created with loaded image.
132 If the image could not be loaded, this will return None.
133 QuestionImage is not written to the database here.
134 """
135 question_image = QuestionImage()
136 instrument = question.instrument
137 path = [instrument.study.name, instrument.name, question.name]
138 _image = download_image(url)
139 suffix = Path(url).suffix
140 if label is None:
141 label = "Screenshot"
142 if _image is None:
143 return None
144 image, _ = store_image(file=_image, name=label + suffix, path=path)
145 question_image.question = question
146 question_image.image = image
147 question_image.label = label
148 question_image.language = language
149 return question_image
150
151
152 class QuestionVariableImport(imports.CSVImport):
153 def execute_import(self):
154 for link in self.content:
155 self._import_link(link)
156
157 def _import_link(self, link):
158 try:
159 question = self._get_question(link)
160 variable = self._get_variable(link)
161 QuestionVariable.objects.get_or_create(question=question, variable=variable)
162 except BaseException as error:
163 raise type(error)(f"Could not import QuestionVariable: {link}")
164
165 @staticmethod
166 def _get_question(link):
167 question = (
168 Question.objects.filter(
169 instrument__study__name=link.get("study", link.get("study_name"))
170 )
171 .filter(instrument__name=link.get("instrument", link.get("instrument_name")))
172 .get(name=link.get("question", link.get("question_name")))
173 )
174 return question
175
176 @staticmethod
177 def _get_variable(link):
178 variable = (
179 Variable.objects.filter(
180 dataset__study__name=link.get("study", link.get("study_name"))
181 )
182 .filter(dataset__name=link.get("dataset", link.get("dataset_name")))
183 .filter(name=link.get("variable", link.get("variable_name")))
184 .first()
185 )
186 return variable
187
188
189 class ConceptQuestionImport(imports.CSVImport):
190 @atomic
191 def execute_import(self):
192 for link in self.content:
193 self._import_link(link)
194
195 def _import_link(self, link):
196 try:
197 question = self._get_question(link)
198 concept = self._get_concept(link)
199 except BaseException as error:
200 raise type(error)(f"Could not import ConceptQuestion: {link}")
201 ConceptQuestion.objects.get_or_create(question=question, concept=concept)
202
203 @staticmethod
204 def _get_question(element):
205 study = element.get("study", element.get("study_name"))
206 instrument = element.get("instrument", element.get("instrument_name"))
207 question = element.get("question", element.get("question_name"))
208
209 question = Question.objects.get(
210 instrument__study__name=study, instrument__name=instrument, name=question
211 )
212 return question
213
214 @staticmethod
215 def _get_concept(element):
216 # TODO: Concepts should not be imported implicitly.
217 concept = Concept.objects.get_or_create(
218 name=element.get("concept", element.get("concept_name"))
219 )[0]
220
221 return concept
222
[end of ddionrails/instruments/imports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddionrails/instruments/imports.py b/ddionrails/instruments/imports.py
--- a/ddionrails/instruments/imports.py
+++ b/ddionrails/instruments/imports.py
@@ -195,7 +195,9 @@
def _import_link(self, link):
try:
question = self._get_question(link)
- concept = self._get_concept(link)
+ concept = Concept.objects.get(
+ name=link.get("concept", link.get("concept_name"))
+ )
except BaseException as error:
raise type(error)(f"Could not import ConceptQuestion: {link}")
ConceptQuestion.objects.get_or_create(question=question, concept=concept)
@@ -210,12 +212,3 @@
instrument__study__name=study, instrument__name=instrument, name=question
)
return question
-
- @staticmethod
- def _get_concept(element):
- # TODO: Concepts should not be imported implicitly.
- concept = Concept.objects.get_or_create(
- name=element.get("concept", element.get("concept_name"))
- )[0]
-
- return concept
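
Taken in isolation, the behavioural change made by this diff boils down to the following (sketch only; `Concept` is the model already imported at the top of `imports.py`, and the concept name is a placeholder):

```python
from ddionrails.concepts.models import Concept

# Before the patch: an unknown concept name was silently created on the fly.
concept = Concept.objects.get_or_create(name="some-concept")[0]

# After the patch: the concept must already exist; otherwise Concept.DoesNotExist
# is raised and _import_link re-raises it with the offending link for context.
concept = Concept.objects.get(name="some-concept")
```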
| {"golden_diff": "diff --git a/ddionrails/instruments/imports.py b/ddionrails/instruments/imports.py\n--- a/ddionrails/instruments/imports.py\n+++ b/ddionrails/instruments/imports.py\n@@ -195,7 +195,9 @@\n def _import_link(self, link):\n try:\n question = self._get_question(link)\n- concept = self._get_concept(link)\n+ concept = Concept.objects.get(\n+ name=link.get(\"concept\", link.get(\"concept_name\"))\n+ )\n except BaseException as error:\n raise type(error)(f\"Could not import ConceptQuestion: {link}\")\n ConceptQuestion.objects.get_or_create(question=question, concept=concept)\n@@ -210,12 +212,3 @@\n instrument__study__name=study, instrument__name=instrument, name=question\n )\n return question\n-\n- @staticmethod\n- def _get_concept(element):\n- # TODO: Concepts should not be imported implicitly.\n- concept = Concept.objects.get_or_create(\n- name=element.get(\"concept\", element.get(\"concept_name\"))\n- )[0]\n-\n- return concept\n", "issue": "Concepts should not be imported implicitly.\nhttps://github.com/ddionrails/ddionrails/blob/68139838febdba24fdffaf4ef72e040801e94970/ddionrails/instruments/imports.py#L227-L232\n\n---\n\n###### This issue was generated by [todo](https://todo.jasonet.co) based on a `TODO` comment in 68139838febdba24fdffaf4ef72e040801e94970 when #1324 was merged. cc @ddionrails.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Importer classes for ddionrails.instruments app \"\"\"\n\nimport json\nimport logging\nfrom collections import OrderedDict\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\nfrom django.db.transaction import atomic\n\nfrom ddionrails.concepts.models import AnalysisUnit, Concept, Period\nfrom ddionrails.data.models import Variable\nfrom ddionrails.imports import imports\nfrom ddionrails.imports.helpers import download_image, store_image\n\nfrom .models import ConceptQuestion, Instrument, Question, QuestionImage, QuestionVariable\n\nlogging.config.fileConfig(\"logging.conf\") # type: ignore\nlogger = logging.getLogger(__name__)\n\n\nclass InstrumentImport(imports.Import):\n\n content: str\n\n def execute_import(self):\n self.content = json.JSONDecoder(object_pairs_hook=OrderedDict).decode(\n self.content\n )\n self._import_instrument(self.name, self.content)\n\n def _import_instrument(self, name, content):\n instrument, _ = Instrument.objects.get_or_create(study=self.study, name=name)\n\n # add period relation to instrument\n period_name = content.get(\"period\", \"none\")\n # Workaround for two ways to name period: period, period_name\n # => period_name\n if period_name == \"none\":\n period_name = content.get(\"period_name\", \"none\")\n\n # Workaround for periods coming in as float, w.g. 
2001.0\n try:\n period_name = str(int(period_name))\n except ValueError:\n period_name = str(period_name)\n period = Period.objects.get_or_create(name=period_name, study=self.study)[0]\n instrument.period = period\n\n # add analysis_unit relation to instrument\n analysis_unit_name = content.get(\"analysis_unit\", \"none\")\n if analysis_unit_name == \"none\":\n analysis_unit_name = content.get(\"analysis_unit_name\", \"none\")\n analysis_unit = AnalysisUnit.objects.get_or_create(\n name=analysis_unit_name, study=self.study\n )[0]\n instrument.analysis_unit = analysis_unit\n\n for _name, _question in content[\"questions\"].items():\n question, _ = Question.objects.get_or_create(\n name=_question[\"question\"], instrument=instrument\n )\n question.sort_id = int(_question.get(\"sn\", 0))\n question.label = _question.get(\"label\", _question.get(\"text\", _name))\n question.label_de = _question.get(\"label_de\", _question.get(\"text_de\", \"\"))\n question.description = _question.get(\"description\", \"\")\n question.description_de = _question.get(\"description_de\", \"\")\n question.items = _question.get(\"items\", list)\n question.save()\n image_data = _question.get(\"image\", None)\n if image_data:\n self.question_image_import(question, image_data)\n\n instrument.label = content.get(\"label\", \"\")\n instrument.label_de = content.get(\"label_de\", \"\")\n instrument.description = content.get(\"description\", \"\")\n instrument.description_de = content.get(\"description_de\", \"\")\n instrument.save()\n\n def question_image_import(self, question: Question, image_data: Dict[str, str]):\n \"\"\"Load and save the images of a Question\n\n Loads and saves up to two image files given their location by url.\n\n Args:\n question: The associated Question Object\n image_data: Contains \"url\" and/or \"url_de\" keys to locate image files.\n Optionally contains \"label\" and/or \"label_de\" to add a label\n to the corresponding images.\n \"\"\"\n\n images = list()\n\n if \"url\" in image_data:\n images.append(\n self._question_image_import_helper(\n question,\n url=image_data[\"url\"],\n label=image_data.get(\"label\", None),\n language=\"en\",\n )\n )\n if \"url_de\" in image_data:\n images.append(\n self._question_image_import_helper(\n question,\n url=image_data[\"url_de\"],\n label=image_data.get(\"label_de\", None),\n language=\"de\",\n )\n )\n\n for image in images:\n if image is not None:\n image.save()\n\n @staticmethod\n def _question_image_import_helper(\n question: Question, url: str, label: Optional[str], language: str\n ) -> Optional[QuestionImage]:\n \"\"\"Create a QuestionImage Object, load and store Image file inside it\n\n Args:\n question: The associated Question Object\n url: Location of the image to be loaded\n label: Is stored in the QuestionImage label field\n language: Is stored in the QuestionImage language field\n Returns:\n The QuestionImage created with loaded image.\n If the image could not be loaded, this will return None.\n QuestionImage is not written to the database here.\n \"\"\"\n question_image = QuestionImage()\n instrument = question.instrument\n path = [instrument.study.name, instrument.name, question.name]\n _image = download_image(url)\n suffix = Path(url).suffix\n if label is None:\n label = \"Screenshot\"\n if _image is None:\n return None\n image, _ = store_image(file=_image, name=label + suffix, path=path)\n question_image.question = question\n question_image.image = image\n question_image.label = label\n question_image.language = language\n return 
question_image\n\n\nclass QuestionVariableImport(imports.CSVImport):\n def execute_import(self):\n for link in self.content:\n self._import_link(link)\n\n def _import_link(self, link):\n try:\n question = self._get_question(link)\n variable = self._get_variable(link)\n QuestionVariable.objects.get_or_create(question=question, variable=variable)\n except BaseException as error:\n raise type(error)(f\"Could not import QuestionVariable: {link}\")\n\n @staticmethod\n def _get_question(link):\n question = (\n Question.objects.filter(\n instrument__study__name=link.get(\"study\", link.get(\"study_name\"))\n )\n .filter(instrument__name=link.get(\"instrument\", link.get(\"instrument_name\")))\n .get(name=link.get(\"question\", link.get(\"question_name\")))\n )\n return question\n\n @staticmethod\n def _get_variable(link):\n variable = (\n Variable.objects.filter(\n dataset__study__name=link.get(\"study\", link.get(\"study_name\"))\n )\n .filter(dataset__name=link.get(\"dataset\", link.get(\"dataset_name\")))\n .filter(name=link.get(\"variable\", link.get(\"variable_name\")))\n .first()\n )\n return variable\n\n\nclass ConceptQuestionImport(imports.CSVImport):\n @atomic\n def execute_import(self):\n for link in self.content:\n self._import_link(link)\n\n def _import_link(self, link):\n try:\n question = self._get_question(link)\n concept = self._get_concept(link)\n except BaseException as error:\n raise type(error)(f\"Could not import ConceptQuestion: {link}\")\n ConceptQuestion.objects.get_or_create(question=question, concept=concept)\n\n @staticmethod\n def _get_question(element):\n study = element.get(\"study\", element.get(\"study_name\"))\n instrument = element.get(\"instrument\", element.get(\"instrument_name\"))\n question = element.get(\"question\", element.get(\"question_name\"))\n\n question = Question.objects.get(\n instrument__study__name=study, instrument__name=instrument, name=question\n )\n return question\n\n @staticmethod\n def _get_concept(element):\n # TODO: Concepts should not be imported implicitly.\n concept = Concept.objects.get_or_create(\n name=element.get(\"concept\", element.get(\"concept_name\"))\n )[0]\n\n return concept\n", "path": "ddionrails/instruments/imports.py"}]} | 2,906 | 257 |
gh_patches_debug_3175 | rasdani/github-patches | git_diff | inventree__InvenTree-1537 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Embed information on 3rd-party libraries
I would like to create a place (I suggest the "about" modal) where all the third-party libraries used get mentioned and their licenses linked. The legal departments in bigger organisations like to see something like that, to be sure there is nothing bad going on licensing-wise ([like this](https://www.theregister.com/2021/03/25/ruby_rails_code/)).
@SchrodingersGat
Thoughts / did I overlook that there already is something like that? I made a tech demo (not polished) of how it could look in [my public repo](https://github.com/matmair/InvenTree/tree/3rdparty-info).
</issue>
<code>
[start of InvenTree/part/templatetags/inventree_extras.py]
1 """ This module provides template tags for extra functionality
2 over and above the built-in Django tags.
3 """
4 import os
5
6 from django import template
7 from django.urls import reverse
8 from django.utils.safestring import mark_safe
9 from django.templatetags.static import StaticNode
10 from InvenTree import version, settings
11
12 import InvenTree.helpers
13
14 from common.models import InvenTreeSetting, ColorTheme
15
16 register = template.Library()
17
18
19 @register.simple_tag()
20 def define(value, *args, **kwargs):
21 """
22 Shortcut function to overcome the shortcomings of the django templating language
23
24 Use as follows: {% define "hello_world" as hello %}
25
26 Ref: https://stackoverflow.com/questions/1070398/how-to-set-a-value-of-a-variable-inside-a-template-code
27 """
28
29 return value
30
31
32 @register.simple_tag()
33 def decimal(x, *args, **kwargs):
34 """ Simplified rendering of a decimal number """
35
36 return InvenTree.helpers.decimal2string(x)
37
38
39 @register.simple_tag()
40 def str2bool(x, *args, **kwargs):
41 """ Convert a string to a boolean value """
42
43 return InvenTree.helpers.str2bool(x)
44
45
46 @register.simple_tag()
47 def inrange(n, *args, **kwargs):
48 """ Return range(n) for iterating through a numeric quantity """
49 return range(n)
50
51
52 @register.simple_tag()
53 def multiply(x, y, *args, **kwargs):
54 """ Multiply two numbers together """
55 return InvenTree.helpers.decimal2string(x * y)
56
57
58 @register.simple_tag()
59 def add(x, y, *args, **kwargs):
60 """ Add two numbers together """
61 return x + y
62
63
64 @register.simple_tag()
65 def part_allocation_count(build, part, *args, **kwargs):
66 """ Return the total number of <part> allocated to <build> """
67
68 return InvenTree.helpers.decimal2string(build.getAllocatedQuantity(part))
69
70
71 @register.simple_tag()
72 def inventree_instance_name(*args, **kwargs):
73 """ Return the InstanceName associated with the current database """
74 return version.inventreeInstanceName()
75
76
77 @register.simple_tag()
78 def inventree_title(*args, **kwargs):
79 """ Return the title for the current instance - respecting the settings """
80 return version.inventreeInstanceTitle()
81
82
83 @register.simple_tag()
84 def inventree_version(*args, **kwargs):
85 """ Return InvenTree version string """
86 return version.inventreeVersion()
87
88
89 @register.simple_tag()
90 def django_version(*args, **kwargs):
91 """ Return Django version string """
92 return version.inventreeDjangoVersion()
93
94
95 @register.simple_tag()
96 def inventree_commit_hash(*args, **kwargs):
97 """ Return InvenTree git commit hash string """
98 return version.inventreeCommitHash()
99
100
101 @register.simple_tag()
102 def inventree_commit_date(*args, **kwargs):
103 """ Return InvenTree git commit date string """
104 return version.inventreeCommitDate()
105
106
107 @register.simple_tag()
108 def inventree_github_url(*args, **kwargs):
109 """ Return URL for InvenTree github site """
110 return "https://github.com/InvenTree/InvenTree/"
111
112
113 @register.simple_tag()
114 def inventree_docs_url(*args, **kwargs):
115 """ Return URL for InvenTree documenation site """
116 return "https://inventree.readthedocs.io/"
117
118
119 @register.simple_tag()
120 def setting_object(key, *args, **kwargs):
121 """
122 Return a setting object speciifed by the given key
123 (Or return None if the setting does not exist)
124 """
125
126 setting = InvenTreeSetting.get_setting_object(key)
127
128 return setting
129
130
131 @register.simple_tag()
132 def settings_value(key, *args, **kwargs):
133 """
134 Return a settings value specified by the given key
135 """
136
137 return InvenTreeSetting.get_setting(key)
138
139
140 @register.simple_tag()
141 def get_color_theme_css(username):
142 try:
143 user_theme = ColorTheme.objects.filter(user=username).get()
144 user_theme_name = user_theme.name
145 if not user_theme_name or not ColorTheme.is_valid_choice(user_theme):
146 user_theme_name = 'default'
147 except ColorTheme.DoesNotExist:
148 user_theme_name = 'default'
149
150 # Build path to CSS sheet
151 inventree_css_sheet = os.path.join('css', 'color-themes', user_theme_name + '.css')
152
153 # Build static URL
154 inventree_css_static_url = os.path.join(settings.STATIC_URL, inventree_css_sheet)
155
156 return inventree_css_static_url
157
158
159 @register.simple_tag()
160 def authorized_owners(group):
161 """ Return authorized owners """
162
163 owners = []
164
165 try:
166 for owner in group.get_related_owners(include_group=True):
167 owners.append(owner.owner)
168 except AttributeError:
169 # group is None
170 pass
171 except TypeError:
172 # group.get_users returns None
173 pass
174
175 return owners
176
177
178 @register.simple_tag()
179 def object_link(url_name, pk, ref):
180 """ Return highlighted link to object """
181
182 ref_url = reverse(url_name, kwargs={'pk': pk})
183 return mark_safe('<b><a href="{}">{}</a></b>'.format(ref_url, ref))
184
185
186 class I18nStaticNode(StaticNode):
187 """
188 custom StaticNode
189 replaces a variable named *lng* in the path with the current language
190 """
191 def render(self, context):
192 self.path.var = self.path.var.format(lng=context.request.LANGUAGE_CODE)
193 ret = super().render(context)
194 return ret
195
196
197 @register.tag('i18n_static')
198 def do_i18n_static(parser, token):
199 """
200 Overrides normal static, adds language - lookup for prerenderd files #1485
201
202 usage (like static):
203 {% i18n_static path [as varname] %}
204 """
205 bits = token.split_contents()
206 loc_name = settings.STATICFILES_I18_PREFIX
207
208 # change path to called ressource
209 bits[1] = f"'{loc_name}/{{lng}}.{bits[1][1:-1]}'"
210 token.contents = ' '.join(bits)
211 return I18nStaticNode.handle_token(parser, token)
212
[end of InvenTree/part/templatetags/inventree_extras.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/InvenTree/part/templatetags/inventree_extras.py b/InvenTree/part/templatetags/inventree_extras.py
--- a/InvenTree/part/templatetags/inventree_extras.py
+++ b/InvenTree/part/templatetags/inventree_extras.py
@@ -116,6 +116,12 @@
return "https://inventree.readthedocs.io/"
[email protected]_tag()
+def inventree_credits_url(*args, **kwargs):
+ """ Return URL for InvenTree credits site """
+ return "https://inventree.readthedocs.io/en/latest/credits/"
+
+
@register.simple_tag()
def setting_object(key, *args, **kwargs):
"""
| {"golden_diff": "diff --git a/InvenTree/part/templatetags/inventree_extras.py b/InvenTree/part/templatetags/inventree_extras.py\n--- a/InvenTree/part/templatetags/inventree_extras.py\n+++ b/InvenTree/part/templatetags/inventree_extras.py\n@@ -116,6 +116,12 @@\n return \"https://inventree.readthedocs.io/\"\n \n \[email protected]_tag()\n+def inventree_credits_url(*args, **kwargs):\n+ \"\"\" Return URL for InvenTree credits site \"\"\"\n+ return \"https://inventree.readthedocs.io/en/latest/credits/\"\n+\n+\n @register.simple_tag()\n def setting_object(key, *args, **kwargs):\n \"\"\"\n", "issue": "Embed information on 3rd-party libraries\nI would like to create a place (I suggest the \"about\" modal) where all the great used libraries get mentioned and their licenses linked. The legal depts. in bigger organisations like to see something like that to be sure there is nothing bad going on licensing wise ([like this](https://www.theregister.com/2021/03/25/ruby_rails_code/)).\r\n\r\n@SchrodingersGat \r\nThoughts / did I oversee that there already is something like that? I made a tech-demo (not polished) how it could look in [my public repo](https://github.com/matmair/InvenTree/tree/3rdparty-info).\n", "before_files": [{"content": "\"\"\" This module provides template tags for extra functionality\nover and above the built-in Django tags.\n\"\"\"\nimport os\n\nfrom django import template\nfrom django.urls import reverse\nfrom django.utils.safestring import mark_safe\nfrom django.templatetags.static import StaticNode\nfrom InvenTree import version, settings\n\nimport InvenTree.helpers\n\nfrom common.models import InvenTreeSetting, ColorTheme\n\nregister = template.Library()\n\n\[email protected]_tag()\ndef define(value, *args, **kwargs):\n \"\"\"\n Shortcut function to overcome the shortcomings of the django templating language\n\n Use as follows: {% define \"hello_world\" as hello %}\n\n Ref: https://stackoverflow.com/questions/1070398/how-to-set-a-value-of-a-variable-inside-a-template-code\n \"\"\"\n\n return value\n\n\[email protected]_tag()\ndef decimal(x, *args, **kwargs):\n \"\"\" Simplified rendering of a decimal number \"\"\"\n\n return InvenTree.helpers.decimal2string(x)\n\n\[email protected]_tag()\ndef str2bool(x, *args, **kwargs):\n \"\"\" Convert a string to a boolean value \"\"\"\n\n return InvenTree.helpers.str2bool(x)\n\n\[email protected]_tag()\ndef inrange(n, *args, **kwargs):\n \"\"\" Return range(n) for iterating through a numeric quantity \"\"\"\n return range(n)\n \n\[email protected]_tag()\ndef multiply(x, y, *args, **kwargs):\n \"\"\" Multiply two numbers together \"\"\"\n return InvenTree.helpers.decimal2string(x * y)\n\n\[email protected]_tag()\ndef add(x, y, *args, **kwargs):\n \"\"\" Add two numbers together \"\"\"\n return x + y\n \n\[email protected]_tag()\ndef part_allocation_count(build, part, *args, **kwargs):\n \"\"\" Return the total number of <part> allocated to <build> \"\"\"\n\n return InvenTree.helpers.decimal2string(build.getAllocatedQuantity(part))\n\n\[email protected]_tag()\ndef inventree_instance_name(*args, **kwargs):\n \"\"\" Return the InstanceName associated with the current database \"\"\"\n return version.inventreeInstanceName()\n\n\[email protected]_tag()\ndef inventree_title(*args, **kwargs):\n \"\"\" Return the title for the current instance - respecting the settings \"\"\"\n return version.inventreeInstanceTitle()\n\n\[email protected]_tag()\ndef inventree_version(*args, **kwargs):\n \"\"\" Return InvenTree version string \"\"\"\n return 
version.inventreeVersion()\n\n\[email protected]_tag()\ndef django_version(*args, **kwargs):\n \"\"\" Return Django version string \"\"\"\n return version.inventreeDjangoVersion()\n\n\[email protected]_tag()\ndef inventree_commit_hash(*args, **kwargs):\n \"\"\" Return InvenTree git commit hash string \"\"\"\n return version.inventreeCommitHash()\n\n\[email protected]_tag()\ndef inventree_commit_date(*args, **kwargs):\n \"\"\" Return InvenTree git commit date string \"\"\"\n return version.inventreeCommitDate()\n\n\[email protected]_tag()\ndef inventree_github_url(*args, **kwargs):\n \"\"\" Return URL for InvenTree github site \"\"\"\n return \"https://github.com/InvenTree/InvenTree/\"\n\n\[email protected]_tag()\ndef inventree_docs_url(*args, **kwargs):\n \"\"\" Return URL for InvenTree documenation site \"\"\"\n return \"https://inventree.readthedocs.io/\"\n\n\[email protected]_tag()\ndef setting_object(key, *args, **kwargs):\n \"\"\"\n Return a setting object speciifed by the given key\n (Or return None if the setting does not exist)\n \"\"\"\n\n setting = InvenTreeSetting.get_setting_object(key)\n\n return setting\n\n\[email protected]_tag()\ndef settings_value(key, *args, **kwargs):\n \"\"\"\n Return a settings value specified by the given key\n \"\"\"\n\n return InvenTreeSetting.get_setting(key)\n\n\[email protected]_tag()\ndef get_color_theme_css(username):\n try:\n user_theme = ColorTheme.objects.filter(user=username).get()\n user_theme_name = user_theme.name\n if not user_theme_name or not ColorTheme.is_valid_choice(user_theme):\n user_theme_name = 'default'\n except ColorTheme.DoesNotExist:\n user_theme_name = 'default'\n\n # Build path to CSS sheet\n inventree_css_sheet = os.path.join('css', 'color-themes', user_theme_name + '.css')\n\n # Build static URL\n inventree_css_static_url = os.path.join(settings.STATIC_URL, inventree_css_sheet)\n\n return inventree_css_static_url\n\n\[email protected]_tag()\ndef authorized_owners(group):\n \"\"\" Return authorized owners \"\"\"\n\n owners = []\n\n try:\n for owner in group.get_related_owners(include_group=True):\n owners.append(owner.owner)\n except AttributeError:\n # group is None\n pass\n except TypeError:\n # group.get_users returns None\n pass\n \n return owners\n\n\[email protected]_tag()\ndef object_link(url_name, pk, ref):\n \"\"\" Return highlighted link to object \"\"\"\n\n ref_url = reverse(url_name, kwargs={'pk': pk})\n return mark_safe('<b><a href=\"{}\">{}</a></b>'.format(ref_url, ref))\n\n\nclass I18nStaticNode(StaticNode):\n \"\"\"\n custom StaticNode\n replaces a variable named *lng* in the path with the current language\n \"\"\"\n def render(self, context):\n self.path.var = self.path.var.format(lng=context.request.LANGUAGE_CODE)\n ret = super().render(context)\n return ret\n\n\[email protected]('i18n_static')\ndef do_i18n_static(parser, token):\n \"\"\"\n Overrides normal static, adds language - lookup for prerenderd files #1485\n\n usage (like static):\n {% i18n_static path [as varname] %}\n \"\"\"\n bits = token.split_contents()\n loc_name = settings.STATICFILES_I18_PREFIX\n\n # change path to called ressource\n bits[1] = f\"'{loc_name}/{{lng}}.{bits[1][1:-1]}'\"\n token.contents = ' '.join(bits)\n return I18nStaticNode.handle_token(parser, token)\n", "path": "InvenTree/part/templatetags/inventree_extras.py"}]} | 2,589 | 174 |
gh_patches_debug_2352 | rasdani/github-patches | git_diff | ansible-collections__community.general-5383 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
community.general.xenserver_facts module error __init__() missing 1 required positional argument: 'argument_spec'
### Summary
When I try to use the [community.general.xenserver_facts module](https://docs.ansible.com/ansible/latest/collections/community/general/xenserver_facts_module.html) I get an error.
### Issue Type
Bug Report
### Component Name
xenserver_facts_module
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.13.4]
config file = /home/chris/.ansible.cfg
configured module search path = ['/home/chris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/chris/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0]
jinja version = 3.0.3
libyaml = True
```
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
# /home/chris/.ansible/collections/ansible_collections
Collection Version
----------------- -------
community.general 5.7.0
# /usr/lib/python3/dist-packages/ansible_collections
Collection Version
----------------- -------
community.general 5.6.0
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
DEFAULT_TIMEOUT(/home/chris/.ansible.cfg) = 300
```
### OS / Environment
Debian Bookworm
### Steps to Reproduce
```yaml (paste below)
- name: Gather Xen facts
community.general.xenserver_facts:
```
### Expected Results
Something other than an error...
### Actual Results
```console (paste below)
TASK [xen : Gather Xen facts] ********************************************************************************************************
task path: /home/chris/webarch/servers/galaxy/roles/xen/tasks/main.yml:2
redirecting (type: modules) community.general.xenserver_facts to community.general.cloud.misc.xenserver_facts
redirecting (type: modules) community.general.xenserver_facts to community.general.cloud.misc.xenserver_facts
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: __init__() missing 1 required positional argument: 'argument_spec'
fatal: [xen4.webarch.net]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"<stdin>\", line 107, in <module>\n File \"<stdin>\", line 99, in _ansiballz_main\n File \"<stdin>\", line 47, in invoke_module\n File \"/usr/lib/python3.9/runpy.py\", line 210, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.9/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.9/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.general.xenserver_facts_payload_ilqmvdk8/ansible_community.general.xenserver_facts_payload.zip/ansible_collections/community/general/plugins/modules/cloud/misc/xenserver_facts.py\", line 206, in <module>\n File \"/tmp/ansible_community.general.xenserver_facts_payload_ilqmvdk8/ansible_community.general.xenserver_facts_payload.zip/ansible_collections/community/general/plugins/modules/cloud/misc/xenserver_facts.py\", line 165, in main\nTypeError: __init__() missing 1 required positional argument: 'argument_spec'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/cloud/misc/xenserver_facts.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright Ansible Project
5 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
6 # SPDX-License-Identifier: GPL-3.0-or-later
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11
12 DOCUMENTATION = '''
13 ---
14 module: xenserver_facts
15 short_description: get facts reported on xenserver
16 description:
17 - Reads data out of XenAPI, can be used instead of multiple xe commands.
18 author:
19 - Andy Hill (@andyhky)
20 - Tim Rupp (@caphrim007)
21 - Robin Lee (@cheese)
22 options: {}
23 '''
24
25 EXAMPLES = '''
26 - name: Gather facts from xenserver
27 community.general.xenserver_facts:
28
29 - name: Print running VMs
30 ansible.builtin.debug:
31 msg: "{{ item }}"
32 with_items: "{{ xs_vms.keys() }}"
33 when: xs_vms[item]['power_state'] == "Running"
34
35 # Which will print:
36 #
37 # TASK: [Print running VMs] ***********************************************************
38 # skipping: [10.13.0.22] => (item=CentOS 4.7 (32-bit))
39 # ok: [10.13.0.22] => (item=Control domain on host: 10.0.13.22) => {
40 # "item": "Control domain on host: 10.0.13.22",
41 # "msg": "Control domain on host: 10.0.13.22"
42 # }
43 '''
44
45
46 HAVE_XENAPI = False
47 try:
48 import XenAPI
49 HAVE_XENAPI = True
50 except ImportError:
51 pass
52
53 from ansible.module_utils import distro
54 from ansible.module_utils.basic import AnsibleModule
55
56
57 class XenServerFacts:
58 def __init__(self):
59 self.codes = {
60 '5.5.0': 'george',
61 '5.6.100': 'oxford',
62 '6.0.0': 'boston',
63 '6.1.0': 'tampa',
64 '6.2.0': 'clearwater'
65 }
66
67 @property
68 def version(self):
69 result = distro.linux_distribution()[1]
70 return result
71
72 @property
73 def codename(self):
74 if self.version in self.codes:
75 result = self.codes[self.version]
76 else:
77 result = None
78
79 return result
80
81
82 def get_xenapi_session():
83 session = XenAPI.xapi_local()
84 session.xenapi.login_with_password('', '')
85 return session
86
87
88 def get_networks(session):
89 recs = session.xenapi.network.get_all_records()
90 networks = change_keys(recs, key='name_label')
91 return networks
92
93
94 def get_pifs(session):
95 recs = session.xenapi.PIF.get_all_records()
96 pifs = change_keys(recs, key='uuid')
97 xs_pifs = {}
98 devicenums = range(0, 7)
99 for pif in pifs.values():
100 for eth in devicenums:
101 interface_name = "eth%s" % (eth)
102 bond_name = interface_name.replace('eth', 'bond')
103 if pif['device'] == interface_name:
104 xs_pifs[interface_name] = pif
105 elif pif['device'] == bond_name:
106 xs_pifs[bond_name] = pif
107 return xs_pifs
108
109
110 def get_vlans(session):
111 recs = session.xenapi.VLAN.get_all_records()
112 return change_keys(recs, key='tag')
113
114
115 def change_keys(recs, key='uuid', filter_func=None):
116 """
117 Take a xapi dict, and make the keys the value of recs[ref][key].
118
119 Preserves the ref in rec['ref']
120
121 """
122 new_recs = {}
123
124 for ref, rec in recs.items():
125 if filter_func is not None and not filter_func(rec):
126 continue
127
128 for param_name, param_value in rec.items():
129 # param_value may be of type xmlrpc.client.DateTime,
130 # which is not simply convertable to str.
131 # Use 'value' attr to get the str value,
132 # following an example in xmlrpc.client.DateTime document
133 if hasattr(param_value, "value"):
134 rec[param_name] = param_value.value
135 new_recs[rec[key]] = rec
136 new_recs[rec[key]]['ref'] = ref
137
138 return new_recs
139
140
141 def get_host(session):
142 """Get the host"""
143 host_recs = session.xenapi.host.get_all()
144 # We only have one host, so just return its entry
145 return session.xenapi.host.get_record(host_recs[0])
146
147
148 def get_vms(session):
149 recs = session.xenapi.VM.get_all_records()
150 if not recs:
151 return None
152 vms = change_keys(recs, key='name_label')
153 return vms
154
155
156 def get_srs(session):
157 recs = session.xenapi.SR.get_all_records()
158 if not recs:
159 return None
160 srs = change_keys(recs, key='name_label')
161 return srs
162
163
164 def main():
165 module = AnsibleModule(
166 supports_check_mode=True,
167 )
168
169 if not HAVE_XENAPI:
170 module.fail_json(changed=False, msg="python xen api required for this module")
171
172 obj = XenServerFacts()
173 try:
174 session = get_xenapi_session()
175 except XenAPI.Failure as e:
176 module.fail_json(msg='%s' % e)
177
178 data = {
179 'xenserver_version': obj.version,
180 'xenserver_codename': obj.codename
181 }
182
183 xs_networks = get_networks(session)
184 xs_pifs = get_pifs(session)
185 xs_vlans = get_vlans(session)
186 xs_vms = get_vms(session)
187 xs_srs = get_srs(session)
188
189 if xs_vlans:
190 data['xs_vlans'] = xs_vlans
191 if xs_pifs:
192 data['xs_pifs'] = xs_pifs
193 if xs_networks:
194 data['xs_networks'] = xs_networks
195
196 if xs_vms:
197 data['xs_vms'] = xs_vms
198
199 if xs_srs:
200 data['xs_srs'] = xs_srs
201
202 module.exit_json(ansible_facts=data)
203
204
205 if __name__ == '__main__':
206 main()
207
[end of plugins/modules/cloud/misc/xenserver_facts.py]
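# A hypothetical illustration (not part of the module above) of how change_keys()
# re-keys a xapi record dict; the sample data is invented, real records come from
# session.xenapi.*.get_all_records() and carry many more fields.
sample_recs = {
    "OpaqueRef:net-1": {"uuid": "aaa-111", "name_label": "Pool-wide network"},
    "OpaqueRef:net-2": {"uuid": "bbb-222", "name_label": "Host internal network"},
}
by_name = change_keys(sample_recs, key="name_label")  # assumes change_keys above is in scope
assert by_name["Pool-wide network"]["ref"] == "OpaqueRef:net-1"
assert by_name["Host internal network"]["uuid"] == "bbb-222"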
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/cloud/misc/xenserver_facts.py b/plugins/modules/cloud/misc/xenserver_facts.py
--- a/plugins/modules/cloud/misc/xenserver_facts.py
+++ b/plugins/modules/cloud/misc/xenserver_facts.py
@@ -162,9 +162,7 @@
def main():
- module = AnsibleModule(
- supports_check_mode=True,
- )
+ module = AnsibleModule({}, supports_check_mode=True)
if not HAVE_XENAPI:
module.fail_json(changed=False, msg="python xen api required for this module")
| {"golden_diff": "diff --git a/plugins/modules/cloud/misc/xenserver_facts.py b/plugins/modules/cloud/misc/xenserver_facts.py\n--- a/plugins/modules/cloud/misc/xenserver_facts.py\n+++ b/plugins/modules/cloud/misc/xenserver_facts.py\n@@ -162,9 +162,7 @@\n \n \n def main():\n- module = AnsibleModule(\n- supports_check_mode=True,\n- )\n+ module = AnsibleModule({}, supports_check_mode=True)\n \n if not HAVE_XENAPI:\n module.fail_json(changed=False, msg=\"python xen api required for this module\")\n", "issue": "community.general.xenserver_facts module error __init__() missing 1 required positional argument: 'argument_spec'\n### Summary\n\nWhen I try to use the [community.general.xenserver_facts module](https://docs.ansible.com/ansible/latest/collections/community/general/xenserver_facts_module.html) I get an error.\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nxenserver_facts_module\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\nansible [core 2.13.4]\r\n config file = /home/chris/.ansible.cfg\r\n configured module search path = ['/home/chris/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3/dist-packages/ansible\r\n ansible collection location = /home/chris/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.7 (main, Oct 1 2022, 04:31:04) [GCC 12.2.0]\r\n jinja version = 3.0.3\r\n libyaml = True\r\n```\r\n\n\n### Community.general Version\n\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n\r\n# /home/chris/.ansible/collections/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 5.7.0 \r\n\r\n# /usr/lib/python3/dist-packages/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 5.6.0 \r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\nDEFAULT_TIMEOUT(/home/chris/.ansible.cfg) = 300\r\n```\r\n\n\n### OS / Environment\n\nDebian Bookworm\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: Gather Xen facts\r\n community.general.xenserver_facts:\r\n```\r\n\n\n### Expected Results\n\nSomething other than an error...\n\n### Actual Results\n\n```console (paste below)\r\nTASK [xen : Gather Xen facts] ********************************************************************************************************\r\ntask path: /home/chris/webarch/servers/galaxy/roles/xen/tasks/main.yml:2\r\nredirecting (type: modules) community.general.xenserver_facts to community.general.cloud.misc.xenserver_facts\r\nredirecting (type: modules) community.general.xenserver_facts to community.general.cloud.misc.xenserver_facts\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: __init__() missing 1 required positional argument: 'argument_spec'\r\nfatal: [xen4.webarch.net]: FAILED! 
=> {\"changed\": false, \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"<stdin>\\\", line 107, in <module>\\n File \\\"<stdin>\\\", line 99, in _ansiballz_main\\n File \\\"<stdin>\\\", line 47, in invoke_module\\n File \\\"/usr/lib/python3.9/runpy.py\\\", line 210, in run_module\\n return _run_module_code(code, init_globals, run_name, mod_spec)\\n File \\\"/usr/lib/python3.9/runpy.py\\\", line 97, in _run_module_code\\n _run_code(code, mod_globals, init_globals,\\n File \\\"/usr/lib/python3.9/runpy.py\\\", line 87, in _run_code\\n exec(code, run_globals)\\n File \\\"/tmp/ansible_community.general.xenserver_facts_payload_ilqmvdk8/ansible_community.general.xenserver_facts_payload.zip/ansible_collections/community/general/plugins/modules/cloud/misc/xenserver_facts.py\\\", line 206, in <module>\\n File \\\"/tmp/ansible_community.general.xenserver_facts_payload_ilqmvdk8/ansible_community.general.xenserver_facts_payload.zip/ansible_collections/community/general/plugins/modules/cloud/misc/xenserver_facts.py\\\", line 165, in main\\nTypeError: __init__() missing 1 required positional argument: 'argument_spec'\\n\", \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\\nSee stdout/stderr for the exact error\", \"rc\": 1}\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n#\n# Copyright Ansible Project\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: xenserver_facts\nshort_description: get facts reported on xenserver\ndescription:\n - Reads data out of XenAPI, can be used instead of multiple xe commands.\nauthor:\n - Andy Hill (@andyhky)\n - Tim Rupp (@caphrim007)\n - Robin Lee (@cheese)\noptions: {}\n'''\n\nEXAMPLES = '''\n- name: Gather facts from xenserver\n community.general.xenserver_facts:\n\n- name: Print running VMs\n ansible.builtin.debug:\n msg: \"{{ item }}\"\n with_items: \"{{ xs_vms.keys() }}\"\n when: xs_vms[item]['power_state'] == \"Running\"\n\n# Which will print:\n#\n# TASK: [Print running VMs] ***********************************************************\n# skipping: [10.13.0.22] => (item=CentOS 4.7 (32-bit))\n# ok: [10.13.0.22] => (item=Control domain on host: 10.0.13.22) => {\n# \"item\": \"Control domain on host: 10.0.13.22\",\n# \"msg\": \"Control domain on host: 10.0.13.22\"\n# }\n'''\n\n\nHAVE_XENAPI = False\ntry:\n import XenAPI\n HAVE_XENAPI = True\nexcept ImportError:\n pass\n\nfrom ansible.module_utils import distro\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nclass XenServerFacts:\n def __init__(self):\n self.codes = {\n '5.5.0': 'george',\n '5.6.100': 'oxford',\n '6.0.0': 'boston',\n '6.1.0': 'tampa',\n '6.2.0': 'clearwater'\n }\n\n @property\n def version(self):\n result = distro.linux_distribution()[1]\n return result\n\n @property\n def codename(self):\n if self.version in self.codes:\n result = self.codes[self.version]\n else:\n result = None\n\n return result\n\n\ndef get_xenapi_session():\n session = XenAPI.xapi_local()\n session.xenapi.login_with_password('', '')\n return session\n\n\ndef get_networks(session):\n recs = session.xenapi.network.get_all_records()\n networks = change_keys(recs, key='name_label')\n return networks\n\n\ndef get_pifs(session):\n recs = 
session.xenapi.PIF.get_all_records()\n pifs = change_keys(recs, key='uuid')\n xs_pifs = {}\n devicenums = range(0, 7)\n for pif in pifs.values():\n for eth in devicenums:\n interface_name = \"eth%s\" % (eth)\n bond_name = interface_name.replace('eth', 'bond')\n if pif['device'] == interface_name:\n xs_pifs[interface_name] = pif\n elif pif['device'] == bond_name:\n xs_pifs[bond_name] = pif\n return xs_pifs\n\n\ndef get_vlans(session):\n recs = session.xenapi.VLAN.get_all_records()\n return change_keys(recs, key='tag')\n\n\ndef change_keys(recs, key='uuid', filter_func=None):\n \"\"\"\n Take a xapi dict, and make the keys the value of recs[ref][key].\n\n Preserves the ref in rec['ref']\n\n \"\"\"\n new_recs = {}\n\n for ref, rec in recs.items():\n if filter_func is not None and not filter_func(rec):\n continue\n\n for param_name, param_value in rec.items():\n # param_value may be of type xmlrpc.client.DateTime,\n # which is not simply convertable to str.\n # Use 'value' attr to get the str value,\n # following an example in xmlrpc.client.DateTime document\n if hasattr(param_value, \"value\"):\n rec[param_name] = param_value.value\n new_recs[rec[key]] = rec\n new_recs[rec[key]]['ref'] = ref\n\n return new_recs\n\n\ndef get_host(session):\n \"\"\"Get the host\"\"\"\n host_recs = session.xenapi.host.get_all()\n # We only have one host, so just return its entry\n return session.xenapi.host.get_record(host_recs[0])\n\n\ndef get_vms(session):\n recs = session.xenapi.VM.get_all_records()\n if not recs:\n return None\n vms = change_keys(recs, key='name_label')\n return vms\n\n\ndef get_srs(session):\n recs = session.xenapi.SR.get_all_records()\n if not recs:\n return None\n srs = change_keys(recs, key='name_label')\n return srs\n\n\ndef main():\n module = AnsibleModule(\n supports_check_mode=True,\n )\n\n if not HAVE_XENAPI:\n module.fail_json(changed=False, msg=\"python xen api required for this module\")\n\n obj = XenServerFacts()\n try:\n session = get_xenapi_session()\n except XenAPI.Failure as e:\n module.fail_json(msg='%s' % e)\n\n data = {\n 'xenserver_version': obj.version,\n 'xenserver_codename': obj.codename\n }\n\n xs_networks = get_networks(session)\n xs_pifs = get_pifs(session)\n xs_vlans = get_vlans(session)\n xs_vms = get_vms(session)\n xs_srs = get_srs(session)\n\n if xs_vlans:\n data['xs_vlans'] = xs_vlans\n if xs_pifs:\n data['xs_pifs'] = xs_pifs\n if xs_networks:\n data['xs_networks'] = xs_networks\n\n if xs_vms:\n data['xs_vms'] = xs_vms\n\n if xs_srs:\n data['xs_srs'] = xs_srs\n\n module.exit_json(ansible_facts=data)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/cloud/misc/xenserver_facts.py"}]} | 3,489 | 128 |
gh_patches_debug_16197 | rasdani/github-patches | git_diff | gratipay__gratipay.com-1971 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
claimed users still have locked accounts elsewhere
``` sql
select platform, claimed_time is null AS claimed, count(claimed_time is null) as nlocked
from elsewhere
join participants on username = participant
where is_locked
group by platform, claimed_time is null
order by platform;
```
```
platform | claimed | nlocked
----------+---------+---------
github | t | 17
github | f | 5
twitter | t | 4
twitter | f | 4
(4 rows)
```
I would expect claimed accounts to not be marked is_locked in elsewhere.
Here's a query to see the deets:
``` sql
select platform, user_info -> 'screen_name' AS screen_name, user_info->'login' AS login, claimed_time
from elsewhere
join participants on username = participant
where is_locked
order by platform, claimed_time, screen_name, login;
```
</issue>
<code>
[start of gittip/models/__init__.py]
1 """
2
3 The most important object in the Gittip object model is Participant, and the
4 second most important one is Community. There are a few others, but those are
5 the most important two. Participant, in particular, is at the center of
6 everything on Gittip.
7
8 """
9 from postgres import Postgres
10
11 class GittipDB(Postgres):
12
13 def self_check(self):
14 """
15 Runs all available self checks on the database.
16 """
17 self._check_balances()
18 self._check_tips()
19 self._check_orphans()
20 self._check_orphans_no_tips()
21 self._check_paydays_volumes()
22
23 def _check_tips(self):
24 """
25 Checks that there are no rows in tips with duplicate (tipper, tippee, mtime).
26
27 https://github.com/gittip/www.gittip.com/issues/1704
28 """
29 conflicting_tips = self.one("""
30 SELECT count(*)
31 FROM
32 (
33 SELECT * FROM tips
34 EXCEPT
35 SELECT DISTINCT ON(tipper, tippee, mtime) *
36 FROM tips
37 ORDER BY tipper, tippee, mtime
38 ) AS foo
39 """)
40 assert conflicting_tips == 0
41
42 def _check_balances(self):
43 """
44 Recalculates balances for all participants from transfers and exchanges.
45
46 https://github.com/gittip/www.gittip.com/issues/1118
47 """
48 with self.get_cursor() as cursor:
49 if cursor.one("select exists (select * from paydays where ts_end < ts_start) as running"):
50 # payday is running and the query bellow does not account for pending
51                 # payday is running and the query below does not account for pending
52 b = cursor.one("""
53 select count(*)
54 from (
55 select username, sum(a) as balance
56 from (
57 select participant as username, sum(amount) as a
58 from exchanges
59 where amount > 0
60 group by participant
61
62 union
63
64 select participant as username, sum(amount-fee) as a
65 from exchanges
66 where amount < 0
67 group by participant
68
69 union
70
71 select tipper as username, sum(-amount) as a
72 from transfers
73 group by tipper
74
75 union
76
77 select tippee as username, sum(amount) as a
78 from transfers
79 group by tippee
80 ) as foo
81 group by username
82
83 except
84
85 select username, balance
86 from participants
87 ) as foo2
88 """)
89 assert b == 0, "conflicting balances: {}".format(b)
90
91 def _check_orphans(self):
92 """
93 Finds participants that
94         * do not have a corresponding elsewhere account
95         * have not been absorbed by another participant
96
97 These are broken because new participants arise from elsewhere
98 and elsewhere is detached only by take over which makes a note
99 in absorptions if it removes the last elsewhere account.
100
101 Especially bad case is when also claimed_time is set because
102 there must have been elsewhere account attached and used to sign in.
103
104 https://github.com/gittip/www.gittip.com/issues/617
105 """
106 orphans = self.all("""
107 select username
108 from participants
109 where not exists (select * from elsewhere where elsewhere.participant=username)
110 and not exists (select * from absorptions where archived_as=username)
111 """)
112 assert len(orphans) == 0, "missing elsewheres: {}".format(list(orphans))
113
114 def _check_orphans_no_tips(self):
115 """
116 Finds participants
117 * without elsewhere account attached
118 * having non zero outstanding tip
119
120 This should not happen because when we remove the last elsewhere account
121 in take_over we also zero out all tips.
122 """
123 tips_with_orphans = self.all("""
124 WITH orphans AS (
125 SELECT username FROM participants
126 WHERE NOT EXISTS (SELECT 1 FROM elsewhere WHERE participant=username)
127 ), valid_tips AS (
128 SELECT * FROM (
129 SELECT DISTINCT ON (tipper, tippee) *
130 FROM tips
131 ORDER BY tipper, tippee, mtime DESC
132 ) AS foo
133 WHERE amount > 0
134 )
135 SELECT id FROM valid_tips
136 WHERE tipper IN (SELECT * FROM orphans)
137 OR tippee IN (SELECT * FROM orphans)
138 """)
139 known = set([25206, 46266]) # '4c074000c7bc', 'naderman', '3.00'
140 real = set(tips_with_orphans) - known
141 assert len(real) == 0, real
142
143 def _check_paydays_volumes(self):
144 """
145 Recalculate *_volume fields in paydays table using exchanges table.
146 """
147 with self.get_cursor() as cursor:
148 if cursor.one("select exists (select * from paydays where ts_end < ts_start) as running"):
149 # payday is running
150 return
151 charge_volume = cursor.all("""
152 select * from (
153 select id, ts_start, charge_volume, (
154 select coalesce(sum(amount+fee), 0)
155 from exchanges
156 where timestamp > ts_start
157 and timestamp < ts_end
158 and amount > 0
159 ) as ref
160 from paydays
161 order by id
162 ) as foo
163 where charge_volume != ref
164 """)
165 assert len(charge_volume) == 0
166
167 charge_fees_volume = cursor.all("""
168 select * from (
169 select id, ts_start, charge_fees_volume, (
170 select coalesce(sum(fee), 0)
171 from exchanges
172 where timestamp > ts_start
173 and timestamp < ts_end
174 and amount > 0
175 ) as ref
176 from paydays
177 order by id
178 ) as foo
179 where charge_fees_volume != ref
180 """)
181 assert len(charge_fees_volume) == 0
182
183 ach_volume = cursor.all("""
184 select * from (
185 select id, ts_start, ach_volume, (
186 select coalesce(sum(amount), 0)
187 from exchanges
188 where timestamp > ts_start
189 and timestamp < ts_end
190 and amount < 0
191 ) as ref
192 from paydays
193 order by id
194 ) as foo
195 where ach_volume != ref
196 """)
197 assert len(ach_volume) == 0
198
199 ach_fees_volume = cursor.all("""
200 select * from (
201 select id, ts_start, ach_fees_volume, (
202 select coalesce(sum(fee), 0)
203 from exchanges
204 where timestamp > ts_start
205 and timestamp < ts_end
206 and amount < 0
207 ) as ref
208 from paydays
209 order by id
210 ) as foo
211 where ach_fees_volume != ref
212 """)
213 assert len(ach_fees_volume) == 0
214 #
215
[end of gittip/models/__init__.py]
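# A sketch of the additional invariant check the issue asks for, written in the
# same style as the _check_* helpers above; assume it would live on GittipDB and
# be called from self_check(). The schema used (elsewhere.participant/is_locked,
# participants.username/claimed_time) is the one shown in the issue's queries.
def _check_claimed_not_locked(self):
    """Claimed participants should not still have locked elsewhere accounts."""
    locked = self.all("""
        SELECT participant
          FROM elsewhere
         WHERE is_locked
           AND EXISTS (SELECT 1
                         FROM participants
                        WHERE username = participant
                          AND claimed_time IS NOT NULL)
    """)
    assert len(locked) == 0, "claimed but still locked elsewhere: {}".format(locked)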
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gittip/models/__init__.py b/gittip/models/__init__.py
--- a/gittip/models/__init__.py
+++ b/gittip/models/__init__.py
@@ -19,6 +19,7 @@
self._check_orphans()
self._check_orphans_no_tips()
self._check_paydays_volumes()
+ self._check_claimed_not_locked()
def _check_tips(self):
"""
@@ -211,4 +212,18 @@
where ach_fees_volume != ref
""")
assert len(ach_fees_volume) == 0
+
+ def _check_claimed_not_locked(self):
+ locked = self.all("""
+ SELECT participant
+ FROM elsewhere
+ WHERE EXISTS (
+ SELECT *
+ FROM participants
+ WHERE username=participant
+ AND claimed_time IS NOT NULL
+ ) AND is_locked
+ """)
+ assert len(locked) == 0
+
#
| {"golden_diff": "diff --git a/gittip/models/__init__.py b/gittip/models/__init__.py\n--- a/gittip/models/__init__.py\n+++ b/gittip/models/__init__.py\n@@ -19,6 +19,7 @@\n self._check_orphans()\n self._check_orphans_no_tips()\n self._check_paydays_volumes()\n+ self._check_claimed_not_locked()\n \n def _check_tips(self):\n \"\"\"\n@@ -211,4 +212,18 @@\n where ach_fees_volume != ref\n \"\"\")\n assert len(ach_fees_volume) == 0\n+\n+ def _check_claimed_not_locked(self):\n+ locked = self.all(\"\"\"\n+ SELECT participant\n+ FROM elsewhere\n+ WHERE EXISTS (\n+ SELECT *\n+ FROM participants\n+ WHERE username=participant\n+ AND claimed_time IS NOT NULL\n+ ) AND is_locked\n+ \"\"\")\n+ assert len(locked) == 0\n+\n #\n", "issue": "claimed users still have locked accounts elsewhere\n``` sql\nselect platform, claimed_time is null AS claimed, count(claimed_time is null) as nlocked \nfrom elsewhere \njoin participants on username = participant \nwhere is_locked \ngroup by platform, claimed_time is null \norder by platform;\n```\n\n```\n platform | claimed | nlocked \n----------+---------+---------\n github | t | 17\n github | f | 5\n twitter | t | 4\n twitter | f | 4\n(4 rows)\n```\n\nI would expect claimed accounts to not be marked is_locked in elsewhere.\n\nHere's a query to see the deets:\n\n``` sql\nselect platform, user_info -> 'screen_name' AS screen_name, user_info->'login' AS login, claimed_time \nfrom elsewhere \njoin participants on username = participant\nwhere is_locked \norder by platform, claimed_time, screen_name, login;\n```\n\n", "before_files": [{"content": "\"\"\"\n\nThe most important object in the Gittip object model is Participant, and the\nsecond most important one is Ccommunity. There are a few others, but those are\nthe most important two. 
Participant, in particular, is at the center of\neverything on Gittip.\n\n\"\"\"\nfrom postgres import Postgres\n\nclass GittipDB(Postgres):\n\n def self_check(self):\n \"\"\"\n Runs all available self checks on the database.\n \"\"\"\n self._check_balances()\n self._check_tips()\n self._check_orphans()\n self._check_orphans_no_tips()\n self._check_paydays_volumes()\n\n def _check_tips(self):\n \"\"\"\n Checks that there are no rows in tips with duplicate (tipper, tippee, mtime).\n\n https://github.com/gittip/www.gittip.com/issues/1704\n \"\"\"\n conflicting_tips = self.one(\"\"\"\n SELECT count(*)\n FROM\n (\n SELECT * FROM tips\n EXCEPT\n SELECT DISTINCT ON(tipper, tippee, mtime) *\n FROM tips\n ORDER BY tipper, tippee, mtime\n ) AS foo\n \"\"\")\n assert conflicting_tips == 0\n\n def _check_balances(self):\n \"\"\"\n Recalculates balances for all participants from transfers and exchanges.\n\n https://github.com/gittip/www.gittip.com/issues/1118\n \"\"\"\n with self.get_cursor() as cursor:\n if cursor.one(\"select exists (select * from paydays where ts_end < ts_start) as running\"):\n # payday is running and the query bellow does not account for pending\n return\n b = cursor.one(\"\"\"\n select count(*)\n from (\n select username, sum(a) as balance\n from (\n select participant as username, sum(amount) as a\n from exchanges\n where amount > 0\n group by participant\n\n union\n\n select participant as username, sum(amount-fee) as a\n from exchanges\n where amount < 0\n group by participant\n\n union\n\n select tipper as username, sum(-amount) as a\n from transfers\n group by tipper\n\n union\n\n select tippee as username, sum(amount) as a\n from transfers\n group by tippee\n ) as foo\n group by username\n\n except\n\n select username, balance\n from participants\n ) as foo2\n \"\"\")\n assert b == 0, \"conflicting balances: {}\".format(b)\n\n def _check_orphans(self):\n \"\"\"\n Finds participants that\n * does not have corresponding elsewhere account\n * have not been absorbed by other participant\n\n These are broken because new participants arise from elsewhere\n and elsewhere is detached only by take over which makes a note\n in absorptions if it removes the last elsewhere account.\n\n Especially bad case is when also claimed_time is set because\n there must have been elsewhere account attached and used to sign in.\n\n https://github.com/gittip/www.gittip.com/issues/617\n \"\"\"\n orphans = self.all(\"\"\"\n select username\n from participants\n where not exists (select * from elsewhere where elsewhere.participant=username)\n and not exists (select * from absorptions where archived_as=username)\n \"\"\")\n assert len(orphans) == 0, \"missing elsewheres: {}\".format(list(orphans))\n\n def _check_orphans_no_tips(self):\n \"\"\"\n Finds participants\n * without elsewhere account attached\n * having non zero outstanding tip\n\n This should not happen because when we remove the last elsewhere account\n in take_over we also zero out all tips.\n \"\"\"\n tips_with_orphans = self.all(\"\"\"\n WITH orphans AS (\n SELECT username FROM participants\n WHERE NOT EXISTS (SELECT 1 FROM elsewhere WHERE participant=username)\n ), valid_tips AS (\n SELECT * FROM (\n SELECT DISTINCT ON (tipper, tippee) *\n FROM tips\n ORDER BY tipper, tippee, mtime DESC\n ) AS foo\n WHERE amount > 0\n )\n SELECT id FROM valid_tips\n WHERE tipper IN (SELECT * FROM orphans)\n OR tippee IN (SELECT * FROM orphans)\n \"\"\")\n known = set([25206, 46266]) # '4c074000c7bc', 'naderman', '3.00'\n real = set(tips_with_orphans) - 
known\n assert len(real) == 0, real\n\n def _check_paydays_volumes(self):\n \"\"\"\n Recalculate *_volume fields in paydays table using exchanges table.\n \"\"\"\n with self.get_cursor() as cursor:\n if cursor.one(\"select exists (select * from paydays where ts_end < ts_start) as running\"):\n # payday is running\n return\n charge_volume = cursor.all(\"\"\"\n select * from (\n select id, ts_start, charge_volume, (\n select coalesce(sum(amount+fee), 0)\n from exchanges\n where timestamp > ts_start\n and timestamp < ts_end\n and amount > 0\n ) as ref\n from paydays\n order by id\n ) as foo\n where charge_volume != ref\n \"\"\")\n assert len(charge_volume) == 0\n\n charge_fees_volume = cursor.all(\"\"\"\n select * from (\n select id, ts_start, charge_fees_volume, (\n select coalesce(sum(fee), 0)\n from exchanges\n where timestamp > ts_start\n and timestamp < ts_end\n and amount > 0\n ) as ref\n from paydays\n order by id\n ) as foo\n where charge_fees_volume != ref\n \"\"\")\n assert len(charge_fees_volume) == 0\n\n ach_volume = cursor.all(\"\"\"\n select * from (\n select id, ts_start, ach_volume, (\n select coalesce(sum(amount), 0)\n from exchanges\n where timestamp > ts_start\n and timestamp < ts_end\n and amount < 0\n ) as ref\n from paydays\n order by id\n ) as foo\n where ach_volume != ref\n \"\"\")\n assert len(ach_volume) == 0\n\n ach_fees_volume = cursor.all(\"\"\"\n select * from (\n select id, ts_start, ach_fees_volume, (\n select coalesce(sum(fee), 0)\n from exchanges\n where timestamp > ts_start\n and timestamp < ts_end\n and amount < 0\n ) as ref\n from paydays\n order by id\n ) as foo\n where ach_fees_volume != ref\n \"\"\")\n assert len(ach_fees_volume) == 0\n#\n", "path": "gittip/models/__init__.py"}]} | 2,783 | 226 |
gh_patches_debug_40231 | rasdani/github-patches | git_diff | getnikola__nikola-1182 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
STORY_INDEX is limited
It only creates index pages for the top folder and not for any subfolders.
</issue>
<code>
[start of nikola/plugins/task/indexes.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2014 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import unicode_literals
28 import glob
29 import itertools
30 import os
31
32 from nikola.plugin_categories import Task
33 from nikola.utils import config_changed
34
35
36 class Indexes(Task):
37 """Render the blog indexes."""
38
39 name = "render_indexes"
40
41 def set_site(self, site):
42 site.register_path_handler('index', self.index_path)
43 return super(Indexes, self).set_site(site)
44
45 def gen_tasks(self):
46 self.site.scan_posts()
47 yield self.group_task()
48
49 kw = {
50 "translations": self.site.config['TRANSLATIONS'],
51 "index_display_post_count":
52 self.site.config['INDEX_DISPLAY_POST_COUNT'],
53 "messages": self.site.MESSAGES,
54 "index_teasers": self.site.config['INDEX_TEASERS'],
55 "output_folder": self.site.config['OUTPUT_FOLDER'],
56 "filters": self.site.config['FILTERS'],
57 "show_untranslated_posts": self.site.config['SHOW_UNTRANSLATED_POSTS'],
58 "indexes_title": self.site.config['INDEXES_TITLE'],
59 "indexes_pages": self.site.config['INDEXES_PAGES'],
60 "indexes_pages_main": self.site.config['INDEXES_PAGES_MAIN'],
61 "blog_title": self.site.config["BLOG_TITLE"],
62 }
63
64 template_name = "index.tmpl"
65 posts = self.site.posts
66 for lang in kw["translations"]:
67 # Split in smaller lists
68 lists = []
69 if kw["show_untranslated_posts"]:
70 filtered_posts = posts
71 else:
72 filtered_posts = [x for x in posts if x.is_translation_available(lang)]
73 lists.append(filtered_posts[:kw["index_display_post_count"]])
74 filtered_posts = filtered_posts[kw["index_display_post_count"]:]
75 while filtered_posts:
76 lists.append(filtered_posts[-kw["index_display_post_count"]:])
77 filtered_posts = filtered_posts[:-kw["index_display_post_count"]]
78 num_pages = len(lists)
79 for i, post_list in enumerate(lists):
80 context = {}
81 indexes_title = kw['indexes_title'] or kw['blog_title'](lang)
82 if kw["indexes_pages_main"]:
83 ipages_i = i + 1
84 ipages_msg = "page %d"
85 else:
86 ipages_i = i
87 ipages_msg = "old posts, page %d"
88 if kw["indexes_pages"]:
89 indexes_pages = kw["indexes_pages"] % ipages_i
90 else:
91 indexes_pages = " (" + \
92 kw["messages"][lang][ipages_msg] % ipages_i + ")"
93 if i > 0 or kw["indexes_pages_main"]:
94 context["title"] = indexes_title + indexes_pages
95 else:
96 context["title"] = indexes_title
97 context["prevlink"] = None
98 context["nextlink"] = None
99 context['index_teasers'] = kw['index_teasers']
100 if i == 0: # index.html page
101 context["prevlink"] = None
102 if num_pages > 1:
103 context["nextlink"] = "index-{0}.html".format(num_pages - 1)
104 else:
105 context["nextlink"] = None
106 else: # index-x.html pages
107 if i > 1:
108 context["nextlink"] = "index-{0}.html".format(i - 1)
109 if i < num_pages - 1:
110 context["prevlink"] = "index-{0}.html".format(i + 1)
111 elif i == num_pages - 1:
112 context["prevlink"] = "index.html"
113 context["permalink"] = self.site.link("index", i, lang)
114 output_name = os.path.join(
115 kw['output_folder'], self.site.path("index", i,
116 lang))
117 task = self.site.generic_post_list_renderer(
118 lang,
119 post_list,
120 output_name,
121 template_name,
122 kw['filters'],
123 context,
124 )
125 task_cfg = {1: task['uptodate'][0].config, 2: kw}
126 task['uptodate'] = [config_changed(task_cfg)]
127 task['basename'] = 'render_indexes'
128 yield task
129
130 if not self.site.config["STORY_INDEX"]:
131 return
132 kw = {
133 "translations": self.site.config['TRANSLATIONS'],
134 "post_pages": self.site.config["post_pages"],
135 "output_folder": self.site.config['OUTPUT_FOLDER'],
136 "filters": self.site.config['FILTERS'],
137 }
138 template_name = "list.tmpl"
139 for lang in kw["translations"]:
140 # Need to group by folder to avoid duplicated tasks (Issue #758)
141 for dirname, wildcards in itertools.groupby((w for w, d, x, i in kw["post_pages"] if not i), os.path.dirname):
142 context = {}
143 # vim/pyflakes thinks it's unused
144 # src_dir = os.path.dirname(wildcard)
145 files = []
146 for wildcard in wildcards:
147 files += glob.glob(wildcard)
148 post_list = [self.site.global_data[p] for p in files]
149 output_name = os.path.join(kw["output_folder"],
150 self.site.path("post_path",
151 wildcard,
152 lang)).encode('utf8')
153 context["items"] = [(post.title(lang), post.permalink(lang))
154 for post in post_list]
155 task = self.site.generic_post_list_renderer(lang, post_list,
156 output_name,
157 template_name,
158 kw['filters'],
159 context)
160 task_cfg = {1: task['uptodate'][0].config, 2: kw}
161 task['uptodate'] = [config_changed(task_cfg)]
162 task['basename'] = self.name
163 yield task
164
165 def index_path(self, name, lang):
166 if name not in [None, 0]:
167 return [_f for _f in [self.site.config['TRANSLATIONS'][lang],
168 self.site.config['INDEX_PATH'],
169 'index-{0}.html'.format(name)] if _f]
170 else:
171 return [_f for _f in [self.site.config['TRANSLATIONS'][lang],
172 self.site.config['INDEX_PATH'],
173 self.site.config['INDEX_FILE']]
174 if _f]
175
[end of nikola/plugins/task/indexes.py]
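# One reading of the STORY_INDEX limitation: the loop above groups post_pages
# wildcards by their dirname, so only one list page per top-level folder is ever
# rendered. A hedged sketch of grouping by each page's destination subfolder
# instead (helper name is hypothetical; is_post and destination_path(lang) are
# existing Post attributes):
from collections import defaultdict
import os

def group_stories_by_folder(pages, lang):
    groups = defaultdict(list)
    for page in pages:
        if not page.is_post:
            groups[os.path.dirname(page.destination_path(lang))].append(page)
    return groups  # render one index per key to cover subfolders too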
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py
--- a/nikola/plugins/task/indexes.py
+++ b/nikola/plugins/task/indexes.py
@@ -25,8 +25,7 @@
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from __future__ import unicode_literals
-import glob
-import itertools
+from collections import defaultdict
import os
from nikola.plugin_categories import Task
@@ -134,33 +133,33 @@
"post_pages": self.site.config["post_pages"],
"output_folder": self.site.config['OUTPUT_FOLDER'],
"filters": self.site.config['FILTERS'],
+ "index_file": self.site.config['INDEX_FILE'],
}
template_name = "list.tmpl"
for lang in kw["translations"]:
# Need to group by folder to avoid duplicated tasks (Issue #758)
- for dirname, wildcards in itertools.groupby((w for w, d, x, i in kw["post_pages"] if not i), os.path.dirname):
- context = {}
- # vim/pyflakes thinks it's unused
- # src_dir = os.path.dirname(wildcard)
- files = []
- for wildcard in wildcards:
- files += glob.glob(wildcard)
- post_list = [self.site.global_data[p] for p in files]
- output_name = os.path.join(kw["output_folder"],
- self.site.path("post_path",
- wildcard,
- lang)).encode('utf8')
- context["items"] = [(post.title(lang), post.permalink(lang))
- for post in post_list]
- task = self.site.generic_post_list_renderer(lang, post_list,
- output_name,
- template_name,
- kw['filters'],
- context)
- task_cfg = {1: task['uptodate'][0].config, 2: kw}
- task['uptodate'] = [config_changed(task_cfg)]
- task['basename'] = self.name
- yield task
+ # Group all pages by path prefix
+ groups = defaultdict(list)
+ for p in self.site.timeline:
+ if not p.is_post:
+ dirname = os.path.dirname(p.destination_path(lang))
+ groups[dirname].append(p)
+ for dirname, post_list in groups.items():
+ context = {}
+ context["items"] = [
+ (post.title(lang), post.permalink(lang))
+ for post in post_list
+ ]
+ output_name = os.path.join(kw['output_folder'], dirname, kw['index_file'])
+ task = self.site.generic_post_list_renderer(lang, post_list,
+ output_name,
+ template_name,
+ kw['filters'],
+ context)
+ task_cfg = {1: task['uptodate'][0].config, 2: kw}
+ task['uptodate'] = [config_changed(task_cfg)]
+ task['basename'] = self.name
+ yield task
def index_path(self, name, lang):
if name not in [None, 0]:
| {"golden_diff": "diff --git a/nikola/plugins/task/indexes.py b/nikola/plugins/task/indexes.py\n--- a/nikola/plugins/task/indexes.py\n+++ b/nikola/plugins/task/indexes.py\n@@ -25,8 +25,7 @@\n # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n \n from __future__ import unicode_literals\n-import glob\n-import itertools\n+from collections import defaultdict\n import os\n \n from nikola.plugin_categories import Task\n@@ -134,33 +133,33 @@\n \"post_pages\": self.site.config[\"post_pages\"],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n+ \"index_file\": self.site.config['INDEX_FILE'],\n }\n template_name = \"list.tmpl\"\n for lang in kw[\"translations\"]:\n # Need to group by folder to avoid duplicated tasks (Issue #758)\n- for dirname, wildcards in itertools.groupby((w for w, d, x, i in kw[\"post_pages\"] if not i), os.path.dirname):\n- context = {}\n- # vim/pyflakes thinks it's unused\n- # src_dir = os.path.dirname(wildcard)\n- files = []\n- for wildcard in wildcards:\n- files += glob.glob(wildcard)\n- post_list = [self.site.global_data[p] for p in files]\n- output_name = os.path.join(kw[\"output_folder\"],\n- self.site.path(\"post_path\",\n- wildcard,\n- lang)).encode('utf8')\n- context[\"items\"] = [(post.title(lang), post.permalink(lang))\n- for post in post_list]\n- task = self.site.generic_post_list_renderer(lang, post_list,\n- output_name,\n- template_name,\n- kw['filters'],\n- context)\n- task_cfg = {1: task['uptodate'][0].config, 2: kw}\n- task['uptodate'] = [config_changed(task_cfg)]\n- task['basename'] = self.name\n- yield task\n+ # Group all pages by path prefix\n+ groups = defaultdict(list)\n+ for p in self.site.timeline:\n+ if not p.is_post:\n+ dirname = os.path.dirname(p.destination_path(lang))\n+ groups[dirname].append(p)\n+ for dirname, post_list in groups.items():\n+ context = {}\n+ context[\"items\"] = [\n+ (post.title(lang), post.permalink(lang))\n+ for post in post_list\n+ ]\n+ output_name = os.path.join(kw['output_folder'], dirname, kw['index_file'])\n+ task = self.site.generic_post_list_renderer(lang, post_list,\n+ output_name,\n+ template_name,\n+ kw['filters'],\n+ context)\n+ task_cfg = {1: task['uptodate'][0].config, 2: kw}\n+ task['uptodate'] = [config_changed(task_cfg)]\n+ task['basename'] = self.name\n+ yield task\n \n def index_path(self, name, lang):\n if name not in [None, 0]:\n", "issue": "STORY_INDEX is limited\nIt only creates index pages for the top folder and not for any subfolders.\n\nSTORY_INDEX is limited\nIt only creates index pages for the top folder and not for any subfolders.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2014 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND 
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import unicode_literals\nimport glob\nimport itertools\nimport os\n\nfrom nikola.plugin_categories import Task\nfrom nikola.utils import config_changed\n\n\nclass Indexes(Task):\n \"\"\"Render the blog indexes.\"\"\"\n\n name = \"render_indexes\"\n\n def set_site(self, site):\n site.register_path_handler('index', self.index_path)\n return super(Indexes, self).set_site(site)\n\n def gen_tasks(self):\n self.site.scan_posts()\n yield self.group_task()\n\n kw = {\n \"translations\": self.site.config['TRANSLATIONS'],\n \"index_display_post_count\":\n self.site.config['INDEX_DISPLAY_POST_COUNT'],\n \"messages\": self.site.MESSAGES,\n \"index_teasers\": self.site.config['INDEX_TEASERS'],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n \"show_untranslated_posts\": self.site.config['SHOW_UNTRANSLATED_POSTS'],\n \"indexes_title\": self.site.config['INDEXES_TITLE'],\n \"indexes_pages\": self.site.config['INDEXES_PAGES'],\n \"indexes_pages_main\": self.site.config['INDEXES_PAGES_MAIN'],\n \"blog_title\": self.site.config[\"BLOG_TITLE\"],\n }\n\n template_name = \"index.tmpl\"\n posts = self.site.posts\n for lang in kw[\"translations\"]:\n # Split in smaller lists\n lists = []\n if kw[\"show_untranslated_posts\"]:\n filtered_posts = posts\n else:\n filtered_posts = [x for x in posts if x.is_translation_available(lang)]\n lists.append(filtered_posts[:kw[\"index_display_post_count\"]])\n filtered_posts = filtered_posts[kw[\"index_display_post_count\"]:]\n while filtered_posts:\n lists.append(filtered_posts[-kw[\"index_display_post_count\"]:])\n filtered_posts = filtered_posts[:-kw[\"index_display_post_count\"]]\n num_pages = len(lists)\n for i, post_list in enumerate(lists):\n context = {}\n indexes_title = kw['indexes_title'] or kw['blog_title'](lang)\n if kw[\"indexes_pages_main\"]:\n ipages_i = i + 1\n ipages_msg = \"page %d\"\n else:\n ipages_i = i\n ipages_msg = \"old posts, page %d\"\n if kw[\"indexes_pages\"]:\n indexes_pages = kw[\"indexes_pages\"] % ipages_i\n else:\n indexes_pages = \" (\" + \\\n kw[\"messages\"][lang][ipages_msg] % ipages_i + \")\"\n if i > 0 or kw[\"indexes_pages_main\"]:\n context[\"title\"] = indexes_title + indexes_pages\n else:\n context[\"title\"] = indexes_title\n context[\"prevlink\"] = None\n context[\"nextlink\"] = None\n context['index_teasers'] = kw['index_teasers']\n if i == 0: # index.html page\n context[\"prevlink\"] = None\n if num_pages > 1:\n context[\"nextlink\"] = \"index-{0}.html\".format(num_pages - 1)\n else:\n context[\"nextlink\"] = None\n else: # index-x.html pages\n if i > 1:\n context[\"nextlink\"] = \"index-{0}.html\".format(i - 1)\n if i < num_pages - 1:\n context[\"prevlink\"] = \"index-{0}.html\".format(i + 1)\n elif i == num_pages - 1:\n context[\"prevlink\"] = \"index.html\"\n context[\"permalink\"] = self.site.link(\"index\", i, lang)\n output_name = os.path.join(\n kw['output_folder'], self.site.path(\"index\", i,\n lang))\n task = self.site.generic_post_list_renderer(\n lang,\n post_list,\n output_name,\n template_name,\n kw['filters'],\n context,\n )\n task_cfg = {1: task['uptodate'][0].config, 2: kw}\n task['uptodate'] = [config_changed(task_cfg)]\n task['basename'] = 
'render_indexes'\n yield task\n\n if not self.site.config[\"STORY_INDEX\"]:\n return\n kw = {\n \"translations\": self.site.config['TRANSLATIONS'],\n \"post_pages\": self.site.config[\"post_pages\"],\n \"output_folder\": self.site.config['OUTPUT_FOLDER'],\n \"filters\": self.site.config['FILTERS'],\n }\n template_name = \"list.tmpl\"\n for lang in kw[\"translations\"]:\n # Need to group by folder to avoid duplicated tasks (Issue #758)\n for dirname, wildcards in itertools.groupby((w for w, d, x, i in kw[\"post_pages\"] if not i), os.path.dirname):\n context = {}\n # vim/pyflakes thinks it's unused\n # src_dir = os.path.dirname(wildcard)\n files = []\n for wildcard in wildcards:\n files += glob.glob(wildcard)\n post_list = [self.site.global_data[p] for p in files]\n output_name = os.path.join(kw[\"output_folder\"],\n self.site.path(\"post_path\",\n wildcard,\n lang)).encode('utf8')\n context[\"items\"] = [(post.title(lang), post.permalink(lang))\n for post in post_list]\n task = self.site.generic_post_list_renderer(lang, post_list,\n output_name,\n template_name,\n kw['filters'],\n context)\n task_cfg = {1: task['uptodate'][0].config, 2: kw}\n task['uptodate'] = [config_changed(task_cfg)]\n task['basename'] = self.name\n yield task\n\n def index_path(self, name, lang):\n if name not in [None, 0]:\n return [_f for _f in [self.site.config['TRANSLATIONS'][lang],\n self.site.config['INDEX_PATH'],\n 'index-{0}.html'.format(name)] if _f]\n else:\n return [_f for _f in [self.site.config['TRANSLATIONS'][lang],\n self.site.config['INDEX_PATH'],\n self.site.config['INDEX_FILE']]\n if _f]\n", "path": "nikola/plugins/task/indexes.py"}]} | 2,577 | 682 |
gh_patches_debug_1690 | rasdani/github-patches | git_diff | ansible__molecule-3446 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remnants of ansible-lint in molecule docs
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Do not report bugs before reproducing them with the code of the main branch! -->
<!--- Please also check https://molecule.readthedocs.io/en/latest/faq.html --->
<!--- Please use https://groups.google.com/forum/#!forum/molecule-users for usage questions -->
# Issue Type
- Bug report
# Molecule and Ansible details
Note: the python was installed via conda, but the python packages are installed using pip, not conda.
```
ansible [core 2.12.2]
python version = 3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:25:34) [Clang 11.1.0 ]
jinja version = 3.0.3
libyaml = True
ansible python module location = /opt/homebrew/Caskroom/miniconda/base/envs/tmptest/lib/python3.9/site-packages/ansible
molecule 3.6.1 using python 3.9
ansible:2.12.2
delegated:3.6.1 from molecule
```
Molecule installation method (one of):
- pip
Ansible installation method (one of):
- pip
Detail any linters or test runners used:
ansible-lint
# Desired Behavior
Assuming ansible-lint was removed for good reason and shouldn't be added back into the `molecule[lint]` extra:
I think this would make clearer ansible-lint is never installed by molecule:
* add note here that ansible-lint is not installed by molecule, even if you installed the `[lint]` extra
https://molecule.readthedocs.io/en/latest/configuration.html?highlight=ansible-lint#lint
* remove misleading comment in setup.cfg https://github.com/ansible-community/molecule/blob/c33c205b570cd95d599d16afa8772fabba51dd40/setup.cfg#L112
* change to actual default (yamllint) https://github.com/ansible-community/molecule/blob/c7ae6a27bed9ba6423d6dfe11d8e0d5c54da094f/src/molecule/command/init/scenario.py#L165
# Actual Behaviour
I can't say for sure (don't know project history well enough) but I *think* ansible-lint was once available through the `molecule[lint]` extra, but was subsequently promoted to a regular dependency, and eventually removed from molecule deps altogether. It appears there were a few spots in docs that got missed and may mislead a new user (see: me) into thinking ansible-lint will be installed with `pip install molecule[lint]` when in fact it is not.
</issue>
<code>
[start of src/molecule/command/init/scenario.py]
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20 """Base class used by init scenario command."""
21
22 import logging
23 import os
24 from typing import Dict
25
26 import click
27
28 from molecule import api, config, util
29 from molecule.command import base as command_base
30 from molecule.command.init import base
31 from molecule.config import DEFAULT_DRIVER
32
33 LOG = logging.getLogger(__name__)
34
35
36 class Scenario(base.Base):
37 """
38 Scenario Class.
39
40 .. program:: molecule init scenario bar --role-name foo
41
42 .. option:: molecule init scenario bar --role-name foo
43
44 Initialize a new scenario. In order to customise the role, please refer
45 to the `init role` command.
46
47 .. program:: cd foo; molecule init scenario bar --role-name foo
48
49 .. option:: cd foo; molecule init scenario bar --role-name foo
50
51 Initialize an existing role with Molecule:
52
53 .. program:: cd foo; molecule init scenario bar --role-name foo
54
55 .. option:: cd foo; molecule init scenario bar --role-name foo
56
57 Initialize a new scenario using a local *cookiecutter* template for the
58 driver configuration.
59 """ # noqa
60
61 def __init__(self, command_args: Dict[str, str]):
62 """Construct Scenario."""
63 self._command_args = command_args
64
65 def execute(self):
66 """
67 Execute the actions necessary to perform a `molecule init scenario` and \
68 returns None.
69
70 :return: None
71 """
72 scenario_name = self._command_args["scenario_name"]
73 role_name = os.getcwd().split(os.sep)[-1]
74 role_directory = util.abs_path(os.path.join(os.getcwd(), os.pardir))
75
76 msg = f"Initializing new scenario {scenario_name}..."
77 LOG.info(msg)
78 molecule_directory = config.molecule_directory(
79 os.path.join(role_directory, role_name)
80 )
81 scenario_directory = os.path.join(molecule_directory, scenario_name)
82
83 if os.path.isdir(scenario_directory):
84 msg = (
85 f"The directory molecule/{scenario_name} exists. "
86 "Cannot create new scenario."
87 )
88 util.sysexit_with_message(msg)
89
90 driver_template = api.drivers()[
91 self._command_args["driver_name"]
92 ].template_dir()
93 if "driver_template" in self._command_args:
94 self._validate_template_dir(self._command_args["driver_template"])
95 cli_driver_template = f"{self._command_args['driver_template']}/{self._command_args['driver_name']}"
96 if os.path.isdir(cli_driver_template):
97 driver_template = cli_driver_template
98 else:
99 LOG.warning(
100 "Driver not found in custom template directory(%s), "
101 "using the default template instead",
102 cli_driver_template,
103 )
104 scenario_base_directory = os.path.join(role_directory, role_name)
105 templates = [
106 driver_template,
107 api.verifiers()[self._command_args["verifier_name"]].template_dir(),
108 ]
109 self._process_templates("molecule", self._command_args, role_directory)
110 for template in templates:
111 self._process_templates(
112 template, self._command_args, scenario_base_directory
113 )
114
115 role_directory = os.path.join(role_directory, role_name)
116 msg = f"Initialized scenario in {scenario_directory} successfully."
117 LOG.info(msg)
118
119
120 def _role_exists(ctx, param, value: str): # pragma: no cover
121 # if role name was not mentioned we assume that current directory is the
122 # one hosting the role and determining the role name.
123 if not value:
124 value = os.path.basename(os.getcwd())
125
126 role_directory = os.path.join(os.pardir, value)
127 if not os.path.exists(role_directory):
128 msg = f"The role '{value}' not found. " "Please choose the proper role name."
129 util.sysexit_with_message(msg)
130 return value
131
132
133 def _default_scenario_exists(ctx, param, value: str): # pragma: no cover
134 if value == command_base.MOLECULE_DEFAULT_SCENARIO_NAME:
135 return value
136
137 default_scenario_directory = os.path.join(
138 "molecule", command_base.MOLECULE_DEFAULT_SCENARIO_NAME
139 )
140 if not os.path.exists(default_scenario_directory):
141 msg = f"The default scenario not found. Please create a scenario named '{command_base.MOLECULE_DEFAULT_SCENARIO_NAME}' first."
142 util.sysexit_with_message(msg)
143 return value
144
145
146 @command_base.click_command_ex()
147 @click.pass_context
148 @click.option(
149 "--dependency-name",
150 type=click.Choice(["galaxy"]),
151 default="galaxy",
152 help="Name of dependency to initialize. (galaxy)",
153 )
154 @click.option(
155 "--driver-name",
156 "-d",
157 type=click.Choice([str(s) for s in api.drivers()]),
158 default=DEFAULT_DRIVER,
159 help=f"Name of driver to initialize. ({DEFAULT_DRIVER})",
160 )
161 @click.option(
162 "--lint-name",
163 type=click.Choice(["yamllint"]),
164 default="yamllint",
165 help="Name of lint to initialize. (ansible-lint)",
166 )
167 @click.option(
168 "--provisioner-name",
169 type=click.Choice(["ansible"]),
170 default="ansible",
171 help="Name of provisioner to initialize. (ansible)",
172 )
173 @click.option(
174 "--role-name",
175 "-r",
176 required=False,
177 callback=_role_exists,
178 help="Name of the role to create.",
179 )
180 @click.argument(
181 "scenario-name",
182 default=command_base.MOLECULE_DEFAULT_SCENARIO_NAME,
183 required=False,
184 callback=_default_scenario_exists,
185 )
186 @click.option(
187 "--verifier-name",
188 type=click.Choice([str(s) for s in api.verifiers()]),
189 default="ansible",
190 help="Name of verifier to initialize. (ansible)",
191 )
192 def scenario(
193 ctx,
194 dependency_name,
195 driver_name,
196 lint_name,
197 provisioner_name,
198 role_name,
199 scenario_name,
200 verifier_name,
201 ): # pragma: no cover
202 """Initialize a new scenario for use with Molecule.
203
204 If name is not specified the 'default' value will be used.
205 """
206 command_args = {
207 "dependency_name": dependency_name,
208 "driver_name": driver_name,
209 "lint_name": lint_name,
210 "provisioner_name": provisioner_name,
211 "role_name": role_name,
212 "scenario_name": scenario_name,
213 "subcommand": __name__,
214 "verifier_name": verifier_name,
215 }
216
217 s = Scenario(command_args)
218 s.execute()
219
[end of src/molecule/command/init/scenario.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/molecule/command/init/scenario.py b/src/molecule/command/init/scenario.py
--- a/src/molecule/command/init/scenario.py
+++ b/src/molecule/command/init/scenario.py
@@ -162,7 +162,7 @@
"--lint-name",
type=click.Choice(["yamllint"]),
default="yamllint",
- help="Name of lint to initialize. (ansible-lint)",
+ help="Name of lint to initialize. (yamllint)",
)
@click.option(
"--provisioner-name",
| {"golden_diff": "diff --git a/src/molecule/command/init/scenario.py b/src/molecule/command/init/scenario.py\n--- a/src/molecule/command/init/scenario.py\n+++ b/src/molecule/command/init/scenario.py\n@@ -162,7 +162,7 @@\n \"--lint-name\",\n type=click.Choice([\"yamllint\"]),\n default=\"yamllint\",\n- help=\"Name of lint to initialize. (ansible-lint)\",\n+ help=\"Name of lint to initialize. (yamllint)\",\n )\n @click.option(\n \"--provisioner-name\",\n", "issue": "remnants of ansible-lint in molecule docs\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Do not report bugs before reproducing them with the code of the main branch! -->\r\n<!--- Please also check https://molecule.readthedocs.io/en/latest/faq.html --->\r\n<!--- Please use https://groups.google.com/forum/#!forum/molecule-users for usage questions -->\r\n\r\n# Issue Type\r\n\r\n- Bug report\r\n\r\n# Molecule and Ansible details\r\n\r\nNote: the python was installed via conda, but the python packages are installed using pip, not conda.\r\n```\r\nansible [core 2.12.2]\r\n python version = 3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:25:34) [Clang 11.1.0 ]\r\n jinja version = 3.0.3\r\n libyaml = True\r\n ansible python module location = /opt/homebrew/Caskroom/miniconda/base/envs/tmptest/lib/python3.9/site-packages/ansible\r\nmolecule 3.6.1 using python 3.9\r\n ansible:2.12.2\r\n delegated:3.6.1 from molecule\r\n```\r\n\r\nMolecule installation method (one of):\r\n\r\n- pip\r\n\r\nAnsible installation method (one of):\r\n\r\n- pip\r\n\r\nDetail any linters or test runners used:\r\n\r\nansible-lint\r\n\r\n# Desired Behavior\r\n\r\nAssuming ansible-lint was removed for good reason and shouldn't be added back into the `molecule[lint]` extra:\r\n\r\nI think this would make clearer ansible-lint is never installed by molecule:\r\n* add note here that ansible-lint is not installed by molecule, even if you installed the `[lint]` extra\r\n https://molecule.readthedocs.io/en/latest/configuration.html?highlight=ansible-lint#lint\r\n* remove misleading comment in setup.cfg https://github.com/ansible-community/molecule/blob/c33c205b570cd95d599d16afa8772fabba51dd40/setup.cfg#L112\r\n* change to actual default (yamllint) https://github.com/ansible-community/molecule/blob/c7ae6a27bed9ba6423d6dfe11d8e0d5c54da094f/src/molecule/command/init/scenario.py#L165\r\n\r\n\r\n\r\n\r\n# Actual Behaviour\r\n\r\nI can't say for sure (don't know project history well enough) but I *think* ansible-lint was once available through the `molecule[lint]` extra, but was subsequently promoted to a regular dependency, and eventually removed from molecule deps altogether. 
It appears there were a few spots in docs that got missed and may mislead a new user (see: me) into thinking ansible-lint will be installed with `pip install molecule[lint]` when in fact it is not.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Base class used by init scenario command.\"\"\"\n\nimport logging\nimport os\nfrom typing import Dict\n\nimport click\n\nfrom molecule import api, config, util\nfrom molecule.command import base as command_base\nfrom molecule.command.init import base\nfrom molecule.config import DEFAULT_DRIVER\n\nLOG = logging.getLogger(__name__)\n\n\nclass Scenario(base.Base):\n \"\"\"\n Scenario Class.\n\n .. program:: molecule init scenario bar --role-name foo\n\n .. option:: molecule init scenario bar --role-name foo\n\n Initialize a new scenario. In order to customise the role, please refer\n to the `init role` command.\n\n .. program:: cd foo; molecule init scenario bar --role-name foo\n\n .. option:: cd foo; molecule init scenario bar --role-name foo\n\n Initialize an existing role with Molecule:\n\n .. program:: cd foo; molecule init scenario bar --role-name foo\n\n .. option:: cd foo; molecule init scenario bar --role-name foo\n\n Initialize a new scenario using a local *cookiecutter* template for the\n driver configuration.\n \"\"\" # noqa\n\n def __init__(self, command_args: Dict[str, str]):\n \"\"\"Construct Scenario.\"\"\"\n self._command_args = command_args\n\n def execute(self):\n \"\"\"\n Execute the actions necessary to perform a `molecule init scenario` and \\\n returns None.\n\n :return: None\n \"\"\"\n scenario_name = self._command_args[\"scenario_name\"]\n role_name = os.getcwd().split(os.sep)[-1]\n role_directory = util.abs_path(os.path.join(os.getcwd(), os.pardir))\n\n msg = f\"Initializing new scenario {scenario_name}...\"\n LOG.info(msg)\n molecule_directory = config.molecule_directory(\n os.path.join(role_directory, role_name)\n )\n scenario_directory = os.path.join(molecule_directory, scenario_name)\n\n if os.path.isdir(scenario_directory):\n msg = (\n f\"The directory molecule/{scenario_name} exists. 
\"\n \"Cannot create new scenario.\"\n )\n util.sysexit_with_message(msg)\n\n driver_template = api.drivers()[\n self._command_args[\"driver_name\"]\n ].template_dir()\n if \"driver_template\" in self._command_args:\n self._validate_template_dir(self._command_args[\"driver_template\"])\n cli_driver_template = f\"{self._command_args['driver_template']}/{self._command_args['driver_name']}\"\n if os.path.isdir(cli_driver_template):\n driver_template = cli_driver_template\n else:\n LOG.warning(\n \"Driver not found in custom template directory(%s), \"\n \"using the default template instead\",\n cli_driver_template,\n )\n scenario_base_directory = os.path.join(role_directory, role_name)\n templates = [\n driver_template,\n api.verifiers()[self._command_args[\"verifier_name\"]].template_dir(),\n ]\n self._process_templates(\"molecule\", self._command_args, role_directory)\n for template in templates:\n self._process_templates(\n template, self._command_args, scenario_base_directory\n )\n\n role_directory = os.path.join(role_directory, role_name)\n msg = f\"Initialized scenario in {scenario_directory} successfully.\"\n LOG.info(msg)\n\n\ndef _role_exists(ctx, param, value: str): # pragma: no cover\n # if role name was not mentioned we assume that current directory is the\n # one hosting the role and determining the role name.\n if not value:\n value = os.path.basename(os.getcwd())\n\n role_directory = os.path.join(os.pardir, value)\n if not os.path.exists(role_directory):\n msg = f\"The role '{value}' not found. \" \"Please choose the proper role name.\"\n util.sysexit_with_message(msg)\n return value\n\n\ndef _default_scenario_exists(ctx, param, value: str): # pragma: no cover\n if value == command_base.MOLECULE_DEFAULT_SCENARIO_NAME:\n return value\n\n default_scenario_directory = os.path.join(\n \"molecule\", command_base.MOLECULE_DEFAULT_SCENARIO_NAME\n )\n if not os.path.exists(default_scenario_directory):\n msg = f\"The default scenario not found. Please create a scenario named '{command_base.MOLECULE_DEFAULT_SCENARIO_NAME}' first.\"\n util.sysexit_with_message(msg)\n return value\n\n\n@command_base.click_command_ex()\[email protected]_context\[email protected](\n \"--dependency-name\",\n type=click.Choice([\"galaxy\"]),\n default=\"galaxy\",\n help=\"Name of dependency to initialize. (galaxy)\",\n)\[email protected](\n \"--driver-name\",\n \"-d\",\n type=click.Choice([str(s) for s in api.drivers()]),\n default=DEFAULT_DRIVER,\n help=f\"Name of driver to initialize. ({DEFAULT_DRIVER})\",\n)\[email protected](\n \"--lint-name\",\n type=click.Choice([\"yamllint\"]),\n default=\"yamllint\",\n help=\"Name of lint to initialize. (ansible-lint)\",\n)\[email protected](\n \"--provisioner-name\",\n type=click.Choice([\"ansible\"]),\n default=\"ansible\",\n help=\"Name of provisioner to initialize. (ansible)\",\n)\[email protected](\n \"--role-name\",\n \"-r\",\n required=False,\n callback=_role_exists,\n help=\"Name of the role to create.\",\n)\[email protected](\n \"scenario-name\",\n default=command_base.MOLECULE_DEFAULT_SCENARIO_NAME,\n required=False,\n callback=_default_scenario_exists,\n)\[email protected](\n \"--verifier-name\",\n type=click.Choice([str(s) for s in api.verifiers()]),\n default=\"ansible\",\n help=\"Name of verifier to initialize. 
(ansible)\",\n)\ndef scenario(\n ctx,\n dependency_name,\n driver_name,\n lint_name,\n provisioner_name,\n role_name,\n scenario_name,\n verifier_name,\n): # pragma: no cover\n \"\"\"Initialize a new scenario for use with Molecule.\n\n If name is not specified the 'default' value will be used.\n \"\"\"\n command_args = {\n \"dependency_name\": dependency_name,\n \"driver_name\": driver_name,\n \"lint_name\": lint_name,\n \"provisioner_name\": provisioner_name,\n \"role_name\": role_name,\n \"scenario_name\": scenario_name,\n \"subcommand\": __name__,\n \"verifier_name\": verifier_name,\n }\n\n s = Scenario(command_args)\n s.execute()\n", "path": "src/molecule/command/init/scenario.py"}]} | 3,387 | 126 |
gh_patches_debug_21710 | rasdani/github-patches | git_diff | tensorflow__addons-2261 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'TrackableWeightHandler' object has no attribute 'trainable'
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04
- TensorFlow version and how it was installed (source or binary): 2.3.0
- TensorFlow-Addons version and how it was installed (source or binary): 0.11.2
- Python version: 3.6.8
- Is GPU used? (yes/no): yes
**Describe the bug**
```TrackableWeightHandler``` doesn't have `trainable` and `assign` attributes, yet it still ends up appended to `weights` and `trainable_weights`. When I use ```AverageModelCheckpoint```, I hit this bug. I'm not sure whether it's a bug in Addons or in TensorFlow.
A ```TrackableWeightHandler``` is added by ```TextVectorization```.
A clear and concise description of what the bug is.
```
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.vscode-server/extensions/ms-python.python-2020.11.371526539/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
cli.main()
File "/root/.vscode-server/extensions/ms-python.python-2020.11.371526539/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/root/.vscode-server/extensions/ms-python.python-2020.11.371526539/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 267, in run_file
runpy.run_path(options.target, run_name=compat.force_str("__main__"))
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/srv/automl/efficientdet/nlp.py", line 65, in <module>
tfa.callbacks.AverageModelCheckpoint(update_weights=True, filepath=os.path.join(MODEL_DIR, 'ckpt_moving_average'))])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1137, in fit
callbacks.on_epoch_end(epoch, epoch_logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 412, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 1249, in on_epoch_end
self._save_model(epoch=epoch, logs=logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_addons/callbacks/average_model_checkpoint.py", line 76, in _save_model
self.model.optimizer.assign_average_vars(self.model.trainable_weights)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/average_wrapper.py", line 125, in assign_average_vars
for var in var_list
File "/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/average_wrapper.py", line 126, in <listcomp>
if var.trainable
AttributeError: 'TrackableWeightHandler' object has no attribute 'trainable'
```
**Code to reproduce the issue**
https://colab.research.google.com/drive/1OM7kAA6aOcNSTTXpd74kUnw-EeeM2uH5?usp=sharing
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
**Other info / logs**
Changing ```model.variables``` to ```model.trainable_weights``` could fix it temporarily.
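
A possible caller-side guard, sketched here purely as an assumption about a workaround (it is not the patch adopted further below): keep only genuine `tf.Variable` objects before handing the list to `assign_average_vars`, so entries like `TrackableWeightHandler` never reach the `.trainable` access.

```python
# Sketch of a workaround; the helper name is made up for illustration.
import tensorflow as tf

def real_variables(weight_list):
    # Drop TrackableWeightHandler-style entries that lack trainable/assign.
    return [w for w in weight_list if isinstance(w, tf.Variable)]

# e.g. opt.assign_average_vars(real_variables(model.trainable_weights))
```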
</issue>
<code>
[start of tensorflow_addons/optimizers/average_wrapper.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16 import abc
17
18 import tensorflow as tf
19 from tensorflow_addons.utils import types
20
21 from typeguard import typechecked
22
23
24 class AveragedOptimizerWrapper(tf.keras.optimizers.Optimizer, metaclass=abc.ABCMeta):
25 @typechecked
26 def __init__(
27 self, optimizer: types.Optimizer, name: str = "AverageOptimizer", **kwargs
28 ):
29 super().__init__(name, **kwargs)
30
31 if isinstance(optimizer, str):
32 optimizer = tf.keras.optimizers.get(optimizer)
33
34 if not isinstance(optimizer, tf.keras.optimizers.Optimizer):
35 raise TypeError(
36 "optimizer is not an object of tf.keras.optimizers.Optimizer"
37 )
38
39 self._optimizer = optimizer
40 self._track_trackable(self._optimizer, "awg_optimizer")
41
42 def _create_slots(self, var_list):
43 self._optimizer._create_slots(var_list=var_list)
44 for var in var_list:
45 self.add_slot(var, "average")
46
47 def _create_hypers(self):
48 self._optimizer._create_hypers()
49
50 def _prepare(self, var_list):
51 return self._optimizer._prepare(var_list=var_list)
52
53 def apply_gradients(self, grads_and_vars, name=None, **kwargs):
54 self._optimizer._iterations = self.iterations
55 return super().apply_gradients(grads_and_vars, name, **kwargs)
56
57 @abc.abstractmethod
58 def average_op(self, var, average_var):
59 raise NotImplementedError
60
61 def _apply_average_op(self, train_op, var):
62 average_var = self.get_slot(var, "average")
63 return self.average_op(var, average_var)
64
65 def _resource_apply_dense(self, grad, var):
66 train_op = self._optimizer._resource_apply_dense(grad, var)
67 average_op = self._apply_average_op(train_op, var)
68 return tf.group(train_op, average_op)
69
70 def _resource_apply_sparse(self, grad, var, indices):
71 train_op = self._optimizer._resource_apply_sparse(grad, var, indices)
72 average_op = self._apply_average_op(train_op, var)
73 return tf.group(train_op, average_op)
74
75 def _resource_apply_sparse_duplicate_indices(self, grad, var, indices):
76 train_op = self._optimizer._resource_apply_sparse_duplicate_indices(
77 grad, var, indices
78 )
79 average_op = self._apply_average_op(train_op, var)
80 return tf.group(train_op, average_op)
81
82 def assign_average_vars(self, var_list):
83 """Assign variables in var_list with their respective averages.
84
85 Args:
86 var_list: List of model variables to be assigned to their average.
87
88 Returns:
89 assign_op: The op corresponding to the assignment operation of
90 variables to their average.
91
92 Example:
93 ```python
94 model = tf.Sequential([...])
95 opt = tfa.optimizers.SWA(
96 tf.keras.optimizers.SGD(lr=2.0), 100, 10)
97 model.compile(opt, ...)
98 model.fit(x, y, ...)
99
100 # Update the weights to their mean before saving
101 opt.assign_average_vars(model.variables)
102
103 model.save('model.h5')
104 ```
105 """
106 assign_op = tf.group(
107 [
108 var.assign(self.get_slot(var, "average"), use_locking=self._use_locking)
109 for var in var_list
110 if var.trainable
111 ]
112 )
113 return assign_op
114
115 def get_config(self):
116 config = {
117 "optimizer": tf.keras.optimizers.serialize(self._optimizer),
118 }
119 base_config = super().get_config()
120 return {**base_config, **config}
121
122 @classmethod
123 def from_config(cls, config, custom_objects=None):
124 optimizer = tf.keras.optimizers.deserialize(
125 config.pop("optimizer"), custom_objects=custom_objects
126 )
127 return cls(optimizer, **config)
128
129 @property
130 def weights(self):
131 return self._weights + self._optimizer.weights
132
133 @property
134 def lr(self):
135 return self._optimizer._get_hyper("learning_rate")
136
137 @lr.setter
138 def lr(self, lr):
139 self._optimizer._set_hyper("learning_rate", lr) #
140
141 @property
142 def learning_rate(self):
143 return self._optimizer._get_hyper("learning_rate")
144
145 @learning_rate.setter
146 def learning_rate(self, learning_rate):
147 self._optimizer._set_hyper("learning_rate", learning_rate)
148
[end of tensorflow_addons/optimizers/average_wrapper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensorflow_addons/optimizers/average_wrapper.py b/tensorflow_addons/optimizers/average_wrapper.py
--- a/tensorflow_addons/optimizers/average_wrapper.py
+++ b/tensorflow_addons/optimizers/average_wrapper.py
@@ -14,6 +14,7 @@
# ==============================================================================
import abc
+import warnings
import tensorflow as tf
from tensorflow_addons.utils import types
@@ -103,14 +104,17 @@
model.save('model.h5')
```
"""
- assign_op = tf.group(
- [
- var.assign(self.get_slot(var, "average"), use_locking=self._use_locking)
- for var in var_list
- if var.trainable
- ]
- )
- return assign_op
+ assign_ops = []
+ for var in var_list:
+ try:
+ assign_ops.append(
+ var.assign(
+ self.get_slot(var, "average"), use_locking=self._use_locking
+ )
+ )
+ except Exception as e:
+ warnings.warn("Unable to assign average slot to {} : {}".format(var, e))
+ return tf.group(assign_ops)
def get_config(self):
config = {
| {"golden_diff": "diff --git a/tensorflow_addons/optimizers/average_wrapper.py b/tensorflow_addons/optimizers/average_wrapper.py\n--- a/tensorflow_addons/optimizers/average_wrapper.py\n+++ b/tensorflow_addons/optimizers/average_wrapper.py\n@@ -14,6 +14,7 @@\n # ==============================================================================\n \n import abc\n+import warnings\n \n import tensorflow as tf\n from tensorflow_addons.utils import types\n@@ -103,14 +104,17 @@\n model.save('model.h5')\n ```\n \"\"\"\n- assign_op = tf.group(\n- [\n- var.assign(self.get_slot(var, \"average\"), use_locking=self._use_locking)\n- for var in var_list\n- if var.trainable\n- ]\n- )\n- return assign_op\n+ assign_ops = []\n+ for var in var_list:\n+ try:\n+ assign_ops.append(\n+ var.assign(\n+ self.get_slot(var, \"average\"), use_locking=self._use_locking\n+ )\n+ )\n+ except Exception as e:\n+ warnings.warn(\"Unable to assign average slot to {} : {}\".format(var, e))\n+ return tf.group(assign_ops)\n \n def get_config(self):\n config = {\n", "issue": "AttributeError: 'TrackableWeightHandler' object has no attribute 'trainable'\n**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04\r\n- TensorFlow version and how it was installed (source or binary): 2.3.0\r\n- TensorFlow-Addons version and how it was installed (source or binary): 0.11.2\r\n- Python version: 3.6.8\r\n- Is GPU used? (yes/no): yes\r\n\r\n**Describe the bug**\r\n\r\n```TrackableWeightHandler``` doesn't have trainable and assign attributes but also to be appended to weights and trainable_weights. When I use ```AverageModelCheckpoint```, I meet the bug. I'm not sure it's a bug of addons or tensorflow.\r\n```TrackableWeightHandler``` is added in ```TextVectorization```\r\n\r\nA clear and concise description of what the bug is.\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/root/.vscode-server/extensions/ms-python.python-2020.11.371526539/pythonFiles/lib/python/debugpy/__main__.py\", line 45, in <module>\r\n cli.main()\r\n File \"/root/.vscode-server/extensions/ms-python.python-2020.11.371526539/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/root/.vscode-server/extensions/ms-python.python-2020.11.371526539/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py\", line 267, in run_file\r\n runpy.run_path(options.target, run_name=compat.force_str(\"__main__\"))\r\n File \"/usr/lib/python3.6/runpy.py\", line 263, in run_path\r\n pkg_name=pkg_name, script_name=fname)\r\n File \"/usr/lib/python3.6/runpy.py\", line 96, in _run_module_code\r\n mod_name, mod_spec, pkg_name, script_name)\r\n File \"/usr/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/srv/automl/efficientdet/nlp.py\", line 65, in <module>\r\n tfa.callbacks.AverageModelCheckpoint(update_weights=True, filepath=os.path.join(MODEL_DIR, 'ckpt_moving_average'))])\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py\", line 108, in _method_wrapper\r\n return method(self, *args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py\", line 1137, in fit\r\n callbacks.on_epoch_end(epoch, epoch_logs)\r\n File 
\"/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py\", line 412, in on_epoch_end\r\n callback.on_epoch_end(epoch, logs)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py\", line 1249, in on_epoch_end\r\n self._save_model(epoch=epoch, logs=logs)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_addons/callbacks/average_model_checkpoint.py\", line 76, in _save_model\r\n self.model.optimizer.assign_average_vars(self.model.trainable_weights)\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/average_wrapper.py\", line 125, in assign_average_vars\r\n for var in var_list\r\n File \"/usr/local/lib/python3.6/dist-packages/tensorflow_addons/optimizers/average_wrapper.py\", line 126, in <listcomp>\r\n if var.trainable\r\nAttributeError: 'TrackableWeightHandler' object has no attribute 'trainable'\r\n```\r\n\r\n**Code to reproduce the issue**\r\nhttps://colab.research.google.com/drive/1OM7kAA6aOcNSTTXpd74kUnw-EeeM2uH5?usp=sharing\r\n\r\nProvide a reproducible test case that is the bare minimum necessary to generate the problem.\r\n\r\n**Other info / logs**\r\n\r\nChange ```model.variables``` to ```model.trainable_weights``` could fix it temporarily.\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nimport abc\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils import types\n\nfrom typeguard import typechecked\n\n\nclass AveragedOptimizerWrapper(tf.keras.optimizers.Optimizer, metaclass=abc.ABCMeta):\n @typechecked\n def __init__(\n self, optimizer: types.Optimizer, name: str = \"AverageOptimizer\", **kwargs\n ):\n super().__init__(name, **kwargs)\n\n if isinstance(optimizer, str):\n optimizer = tf.keras.optimizers.get(optimizer)\n\n if not isinstance(optimizer, tf.keras.optimizers.Optimizer):\n raise TypeError(\n \"optimizer is not an object of tf.keras.optimizers.Optimizer\"\n )\n\n self._optimizer = optimizer\n self._track_trackable(self._optimizer, \"awg_optimizer\")\n\n def _create_slots(self, var_list):\n self._optimizer._create_slots(var_list=var_list)\n for var in var_list:\n self.add_slot(var, \"average\")\n\n def _create_hypers(self):\n self._optimizer._create_hypers()\n\n def _prepare(self, var_list):\n return self._optimizer._prepare(var_list=var_list)\n\n def apply_gradients(self, grads_and_vars, name=None, **kwargs):\n self._optimizer._iterations = self.iterations\n return super().apply_gradients(grads_and_vars, name, **kwargs)\n\n @abc.abstractmethod\n def average_op(self, var, average_var):\n raise NotImplementedError\n\n def _apply_average_op(self, train_op, var):\n average_var = self.get_slot(var, \"average\")\n return self.average_op(var, average_var)\n\n def _resource_apply_dense(self, grad, var):\n train_op = self._optimizer._resource_apply_dense(grad, var)\n average_op = 
self._apply_average_op(train_op, var)\n return tf.group(train_op, average_op)\n\n def _resource_apply_sparse(self, grad, var, indices):\n train_op = self._optimizer._resource_apply_sparse(grad, var, indices)\n average_op = self._apply_average_op(train_op, var)\n return tf.group(train_op, average_op)\n\n def _resource_apply_sparse_duplicate_indices(self, grad, var, indices):\n train_op = self._optimizer._resource_apply_sparse_duplicate_indices(\n grad, var, indices\n )\n average_op = self._apply_average_op(train_op, var)\n return tf.group(train_op, average_op)\n\n def assign_average_vars(self, var_list):\n \"\"\"Assign variables in var_list with their respective averages.\n\n Args:\n var_list: List of model variables to be assigned to their average.\n\n Returns:\n assign_op: The op corresponding to the assignment operation of\n variables to their average.\n\n Example:\n ```python\n model = tf.Sequential([...])\n opt = tfa.optimizers.SWA(\n tf.keras.optimizers.SGD(lr=2.0), 100, 10)\n model.compile(opt, ...)\n model.fit(x, y, ...)\n\n # Update the weights to their mean before saving\n opt.assign_average_vars(model.variables)\n\n model.save('model.h5')\n ```\n \"\"\"\n assign_op = tf.group(\n [\n var.assign(self.get_slot(var, \"average\"), use_locking=self._use_locking)\n for var in var_list\n if var.trainable\n ]\n )\n return assign_op\n\n def get_config(self):\n config = {\n \"optimizer\": tf.keras.optimizers.serialize(self._optimizer),\n }\n base_config = super().get_config()\n return {**base_config, **config}\n\n @classmethod\n def from_config(cls, config, custom_objects=None):\n optimizer = tf.keras.optimizers.deserialize(\n config.pop(\"optimizer\"), custom_objects=custom_objects\n )\n return cls(optimizer, **config)\n\n @property\n def weights(self):\n return self._weights + self._optimizer.weights\n\n @property\n def lr(self):\n return self._optimizer._get_hyper(\"learning_rate\")\n\n @lr.setter\n def lr(self, lr):\n self._optimizer._set_hyper(\"learning_rate\", lr) #\n\n @property\n def learning_rate(self):\n return self._optimizer._get_hyper(\"learning_rate\")\n\n @learning_rate.setter\n def learning_rate(self, learning_rate):\n self._optimizer._set_hyper(\"learning_rate\", learning_rate)\n", "path": "tensorflow_addons/optimizers/average_wrapper.py"}]} | 3,002 | 282 |
gh_patches_debug_37418 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-2202 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Passing non-bytes file input leads to error
https://t.me/pythontelegrambotgroup/396541
TL;DR:
`send_document(open('text_file', 'rb'))` works, but `send_document(open('text_file', 'r'))` raises an error.
This is because we try to guess whether the file is an image using `imghdr.what(None, stream)` in `InputFile.is_image`, which only works if `stream` is a bytes stream.
If I comment the `is_image` out, the file is sent without issue, so I guess we should just check if the input is bytes before calling `is_image`
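
A minimal sketch of the difference (the file name and the exact exception are assumptions, not taken from a traceback): with bytes input `imghdr` just returns `None` for non-images, while a `str` payload fails inside its header checks before the mimetype fallback can run.

```python
# Illustration only; "text_file" is an assumed local plain-text file.
import imghdr

with open('text_file', 'rb') as f:
    print(imghdr.what(None, f.read()))   # None for non-image bytes, no exception

with open('text_file', 'r') as f:
    imghdr.what(None, f.read())          # str input: raises inside imghdr's bytes-prefix tests
```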
</issue>
<code>
[start of telegram/files/inputfile.py]
1 #!/usr/bin/env python
2 # pylint: disable=W0622,E0611
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2020
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20 """This module contains an object that represents a Telegram InputFile."""
21
22 import imghdr
23 import mimetypes
24 import os
25 from typing import IO, Optional, Tuple
26 from uuid import uuid4
27
28 from telegram import TelegramError
29
30 DEFAULT_MIME_TYPE = 'application/octet-stream'
31
32
33 class InputFile:
34 """This object represents a Telegram InputFile.
35
36 Attributes:
37 input_file_content (:obj:`bytes`): The binary content of the file to send.
38 filename (:obj:`str`): Optional. Filename for the file to be sent.
39 attach (:obj:`str`): Optional. Attach id for sending multiple files.
40
41 Args:
42 obj (:obj:`File handler`): An open file descriptor.
43 filename (:obj:`str`, optional): Filename for this InputFile.
44 attach (:obj:`bool`, optional): Whether this should be send as one file or is part of a
45 collection of files.
46
47 Raises:
48 TelegramError
49
50 """
51
52 def __init__(self, obj: IO, filename: str = None, attach: bool = None):
53 self.filename = None
54 self.input_file_content = obj.read()
55 self.attach = 'attached' + uuid4().hex if attach else None
56
57 if filename:
58 self.filename = filename
59 elif hasattr(obj, 'name') and not isinstance(obj.name, int):
60 self.filename = os.path.basename(obj.name)
61
62 try:
63 self.mimetype = self.is_image(self.input_file_content)
64 except TelegramError:
65 if self.filename:
66 self.mimetype = mimetypes.guess_type(self.filename)[0] or DEFAULT_MIME_TYPE
67 else:
68 self.mimetype = DEFAULT_MIME_TYPE
69 if not self.filename:
70 self.filename = self.mimetype.replace('/', '.')
71
72 @property
73 def field_tuple(self) -> Tuple[str, bytes, str]:
74 return self.filename, self.input_file_content, self.mimetype
75
76 @staticmethod
77 def is_image(stream: bytes) -> str:
78 """Check if the content file is an image by analyzing its headers.
79
80 Args:
81 stream (:obj:`bytes`): A byte stream representing the content of a file.
82
83 Returns:
84 :obj:`str`: The str mime-type of an image.
85
86 """
87 image = imghdr.what(None, stream)
88 if image:
89 return 'image/%s' % image
90
91 raise TelegramError('Could not parse file content')
92
93 @staticmethod
94 def is_file(obj: object) -> bool:
95 return hasattr(obj, 'read')
96
97 def to_dict(self) -> Optional[str]:
98 if self.attach:
99 return 'attach://' + self.attach
100 return None
101
[end of telegram/files/inputfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/files/inputfile.py b/telegram/files/inputfile.py
--- a/telegram/files/inputfile.py
+++ b/telegram/files/inputfile.py
@@ -20,14 +20,14 @@
"""This module contains an object that represents a Telegram InputFile."""
import imghdr
+import logging
import mimetypes
import os
from typing import IO, Optional, Tuple
from uuid import uuid4
-from telegram import TelegramError
-
DEFAULT_MIME_TYPE = 'application/octet-stream'
+logger = logging.getLogger(__name__)
class InputFile:
@@ -59,13 +59,14 @@
elif hasattr(obj, 'name') and not isinstance(obj.name, int):
self.filename = os.path.basename(obj.name)
- try:
- self.mimetype = self.is_image(self.input_file_content)
- except TelegramError:
- if self.filename:
- self.mimetype = mimetypes.guess_type(self.filename)[0] or DEFAULT_MIME_TYPE
- else:
- self.mimetype = DEFAULT_MIME_TYPE
+ image_mime_type = self.is_image(self.input_file_content)
+ if image_mime_type:
+ self.mimetype = image_mime_type
+ elif self.filename:
+ self.mimetype = mimetypes.guess_type(self.filename)[0] or DEFAULT_MIME_TYPE
+ else:
+ self.mimetype = DEFAULT_MIME_TYPE
+
if not self.filename:
self.filename = self.mimetype.replace('/', '.')
@@ -74,21 +75,27 @@
return self.filename, self.input_file_content, self.mimetype
@staticmethod
- def is_image(stream: bytes) -> str:
+ def is_image(stream: bytes) -> Optional[str]:
"""Check if the content file is an image by analyzing its headers.
Args:
stream (:obj:`bytes`): A byte stream representing the content of a file.
Returns:
- :obj:`str`: The str mime-type of an image.
+ :obj:`str` | :obj:`None`: The mime-type of an image, if the input is an image, or
+ :obj:`None` else.
"""
- image = imghdr.what(None, stream)
- if image:
- return 'image/%s' % image
-
- raise TelegramError('Could not parse file content')
+ try:
+ image = imghdr.what(None, stream)
+ if image:
+ return f'image/{image}'
+ return None
+ except Exception:
+ logger.debug(
+ "Could not parse file content. Assuming that file is not an image.", exc_info=True
+ )
+ return None
@staticmethod
def is_file(obj: object) -> bool:
| {"golden_diff": "diff --git a/telegram/files/inputfile.py b/telegram/files/inputfile.py\n--- a/telegram/files/inputfile.py\n+++ b/telegram/files/inputfile.py\n@@ -20,14 +20,14 @@\n \"\"\"This module contains an object that represents a Telegram InputFile.\"\"\"\n \n import imghdr\n+import logging\n import mimetypes\n import os\n from typing import IO, Optional, Tuple\n from uuid import uuid4\n \n-from telegram import TelegramError\n-\n DEFAULT_MIME_TYPE = 'application/octet-stream'\n+logger = logging.getLogger(__name__)\n \n \n class InputFile:\n@@ -59,13 +59,14 @@\n elif hasattr(obj, 'name') and not isinstance(obj.name, int):\n self.filename = os.path.basename(obj.name)\n \n- try:\n- self.mimetype = self.is_image(self.input_file_content)\n- except TelegramError:\n- if self.filename:\n- self.mimetype = mimetypes.guess_type(self.filename)[0] or DEFAULT_MIME_TYPE\n- else:\n- self.mimetype = DEFAULT_MIME_TYPE\n+ image_mime_type = self.is_image(self.input_file_content)\n+ if image_mime_type:\n+ self.mimetype = image_mime_type\n+ elif self.filename:\n+ self.mimetype = mimetypes.guess_type(self.filename)[0] or DEFAULT_MIME_TYPE\n+ else:\n+ self.mimetype = DEFAULT_MIME_TYPE\n+\n if not self.filename:\n self.filename = self.mimetype.replace('/', '.')\n \n@@ -74,21 +75,27 @@\n return self.filename, self.input_file_content, self.mimetype\n \n @staticmethod\n- def is_image(stream: bytes) -> str:\n+ def is_image(stream: bytes) -> Optional[str]:\n \"\"\"Check if the content file is an image by analyzing its headers.\n \n Args:\n stream (:obj:`bytes`): A byte stream representing the content of a file.\n \n Returns:\n- :obj:`str`: The str mime-type of an image.\n+ :obj:`str` | :obj:`None`: The mime-type of an image, if the input is an image, or\n+ :obj:`None` else.\n \n \"\"\"\n- image = imghdr.what(None, stream)\n- if image:\n- return 'image/%s' % image\n-\n- raise TelegramError('Could not parse file content')\n+ try:\n+ image = imghdr.what(None, stream)\n+ if image:\n+ return f'image/{image}'\n+ return None\n+ except Exception:\n+ logger.debug(\n+ \"Could not parse file content. Assuming that file is not an image.\", exc_info=True\n+ )\n+ return None\n \n @staticmethod\n def is_file(obj: object) -> bool:\n", "issue": "[BUG] Passing non-bytes file input leads to error\nhttps://t.me/pythontelegrambotgroup/396541\r\n\r\nTL;DR:\r\n\r\n`send_document(open('text_file', 'rb'))` works but `send_document(open('text_file', 'r'))` raises is error.\r\nThis is, because we try to guess if the file is an image using `imghdr.what(None, stream)` in `InputFile.is_image`, which only works if `stream` is a bytes stream.\r\nIf I comment the `is_image` out, the file is sent without issue, so I guess we should just check if the input is bytes before calling `is_image`\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=W0622,E0611\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2020\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram InputFile.\"\"\"\n\nimport imghdr\nimport mimetypes\nimport os\nfrom typing import IO, Optional, Tuple\nfrom uuid import uuid4\n\nfrom telegram import TelegramError\n\nDEFAULT_MIME_TYPE = 'application/octet-stream'\n\n\nclass InputFile:\n \"\"\"This object represents a Telegram InputFile.\n\n Attributes:\n input_file_content (:obj:`bytes`): The binary content of the file to send.\n filename (:obj:`str`): Optional. Filename for the file to be sent.\n attach (:obj:`str`): Optional. Attach id for sending multiple files.\n\n Args:\n obj (:obj:`File handler`): An open file descriptor.\n filename (:obj:`str`, optional): Filename for this InputFile.\n attach (:obj:`bool`, optional): Whether this should be send as one file or is part of a\n collection of files.\n\n Raises:\n TelegramError\n\n \"\"\"\n\n def __init__(self, obj: IO, filename: str = None, attach: bool = None):\n self.filename = None\n self.input_file_content = obj.read()\n self.attach = 'attached' + uuid4().hex if attach else None\n\n if filename:\n self.filename = filename\n elif hasattr(obj, 'name') and not isinstance(obj.name, int):\n self.filename = os.path.basename(obj.name)\n\n try:\n self.mimetype = self.is_image(self.input_file_content)\n except TelegramError:\n if self.filename:\n self.mimetype = mimetypes.guess_type(self.filename)[0] or DEFAULT_MIME_TYPE\n else:\n self.mimetype = DEFAULT_MIME_TYPE\n if not self.filename:\n self.filename = self.mimetype.replace('/', '.')\n\n @property\n def field_tuple(self) -> Tuple[str, bytes, str]:\n return self.filename, self.input_file_content, self.mimetype\n\n @staticmethod\n def is_image(stream: bytes) -> str:\n \"\"\"Check if the content file is an image by analyzing its headers.\n\n Args:\n stream (:obj:`bytes`): A byte stream representing the content of a file.\n\n Returns:\n :obj:`str`: The str mime-type of an image.\n\n \"\"\"\n image = imghdr.what(None, stream)\n if image:\n return 'image/%s' % image\n\n raise TelegramError('Could not parse file content')\n\n @staticmethod\n def is_file(obj: object) -> bool:\n return hasattr(obj, 'read')\n\n def to_dict(self) -> Optional[str]:\n if self.attach:\n return 'attach://' + self.attach\n return None\n", "path": "telegram/files/inputfile.py"}]} | 1,659 | 609 |
gh_patches_debug_1828 | rasdani/github-patches | git_diff | getmoto__moto-1286 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Resources mentioned in CloudFormation template are not getting created.
Hi,
I am creating a security group through a CloudFormation template and then trying to retrieve it through the boto client, but it says that the security group does not exist. If I create the security group through the command line, then I am able to fetch it.
It seems like the resources in the CloudFormation template do not get created when we deploy it.
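
A minimal reproduction sketch of what is described above; the template body, stack name, and decorator names are assumptions (decorators as in moto releases contemporary with this report), not taken from the report itself.

```python
# Illustrative repro under moto's mocks; names and template are made up for the sketch.
import json
import boto3
from moto import mock_cloudformation, mock_ec2

TEMPLATE = {
    "Resources": {
        "MySG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {"GroupDescription": "sg created via CloudFormation"},
        }
    }
}

@mock_cloudformation
@mock_ec2
def show_groups():
    cf = boto3.client("cloudformation", region_name="us-east-1")
    cf.create_stack(StackName="test-stack", TemplateBody=json.dumps(TEMPLATE))
    ec2 = boto3.client("ec2", region_name="us-east-1")
    # Per the report, the group defined in the stack is missing from this listing.
    return ec2.describe_security_groups()["SecurityGroups"]

print(show_groups())
```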
</issue>
<code>
[start of moto/route53/models.py]
1 from __future__ import unicode_literals
2
3 from collections import defaultdict
4
5 import string
6 import random
7 import uuid
8 from jinja2 import Template
9
10 from moto.core import BaseBackend, BaseModel
11
12
13 ROUTE53_ID_CHOICE = string.ascii_uppercase + string.digits
14
15
16 def create_route53_zone_id():
17 # New ID's look like this Z1RWWTK7Y8UDDQ
18 return ''.join([random.choice(ROUTE53_ID_CHOICE) for _ in range(0, 15)])
19
20
21 class HealthCheck(BaseModel):
22
23 def __init__(self, health_check_id, health_check_args):
24 self.id = health_check_id
25 self.ip_address = health_check_args.get("ip_address")
26 self.port = health_check_args.get("port", 80)
27 self._type = health_check_args.get("type")
28 self.resource_path = health_check_args.get("resource_path")
29 self.fqdn = health_check_args.get("fqdn")
30 self.search_string = health_check_args.get("search_string")
31 self.request_interval = health_check_args.get("request_interval", 30)
32 self.failure_threshold = health_check_args.get("failure_threshold", 3)
33
34 @property
35 def physical_resource_id(self):
36 return self.id
37
38 @classmethod
39 def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
40 properties = cloudformation_json['Properties']['HealthCheckConfig']
41 health_check_args = {
42 "ip_address": properties.get('IPAddress'),
43 "port": properties.get('Port'),
44 "type": properties['Type'],
45 "resource_path": properties.get('ResourcePath'),
46 "fqdn": properties.get('FullyQualifiedDomainName'),
47 "search_string": properties.get('SearchString'),
48 "request_interval": properties.get('RequestInterval'),
49 "failure_threshold": properties.get('FailureThreshold'),
50 }
51 health_check = route53_backend.create_health_check(health_check_args)
52 return health_check
53
54 def to_xml(self):
55 template = Template("""<HealthCheck>
56 <Id>{{ health_check.id }}</Id>
57 <CallerReference>example.com 192.0.2.17</CallerReference>
58 <HealthCheckConfig>
59 <IPAddress>{{ health_check.ip_address }}</IPAddress>
60 <Port>{{ health_check.port }}</Port>
61 <Type>{{ health_check._type }}</Type>
62 <ResourcePath>{{ health_check.resource_path }}</ResourcePath>
63 <FullyQualifiedDomainName>{{ health_check.fqdn }}</FullyQualifiedDomainName>
64 <RequestInterval>{{ health_check.request_interval }}</RequestInterval>
65 <FailureThreshold>{{ health_check.failure_threshold }}</FailureThreshold>
66 {% if health_check.search_string %}
67 <SearchString>{{ health_check.search_string }}</SearchString>
68 {% endif %}
69 </HealthCheckConfig>
70 <HealthCheckVersion>1</HealthCheckVersion>
71 </HealthCheck>""")
72 return template.render(health_check=self)
73
74
75 class RecordSet(BaseModel):
76
77 def __init__(self, kwargs):
78 self.name = kwargs.get('Name')
79 self._type = kwargs.get('Type')
80 self.ttl = kwargs.get('TTL')
81 self.records = kwargs.get('ResourceRecords', [])
82 self.set_identifier = kwargs.get('SetIdentifier')
83 self.weight = kwargs.get('Weight')
84 self.region = kwargs.get('Region')
85 self.health_check = kwargs.get('HealthCheckId')
86 self.hosted_zone_name = kwargs.get('HostedZoneName')
87 self.hosted_zone_id = kwargs.get('HostedZoneId')
88
89 @classmethod
90 def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
91 properties = cloudformation_json['Properties']
92
93 zone_name = properties.get("HostedZoneName")
94 if zone_name:
95 hosted_zone = route53_backend.get_hosted_zone_by_name(zone_name)
96 else:
97 hosted_zone = route53_backend.get_hosted_zone(
98 properties["HostedZoneId"])
99 record_set = hosted_zone.add_rrset(properties)
100 return record_set
101
102 @classmethod
103 def update_from_cloudformation_json(cls, original_resource, new_resource_name, cloudformation_json, region_name):
104 cls.delete_from_cloudformation_json(
105 original_resource.name, cloudformation_json, region_name)
106 return cls.create_from_cloudformation_json(new_resource_name, cloudformation_json, region_name)
107
108 @classmethod
109 def delete_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
110 # this will break if you changed the zone the record is in,
111 # unfortunately
112 properties = cloudformation_json['Properties']
113
114 zone_name = properties.get("HostedZoneName")
115 if zone_name:
116 hosted_zone = route53_backend.get_hosted_zone_by_name(zone_name)
117 else:
118 hosted_zone = route53_backend.get_hosted_zone(
119 properties["HostedZoneId"])
120
121 try:
122 hosted_zone.delete_rrset_by_name(resource_name)
123 except KeyError:
124 pass
125
126 @property
127 def physical_resource_id(self):
128 return self.name
129
130 def to_xml(self):
131 template = Template("""<ResourceRecordSet>
132 <Name>{{ record_set.name }}</Name>
133 <Type>{{ record_set._type }}</Type>
134 {% if record_set.set_identifier %}
135 <SetIdentifier>{{ record_set.set_identifier }}</SetIdentifier>
136 {% endif %}
137 {% if record_set.weight %}
138 <Weight>{{ record_set.weight }}</Weight>
139 {% endif %}
140 {% if record_set.region %}
141 <Region>{{ record_set.region }}</Region>
142 {% endif %}
143 <TTL>{{ record_set.ttl }}</TTL>
144 <ResourceRecords>
145 {% for record in record_set.records %}
146 <ResourceRecord>
147 <Value>{{ record }}</Value>
148 </ResourceRecord>
149 {% endfor %}
150 </ResourceRecords>
151 {% if record_set.health_check %}
152 <HealthCheckId>{{ record_set.health_check }}</HealthCheckId>
153 {% endif %}
154 </ResourceRecordSet>""")
155 return template.render(record_set=self)
156
157 def delete(self, *args, **kwargs):
158 ''' Not exposed as part of the Route 53 API - used for CloudFormation. args are ignored '''
159 hosted_zone = route53_backend.get_hosted_zone_by_name(
160 self.hosted_zone_name)
161 if not hosted_zone:
162 hosted_zone = route53_backend.get_hosted_zone(self.hosted_zone_id)
163 hosted_zone.delete_rrset_by_name(self.name)
164
165
166 class FakeZone(BaseModel):
167
168 def __init__(self, name, id_, private_zone, comment=None):
169 self.name = name
170 self.id = id_
171 if comment is not None:
172 self.comment = comment
173 self.private_zone = private_zone
174 self.rrsets = []
175
176 def add_rrset(self, record_set):
177 record_set = RecordSet(record_set)
178 self.rrsets.append(record_set)
179 return record_set
180
181 def upsert_rrset(self, record_set):
182 new_rrset = RecordSet(record_set)
183 for i, rrset in enumerate(self.rrsets):
184 if rrset.name == new_rrset.name:
185 self.rrsets[i] = new_rrset
186 break
187 else:
188 self.rrsets.append(new_rrset)
189 return new_rrset
190
191 def delete_rrset_by_name(self, name):
192 self.rrsets = [
193 record_set for record_set in self.rrsets if record_set.name != name]
194
195 def delete_rrset_by_id(self, set_identifier):
196 self.rrsets = [
197 record_set for record_set in self.rrsets if record_set.set_identifier != set_identifier]
198
199 def get_record_sets(self, type_filter, name_filter):
200 record_sets = list(self.rrsets) # Copy the list
201 if type_filter:
202 record_sets = [
203 record_set for record_set in record_sets if record_set._type == type_filter]
204 if name_filter:
205 record_sets = [
206 record_set for record_set in record_sets if record_set.name == name_filter]
207
208 return record_sets
209
210 @property
211 def physical_resource_id(self):
212 return self.name
213
214 @classmethod
215 def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
216 properties = cloudformation_json['Properties']
217 name = properties["Name"]
218
219 hosted_zone = route53_backend.create_hosted_zone(
220 name, private_zone=False)
221 return hosted_zone
222
223
224 class RecordSetGroup(BaseModel):
225
226 def __init__(self, hosted_zone_id, record_sets):
227 self.hosted_zone_id = hosted_zone_id
228 self.record_sets = record_sets
229
230 @property
231 def physical_resource_id(self):
232 return "arn:aws:route53:::hostedzone/{0}".format(self.hosted_zone_id)
233
234 @classmethod
235 def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
236 properties = cloudformation_json['Properties']
237
238 zone_name = properties.get("HostedZoneName")
239 if zone_name:
240 hosted_zone = route53_backend.get_hosted_zone_by_name(zone_name)
241 else:
242 hosted_zone = route53_backend.get_hosted_zone(properties["HostedZoneId"])
243 record_sets = properties["RecordSets"]
244 for record_set in record_sets:
245 hosted_zone.add_rrset(record_set)
246
247 record_set_group = RecordSetGroup(hosted_zone.id, record_sets)
248 return record_set_group
249
250
251 class Route53Backend(BaseBackend):
252
253 def __init__(self):
254 self.zones = {}
255 self.health_checks = {}
256 self.resource_tags = defaultdict(dict)
257
258 def create_hosted_zone(self, name, private_zone, comment=None):
259 new_id = create_route53_zone_id()
260 new_zone = FakeZone(
261 name, new_id, private_zone=private_zone, comment=comment)
262 self.zones[new_id] = new_zone
263 return new_zone
264
265 def change_tags_for_resource(self, resource_id, tags):
266 if 'Tag' in tags:
267 if isinstance(tags['Tag'], list):
268 for tag in tags['Tag']:
269 self.resource_tags[resource_id][tag['Key']] = tag['Value']
270 else:
271 key, value = (tags['Tag']['Key'], tags['Tag']['Value'])
272 self.resource_tags[resource_id][key] = value
273 else:
274 if 'Key' in tags:
275 if isinstance(tags['Key'], list):
276 for key in tags['Key']:
277 del(self.resource_tags[resource_id][key])
278 else:
279 del(self.resource_tags[resource_id][tags['Key']])
280
281 def list_tags_for_resource(self, resource_id):
282 if resource_id in self.resource_tags:
283 return self.resource_tags[resource_id]
284
285 def get_all_hosted_zones(self):
286 return self.zones.values()
287
288 def get_hosted_zone(self, id_):
289 return self.zones.get(id_.replace("/hostedzone/", ""))
290
291 def get_hosted_zone_by_name(self, name):
292 for zone in self.get_all_hosted_zones():
293 if zone.name == name:
294 return zone
295
296 def delete_hosted_zone(self, id_):
297 return self.zones.pop(id_.replace("/hostedzone/", ""), None)
298
299 def create_health_check(self, health_check_args):
300 health_check_id = str(uuid.uuid4())
301 health_check = HealthCheck(health_check_id, health_check_args)
302 self.health_checks[health_check_id] = health_check
303 return health_check
304
305 def get_health_checks(self):
306 return self.health_checks.values()
307
308 def delete_health_check(self, health_check_id):
309 return self.health_checks.pop(health_check_id, None)
310
311
312 route53_backend = Route53Backend()
313
[end of moto/route53/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/moto/route53/models.py b/moto/route53/models.py
--- a/moto/route53/models.py
+++ b/moto/route53/models.py
@@ -209,7 +209,7 @@
@property
def physical_resource_id(self):
- return self.name
+ return self.id
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
| {"golden_diff": "diff --git a/moto/route53/models.py b/moto/route53/models.py\n--- a/moto/route53/models.py\n+++ b/moto/route53/models.py\n@@ -209,7 +209,7 @@\n \n @property\n def physical_resource_id(self):\n- return self.name\n+ return self.id\n \n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):\n", "issue": "Resources in mentioned in cloud formation template is not getting created.\nHi,\r\nI am creating security group through cloud formation template and then trying to retrieve that through boto client but it says that security group does not exists. If i create security group through command line then i am able to fetch it.\r\nIt seems like resources in cloud formation does not get created when we deploy it.\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom collections import defaultdict\n\nimport string\nimport random\nimport uuid\nfrom jinja2 import Template\n\nfrom moto.core import BaseBackend, BaseModel\n\n\nROUTE53_ID_CHOICE = string.ascii_uppercase + string.digits\n\n\ndef create_route53_zone_id():\n # New ID's look like this Z1RWWTK7Y8UDDQ\n return ''.join([random.choice(ROUTE53_ID_CHOICE) for _ in range(0, 15)])\n\n\nclass HealthCheck(BaseModel):\n\n def __init__(self, health_check_id, health_check_args):\n self.id = health_check_id\n self.ip_address = health_check_args.get(\"ip_address\")\n self.port = health_check_args.get(\"port\", 80)\n self._type = health_check_args.get(\"type\")\n self.resource_path = health_check_args.get(\"resource_path\")\n self.fqdn = health_check_args.get(\"fqdn\")\n self.search_string = health_check_args.get(\"search_string\")\n self.request_interval = health_check_args.get(\"request_interval\", 30)\n self.failure_threshold = health_check_args.get(\"failure_threshold\", 3)\n\n @property\n def physical_resource_id(self):\n return self.id\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):\n properties = cloudformation_json['Properties']['HealthCheckConfig']\n health_check_args = {\n \"ip_address\": properties.get('IPAddress'),\n \"port\": properties.get('Port'),\n \"type\": properties['Type'],\n \"resource_path\": properties.get('ResourcePath'),\n \"fqdn\": properties.get('FullyQualifiedDomainName'),\n \"search_string\": properties.get('SearchString'),\n \"request_interval\": properties.get('RequestInterval'),\n \"failure_threshold\": properties.get('FailureThreshold'),\n }\n health_check = route53_backend.create_health_check(health_check_args)\n return health_check\n\n def to_xml(self):\n template = Template(\"\"\"<HealthCheck>\n <Id>{{ health_check.id }}</Id>\n <CallerReference>example.com 192.0.2.17</CallerReference>\n <HealthCheckConfig>\n <IPAddress>{{ health_check.ip_address }}</IPAddress>\n <Port>{{ health_check.port }}</Port>\n <Type>{{ health_check._type }}</Type>\n <ResourcePath>{{ health_check.resource_path }}</ResourcePath>\n <FullyQualifiedDomainName>{{ health_check.fqdn }}</FullyQualifiedDomainName>\n <RequestInterval>{{ health_check.request_interval }}</RequestInterval>\n <FailureThreshold>{{ health_check.failure_threshold }}</FailureThreshold>\n {% if health_check.search_string %}\n <SearchString>{{ health_check.search_string }}</SearchString>\n {% endif %}\n </HealthCheckConfig>\n <HealthCheckVersion>1</HealthCheckVersion>\n </HealthCheck>\"\"\")\n return template.render(health_check=self)\n\n\nclass RecordSet(BaseModel):\n\n def __init__(self, kwargs):\n self.name = 
kwargs.get('Name')\n self._type = kwargs.get('Type')\n self.ttl = kwargs.get('TTL')\n self.records = kwargs.get('ResourceRecords', [])\n self.set_identifier = kwargs.get('SetIdentifier')\n self.weight = kwargs.get('Weight')\n self.region = kwargs.get('Region')\n self.health_check = kwargs.get('HealthCheckId')\n self.hosted_zone_name = kwargs.get('HostedZoneName')\n self.hosted_zone_id = kwargs.get('HostedZoneId')\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):\n properties = cloudformation_json['Properties']\n\n zone_name = properties.get(\"HostedZoneName\")\n if zone_name:\n hosted_zone = route53_backend.get_hosted_zone_by_name(zone_name)\n else:\n hosted_zone = route53_backend.get_hosted_zone(\n properties[\"HostedZoneId\"])\n record_set = hosted_zone.add_rrset(properties)\n return record_set\n\n @classmethod\n def update_from_cloudformation_json(cls, original_resource, new_resource_name, cloudformation_json, region_name):\n cls.delete_from_cloudformation_json(\n original_resource.name, cloudformation_json, region_name)\n return cls.create_from_cloudformation_json(new_resource_name, cloudformation_json, region_name)\n\n @classmethod\n def delete_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):\n # this will break if you changed the zone the record is in,\n # unfortunately\n properties = cloudformation_json['Properties']\n\n zone_name = properties.get(\"HostedZoneName\")\n if zone_name:\n hosted_zone = route53_backend.get_hosted_zone_by_name(zone_name)\n else:\n hosted_zone = route53_backend.get_hosted_zone(\n properties[\"HostedZoneId\"])\n\n try:\n hosted_zone.delete_rrset_by_name(resource_name)\n except KeyError:\n pass\n\n @property\n def physical_resource_id(self):\n return self.name\n\n def to_xml(self):\n template = Template(\"\"\"<ResourceRecordSet>\n <Name>{{ record_set.name }}</Name>\n <Type>{{ record_set._type }}</Type>\n {% if record_set.set_identifier %}\n <SetIdentifier>{{ record_set.set_identifier }}</SetIdentifier>\n {% endif %}\n {% if record_set.weight %}\n <Weight>{{ record_set.weight }}</Weight>\n {% endif %}\n {% if record_set.region %}\n <Region>{{ record_set.region }}</Region>\n {% endif %}\n <TTL>{{ record_set.ttl }}</TTL>\n <ResourceRecords>\n {% for record in record_set.records %}\n <ResourceRecord>\n <Value>{{ record }}</Value>\n </ResourceRecord>\n {% endfor %}\n </ResourceRecords>\n {% if record_set.health_check %}\n <HealthCheckId>{{ record_set.health_check }}</HealthCheckId>\n {% endif %}\n </ResourceRecordSet>\"\"\")\n return template.render(record_set=self)\n\n def delete(self, *args, **kwargs):\n ''' Not exposed as part of the Route 53 API - used for CloudFormation. 
args are ignored '''\n hosted_zone = route53_backend.get_hosted_zone_by_name(\n self.hosted_zone_name)\n if not hosted_zone:\n hosted_zone = route53_backend.get_hosted_zone(self.hosted_zone_id)\n hosted_zone.delete_rrset_by_name(self.name)\n\n\nclass FakeZone(BaseModel):\n\n def __init__(self, name, id_, private_zone, comment=None):\n self.name = name\n self.id = id_\n if comment is not None:\n self.comment = comment\n self.private_zone = private_zone\n self.rrsets = []\n\n def add_rrset(self, record_set):\n record_set = RecordSet(record_set)\n self.rrsets.append(record_set)\n return record_set\n\n def upsert_rrset(self, record_set):\n new_rrset = RecordSet(record_set)\n for i, rrset in enumerate(self.rrsets):\n if rrset.name == new_rrset.name:\n self.rrsets[i] = new_rrset\n break\n else:\n self.rrsets.append(new_rrset)\n return new_rrset\n\n def delete_rrset_by_name(self, name):\n self.rrsets = [\n record_set for record_set in self.rrsets if record_set.name != name]\n\n def delete_rrset_by_id(self, set_identifier):\n self.rrsets = [\n record_set for record_set in self.rrsets if record_set.set_identifier != set_identifier]\n\n def get_record_sets(self, type_filter, name_filter):\n record_sets = list(self.rrsets) # Copy the list\n if type_filter:\n record_sets = [\n record_set for record_set in record_sets if record_set._type == type_filter]\n if name_filter:\n record_sets = [\n record_set for record_set in record_sets if record_set.name == name_filter]\n\n return record_sets\n\n @property\n def physical_resource_id(self):\n return self.name\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):\n properties = cloudformation_json['Properties']\n name = properties[\"Name\"]\n\n hosted_zone = route53_backend.create_hosted_zone(\n name, private_zone=False)\n return hosted_zone\n\n\nclass RecordSetGroup(BaseModel):\n\n def __init__(self, hosted_zone_id, record_sets):\n self.hosted_zone_id = hosted_zone_id\n self.record_sets = record_sets\n\n @property\n def physical_resource_id(self):\n return \"arn:aws:route53:::hostedzone/{0}\".format(self.hosted_zone_id)\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):\n properties = cloudformation_json['Properties']\n\n zone_name = properties.get(\"HostedZoneName\")\n if zone_name:\n hosted_zone = route53_backend.get_hosted_zone_by_name(zone_name)\n else:\n hosted_zone = route53_backend.get_hosted_zone(properties[\"HostedZoneId\"])\n record_sets = properties[\"RecordSets\"]\n for record_set in record_sets:\n hosted_zone.add_rrset(record_set)\n\n record_set_group = RecordSetGroup(hosted_zone.id, record_sets)\n return record_set_group\n\n\nclass Route53Backend(BaseBackend):\n\n def __init__(self):\n self.zones = {}\n self.health_checks = {}\n self.resource_tags = defaultdict(dict)\n\n def create_hosted_zone(self, name, private_zone, comment=None):\n new_id = create_route53_zone_id()\n new_zone = FakeZone(\n name, new_id, private_zone=private_zone, comment=comment)\n self.zones[new_id] = new_zone\n return new_zone\n\n def change_tags_for_resource(self, resource_id, tags):\n if 'Tag' in tags:\n if isinstance(tags['Tag'], list):\n for tag in tags['Tag']:\n self.resource_tags[resource_id][tag['Key']] = tag['Value']\n else:\n key, value = (tags['Tag']['Key'], tags['Tag']['Value'])\n self.resource_tags[resource_id][key] = value\n else:\n if 'Key' in tags:\n if isinstance(tags['Key'], list):\n for key in tags['Key']:\n 
del(self.resource_tags[resource_id][key])\n else:\n del(self.resource_tags[resource_id][tags['Key']])\n\n def list_tags_for_resource(self, resource_id):\n if resource_id in self.resource_tags:\n return self.resource_tags[resource_id]\n\n def get_all_hosted_zones(self):\n return self.zones.values()\n\n def get_hosted_zone(self, id_):\n return self.zones.get(id_.replace(\"/hostedzone/\", \"\"))\n\n def get_hosted_zone_by_name(self, name):\n for zone in self.get_all_hosted_zones():\n if zone.name == name:\n return zone\n\n def delete_hosted_zone(self, id_):\n return self.zones.pop(id_.replace(\"/hostedzone/\", \"\"), None)\n\n def create_health_check(self, health_check_args):\n health_check_id = str(uuid.uuid4())\n health_check = HealthCheck(health_check_id, health_check_args)\n self.health_checks[health_check_id] = health_check\n return health_check\n\n def get_health_checks(self):\n return self.health_checks.values()\n\n def delete_health_check(self, health_check_id):\n return self.health_checks.pop(health_check_id, None)\n\n\nroute53_backend = Route53Backend()\n", "path": "moto/route53/models.py"}]} | 3,969 | 105 |
gh_patches_debug_30649 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2390 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove "Short Name" field on user form
I'd like to remove the "Short Name" field on the user form. In a [Matrix discussion](https://matrix.to/#/!xInuTkBwjXZXYatIlm:matrix.mathesar.org/$2e6ULgkergV1BZwlHxpNBNUKcG4W-bnetG9bApfg23M?via=matrix.mathesar.org&via=matrix.org) @kgodey said this was fine. It's an optional field so we can keep it in the API but just remove the form UI until we have a use-case for short names.
</issue>
<code>
[start of mathesar/urls.py]
1 from django.contrib.auth.views import LoginView
2 from django.urls import include, path, re_path
3 from rest_framework_nested import routers
4
5 from mathesar import views
6 from mathesar.api.db import viewsets as db_viewsets
7 from mathesar.api.ui import viewsets as ui_viewsets
8 from mathesar.users.password_reset import MathesarPasswordResetConfirmView
9
10 db_router = routers.DefaultRouter()
11 db_router.register(r'tables', db_viewsets.TableViewSet, basename='table')
12 db_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')
13 db_router.register(r'links', db_viewsets.LinkViewSet, basename='links')
14 db_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')
15 db_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')
16 db_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')
17
18 db_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
19 db_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')
20 db_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')
21 db_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')
22 db_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')
23
24 ui_router = routers.DefaultRouter()
25 ui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')
26 ui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')
27 ui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')
28 ui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')
29 ui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')
30
31 urlpatterns = [
32 path('api/db/v0/', include(db_router.urls)),
33 path('api/db/v0/', include(db_table_router.urls)),
34 path('api/ui/v0/', include(ui_router.urls)),
35 path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),
36 path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),
37 path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),
38 path('auth/', include('django.contrib.auth.urls')),
39 path('', views.home, name='home'),
40 path('<db_name>/', views.schemas, name='schemas'),
41 re_path(
42 r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
43 views.schema_home,
44 name='schema_home'
45 ),
46 ]
47
[end of mathesar/urls.py]
[start of mathesar/views.py]
1 from django.conf import settings
2 from django.contrib.auth.decorators import login_required
3 from django.shortcuts import render, redirect, get_object_or_404
4 from rest_framework import status
5 from rest_framework.decorators import api_view
6 from rest_framework.response import Response
7
8 from mathesar.api.db.permissions.database import DatabaseAccessPolicy
9 from mathesar.api.db.permissions.query import QueryAccessPolicy
10 from mathesar.api.db.permissions.schema import SchemaAccessPolicy
11 from mathesar.api.db.permissions.table import TableAccessPolicy
12 from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer
13 from mathesar.api.serializers.schemas import SchemaSerializer
14 from mathesar.api.serializers.tables import TableSerializer
15 from mathesar.api.serializers.queries import QuerySerializer
16 from mathesar.api.ui.serializers.users import UserSerializer
17 from mathesar.database.types import UIType
18 from mathesar.models.base import Database, Schema, Table
19 from mathesar.models.query import UIQuery
20 from mathesar.state import reset_reflection
21
22
23 def get_schema_list(request, database):
24 qs = Schema.objects.filter(database=database)
25 permission_restricted_qs = SchemaAccessPolicy.scope_queryset(request, qs)
26 schema_serializer = SchemaSerializer(
27 permission_restricted_qs,
28 many=True,
29 context={'request': request}
30 )
31 return schema_serializer.data
32
33
34 def _get_permissible_db_queryset(request):
35 qs = Database.objects.all()
36 permission_restricted_qs = DatabaseAccessPolicy.scope_queryset(request, qs)
37 schema_qs = Schema.objects.all()
38 permitted_schemas = SchemaAccessPolicy.scope_queryset(request, schema_qs)
39 databases_from_permitted_schema = Database.objects.filter(schemas__in=permitted_schemas)
40 permission_restricted_qs = permission_restricted_qs | databases_from_permitted_schema
41 return permission_restricted_qs.distinct()
42
43
44 def get_database_list(request):
45 permission_restricted_db_qs = _get_permissible_db_queryset(request)
46 database_serializer = DatabaseSerializer(
47 permission_restricted_db_qs,
48 many=True,
49 context={'request': request}
50 )
51 return database_serializer.data
52
53
54 def get_table_list(request, schema):
55 if schema is None:
56 return []
57 qs = Table.objects.filter(schema=schema)
58 permission_restricted_qs = TableAccessPolicy.scope_queryset(request, qs)
59 table_serializer = TableSerializer(
60 permission_restricted_qs,
61 many=True,
62 context={'request': request}
63 )
64 return table_serializer.data
65
66
67 def get_queries_list(request, schema):
68 if schema is None:
69 return []
70 qs = UIQuery.objects.filter(base_table__schema=schema)
71 permission_restricted_qs = QueryAccessPolicy.scope_queryset(request, qs)
72
73 query_serializer = QuerySerializer(
74 permission_restricted_qs,
75 many=True,
76 context={'request': request}
77 )
78 return query_serializer.data
79
80
81 def get_ui_type_list(request, database):
82 if database is None:
83 return []
84 type_serializer = TypeSerializer(
85 UIType,
86 many=True,
87 context={'request': request}
88 )
89 return type_serializer.data
90
91
92 def get_user_data(request):
93 user_serializer = UserSerializer(
94 request.user,
95 many=False,
96 context={'request': request}
97 )
98 return user_serializer.data
99
100
101 def get_common_data(request, database, schema=None):
102 return {
103 'current_db': database.name if database else None,
104 'current_schema': schema.id if schema else None,
105 'schemas': get_schema_list(request, database),
106 'databases': get_database_list(request),
107 'tables': get_table_list(request, schema),
108 'queries': get_queries_list(request, schema),
109 'abstract_types': get_ui_type_list(request, database),
110 'user': get_user_data(request),
111 'live_demo_mode': getattr(settings, 'MATHESAR_LIVE_DEMO', False),
112 }
113
114
115 def get_current_database(request, db_name):
116 """Get database from passed name, with fall back behavior."""
117 permitted_databases = _get_permissible_db_queryset(request)
118 if db_name is not None:
119 current_database = get_object_or_404(permitted_databases, name=db_name)
120 else:
121 request_database_name = request.GET.get('database')
122 try:
123 if request_database_name is not None:
124 # Try to get the database named specified in the request
125 current_database = permitted_databases.get(name=request_database_name)
126 else:
127 # Try to get the first database available
128 current_database = permitted_databases.order_by('id').first()
129 except Database.DoesNotExist:
130 current_database = None
131 return current_database
132
133
134 def get_current_schema(request, schema_id, database):
135 # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.
136 if schema_id is not None:
137 permitted_schemas = SchemaAccessPolicy.scope_queryset(request, Schema.objects.all())
138 return get_object_or_404(permitted_schemas, id=schema_id)
139 else:
140 try:
141 # Try to get the first schema in the DB
142 return Schema.objects.filter(database=database).order_by('id').first()
143 except Schema.DoesNotExist:
144 return None
145
146
147 def render_schema(request, database, schema):
148 # if there's no schema available, redirect to the schemas page.
149 if not schema:
150 return redirect('schemas', db_name=database.name)
151 else:
152 # We are redirecting so that the correct URL is passed to the frontend.
153 return redirect('schema_home', db_name=database.name, schema_id=schema.id)
154
155
156 @login_required
157 @api_view(['POST'])
158 def reflect_all(_):
159 reset_reflection()
160 return Response(status=status.HTTP_200_OK)
161
162
163 @login_required
164 def home(request):
165 database = get_current_database(request, None)
166 if database is None:
167 return render(request, 'mathesar/index.html', {
168 'common_data': get_common_data(request, database)
169 })
170 return redirect('schemas', db_name=database.name)
171
172
173 @login_required
174 def schema_home(request, db_name, schema_id, **kwargs):
175 database = get_current_database(request, db_name)
176 schema = get_current_schema(request, schema_id, database)
177 return render(request, 'mathesar/index.html', {
178 'common_data': get_common_data(request, database, schema)
179 })
180
181
182 @login_required
183 def schemas(request, db_name):
184 database = get_current_database(request, db_name)
185 return render(request, 'mathesar/index.html', {
186 'common_data': get_common_data(request, database, None)
187 })
188
[end of mathesar/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -37,6 +37,11 @@
path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),
path('auth/', include('django.contrib.auth.urls')),
path('', views.home, name='home'),
+ path('profile/', views.profile, name='profile'),
+ path('administration/', views.admin_home, name='admin_home'),
+ path('administration/users/', views.admin_home, name='admin_users_home'),
+ path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),
+ path('administration/general/', views.admin_home, name='admin_general'),
path('<db_name>/', views.schemas, name='schemas'),
re_path(
r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
diff --git a/mathesar/views.py b/mathesar/views.py
--- a/mathesar/views.py
+++ b/mathesar/views.py
@@ -98,7 +98,7 @@
return user_serializer.data
-def get_common_data(request, database, schema=None):
+def get_common_data(request, database=None, schema=None):
return {
'current_db': database.name if database else None,
'current_schema': schema.id if schema else None,
@@ -170,6 +170,20 @@
return redirect('schemas', db_name=database.name)
+@login_required
+def profile(request):
+ return render(request, 'mathesar/index.html', {
+ 'common_data': get_common_data(request)
+ })
+
+
+@login_required
+def admin_home(request, **kwargs):
+ return render(request, 'mathesar/index.html', {
+ 'common_data': get_common_data(request)
+ })
+
+
@login_required
def schema_home(request, db_name, schema_id, **kwargs):
database = get_current_database(request, db_name)
| {"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -37,6 +37,11 @@\n path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n+ path('profile/', views.profile, name='profile'),\n+ path('administration/', views.admin_home, name='admin_home'),\n+ path('administration/users/', views.admin_home, name='admin_users_home'),\n+ path('administration/users/<user_id>/', views.admin_home, name='admin_users_edit'),\n+ path('administration/general/', views.admin_home, name='admin_general'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\ndiff --git a/mathesar/views.py b/mathesar/views.py\n--- a/mathesar/views.py\n+++ b/mathesar/views.py\n@@ -98,7 +98,7 @@\n return user_serializer.data\n \n \n-def get_common_data(request, database, schema=None):\n+def get_common_data(request, database=None, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n@@ -170,6 +170,20 @@\n return redirect('schemas', db_name=database.name)\n \n \n+@login_required\n+def profile(request):\n+ return render(request, 'mathesar/index.html', {\n+ 'common_data': get_common_data(request)\n+ })\n+\n+\n+@login_required\n+def admin_home(request, **kwargs):\n+ return render(request, 'mathesar/index.html', {\n+ 'common_data': get_common_data(request)\n+ })\n+\n+\n @login_required\n def schema_home(request, db_name, schema_id, **kwargs):\n database = get_current_database(request, db_name)\n", "issue": "Remove \"Short Name\" field on user form\nI'd like to remove the \"Short Name\" field on the user form. In a [Matrix discussion](https://matrix.to/#/!xInuTkBwjXZXYatIlm:matrix.mathesar.org/$2e6ULgkergV1BZwlHxpNBNUKcG4W-bnetG9bApfg23M?via=matrix.mathesar.org&via=matrix.org) @kgodey said this was fine. 
It's an optional field so we can keep it in the API but just remove the form UI until we have a use-case for short names.\n\n\n", "before_files": [{"content": "from django.contrib.auth.views import LoginView\nfrom django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\nfrom mathesar.users.password_reset import MathesarPasswordResetConfirmView\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'version', ui_viewsets.VersionViewSet, basename='version')\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/password_reset_confirm', MathesarPasswordResetConfirmView.as_view(), name='password_reset_confirm'),\n path('auth/login/', LoginView.as_view(redirect_authenticated_user=True), name='login'),\n path('auth/', include('django.contrib.auth.urls')),\n path('', views.home, name='home'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}, {"content": "from django.conf import settings\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom rest_framework import status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\nfrom mathesar.api.db.permissions.database import DatabaseAccessPolicy\nfrom mathesar.api.db.permissions.query import QueryAccessPolicy\nfrom mathesar.api.db.permissions.schema import SchemaAccessPolicy\nfrom mathesar.api.db.permissions.table import TableAccessPolicy\nfrom mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\nfrom mathesar.api.serializers.schemas import SchemaSerializer\nfrom mathesar.api.serializers.tables import TableSerializer\nfrom mathesar.api.serializers.queries import QuerySerializer\nfrom 
mathesar.api.ui.serializers.users import UserSerializer\nfrom mathesar.database.types import UIType\nfrom mathesar.models.base import Database, Schema, Table\nfrom mathesar.models.query import UIQuery\nfrom mathesar.state import reset_reflection\n\n\ndef get_schema_list(request, database):\n qs = Schema.objects.filter(database=database)\n permission_restricted_qs = SchemaAccessPolicy.scope_queryset(request, qs)\n schema_serializer = SchemaSerializer(\n permission_restricted_qs,\n many=True,\n context={'request': request}\n )\n return schema_serializer.data\n\n\ndef _get_permissible_db_queryset(request):\n qs = Database.objects.all()\n permission_restricted_qs = DatabaseAccessPolicy.scope_queryset(request, qs)\n schema_qs = Schema.objects.all()\n permitted_schemas = SchemaAccessPolicy.scope_queryset(request, schema_qs)\n databases_from_permitted_schema = Database.objects.filter(schemas__in=permitted_schemas)\n permission_restricted_qs = permission_restricted_qs | databases_from_permitted_schema\n return permission_restricted_qs.distinct()\n\n\ndef get_database_list(request):\n permission_restricted_db_qs = _get_permissible_db_queryset(request)\n database_serializer = DatabaseSerializer(\n permission_restricted_db_qs,\n many=True,\n context={'request': request}\n )\n return database_serializer.data\n\n\ndef get_table_list(request, schema):\n if schema is None:\n return []\n qs = Table.objects.filter(schema=schema)\n permission_restricted_qs = TableAccessPolicy.scope_queryset(request, qs)\n table_serializer = TableSerializer(\n permission_restricted_qs,\n many=True,\n context={'request': request}\n )\n return table_serializer.data\n\n\ndef get_queries_list(request, schema):\n if schema is None:\n return []\n qs = UIQuery.objects.filter(base_table__schema=schema)\n permission_restricted_qs = QueryAccessPolicy.scope_queryset(request, qs)\n\n query_serializer = QuerySerializer(\n permission_restricted_qs,\n many=True,\n context={'request': request}\n )\n return query_serializer.data\n\n\ndef get_ui_type_list(request, database):\n if database is None:\n return []\n type_serializer = TypeSerializer(\n UIType,\n many=True,\n context={'request': request}\n )\n return type_serializer.data\n\n\ndef get_user_data(request):\n user_serializer = UserSerializer(\n request.user,\n many=False,\n context={'request': request}\n )\n return user_serializer.data\n\n\ndef get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n 'tables': get_table_list(request, schema),\n 'queries': get_queries_list(request, schema),\n 'abstract_types': get_ui_type_list(request, database),\n 'user': get_user_data(request),\n 'live_demo_mode': getattr(settings, 'MATHESAR_LIVE_DEMO', False),\n }\n\n\ndef get_current_database(request, db_name):\n \"\"\"Get database from passed name, with fall back behavior.\"\"\"\n permitted_databases = _get_permissible_db_queryset(request)\n if db_name is not None:\n current_database = get_object_or_404(permitted_databases, name=db_name)\n else:\n request_database_name = request.GET.get('database')\n try:\n if request_database_name is not None:\n # Try to get the database named specified in the request\n current_database = permitted_databases.get(name=request_database_name)\n else:\n # Try to get the first database available\n current_database = permitted_databases.order_by('id').first()\n except 
Database.DoesNotExist:\n current_database = None\n return current_database\n\n\ndef get_current_schema(request, schema_id, database):\n # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.\n if schema_id is not None:\n permitted_schemas = SchemaAccessPolicy.scope_queryset(request, Schema.objects.all())\n return get_object_or_404(permitted_schemas, id=schema_id)\n else:\n try:\n # Try to get the first schema in the DB\n return Schema.objects.filter(database=database).order_by('id').first()\n except Schema.DoesNotExist:\n return None\n\n\ndef render_schema(request, database, schema):\n # if there's no schema available, redirect to the schemas page.\n if not schema:\n return redirect('schemas', db_name=database.name)\n else:\n # We are redirecting so that the correct URL is passed to the frontend.\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n\n\n@login_required\n@api_view(['POST'])\ndef reflect_all(_):\n reset_reflection()\n return Response(status=status.HTTP_200_OK)\n\n\n@login_required\ndef home(request):\n database = get_current_database(request, None)\n if database is None:\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database)\n })\n return redirect('schemas', db_name=database.name)\n\n\n@login_required\ndef schema_home(request, db_name, schema_id, **kwargs):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, schema_id, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n\n\n@login_required\ndef schemas(request, db_name):\n database = get_current_database(request, db_name)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, None)\n })\n", "path": "mathesar/views.py"}]} | 3,189 | 445 |
gh_patches_debug_5607 | rasdani/github-patches | git_diff | pallets__werkzeug-2643 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stream argument to ProfilerMiddleware is not type-hinted correctly
The profiler middleware (`src/werkzeug/middleware/profiler.py:80`) documentation and implementation state that the argument `stream` can be set to `None` to disable output.
However, the type hint specifies `stream: t.IO[str] = sys.stdout`, which is inconsistent with the documentation and the implementation.
To replicate the issue, run a type-hint-aware validator against code instantiating a `ProfilerMiddleware` with the `stream` argument set to `None`:
```
ProfilerMiddleware(
app,
stream=None
)
```
It will cause an error stating that `None` is not an acceptable value for `stream`.
It's a minor and easy fix; I'm happy to provide a PR if the proposed change below is deemed acceptable:
```
stream: t.Union[t.IO[str], None] = sys.stdout,
```
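For illustration, a minimal self-contained sketch (independent of Werkzeug; the helper name is made up) showing that the `t.Union[t.IO[str], None]` / `t.Optional[t.IO[str]]` form satisfies a type checker for both a real stream and `None`:
```
import sys
import typing as t

# t.Optional[t.IO[str]] is shorthand for t.Union[t.IO[str], None], so both a
# text stream and None are valid arguments under this annotation.
def write_stats(stream: t.Optional[t.IO[str]] = sys.stdout) -> None:
    if stream is not None:
        print("stats", file=stream)

write_stats(sys.stderr)  # accepted
write_stats(None)        # also accepted once the annotation allows None
```
Either spelling of the annotation expresses the same type.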
Environment:
- Python version: 3.10
- Werkzeug version: 2.2.3
</issue>
<code>
[start of src/werkzeug/middleware/profiler.py]
1 """
2 Application Profiler
3 ====================
4
5 This module provides a middleware that profiles each request with the
6 :mod:`cProfile` module. This can help identify bottlenecks in your code
7 that may be slowing down your application.
8
9 .. autoclass:: ProfilerMiddleware
10
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
13 """
14 import os.path
15 import sys
16 import time
17 import typing as t
18 from pstats import Stats
19
20 try:
21 from cProfile import Profile
22 except ImportError:
23 from profile import Profile # type: ignore
24
25 if t.TYPE_CHECKING:
26 from _typeshed.wsgi import StartResponse
27 from _typeshed.wsgi import WSGIApplication
28 from _typeshed.wsgi import WSGIEnvironment
29
30
31 class ProfilerMiddleware:
32 """Wrap a WSGI application and profile the execution of each
33 request. Responses are buffered so that timings are more exact.
34
35 If ``stream`` is given, :class:`pstats.Stats` are written to it
36 after each request. If ``profile_dir`` is given, :mod:`cProfile`
37 data files are saved to that directory, one file per request.
38
39 The filename can be customized by passing ``filename_format``. If
40 it is a string, it will be formatted using :meth:`str.format` with
41 the following fields available:
42
43 - ``{method}`` - The request method; GET, POST, etc.
44 - ``{path}`` - The request path or 'root' should one not exist.
45 - ``{elapsed}`` - The elapsed time of the request.
46 - ``{time}`` - The time of the request.
47
48 If it is a callable, it will be called with the WSGI ``environ``
49 dict and should return a filename.
50
51 :param app: The WSGI application to wrap.
52 :param stream: Write stats to this stream. Disable with ``None``.
53 :param sort_by: A tuple of columns to sort stats by. See
54 :meth:`pstats.Stats.sort_stats`.
55 :param restrictions: A tuple of restrictions to filter stats by. See
56 :meth:`pstats.Stats.print_stats`.
57 :param profile_dir: Save profile data files to this directory.
58 :param filename_format: Format string for profile data file names,
59 or a callable returning a name. See explanation above.
60
61 .. code-block:: python
62
63 from werkzeug.middleware.profiler import ProfilerMiddleware
64 app = ProfilerMiddleware(app)
65
66 .. versionchanged:: 0.15
67 Stats are written even if ``profile_dir`` is given, and can be
68 disable by passing ``stream=None``.
69
70 .. versionadded:: 0.15
71 Added ``filename_format``.
72
73 .. versionadded:: 0.9
74 Added ``restrictions`` and ``profile_dir``.
75 """
76
77 def __init__(
78 self,
79 app: "WSGIApplication",
80 stream: t.IO[str] = sys.stdout,
81 sort_by: t.Iterable[str] = ("time", "calls"),
82 restrictions: t.Iterable[t.Union[str, int, float]] = (),
83 profile_dir: t.Optional[str] = None,
84 filename_format: str = "{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof",
85 ) -> None:
86 self._app = app
87 self._stream = stream
88 self._sort_by = sort_by
89 self._restrictions = restrictions
90 self._profile_dir = profile_dir
91 self._filename_format = filename_format
92
93 def __call__(
94 self, environ: "WSGIEnvironment", start_response: "StartResponse"
95 ) -> t.Iterable[bytes]:
96 response_body: t.List[bytes] = []
97
98 def catching_start_response(status, headers, exc_info=None): # type: ignore
99 start_response(status, headers, exc_info)
100 return response_body.append
101
102 def runapp() -> None:
103 app_iter = self._app(
104 environ, t.cast("StartResponse", catching_start_response)
105 )
106 response_body.extend(app_iter)
107
108 if hasattr(app_iter, "close"):
109 app_iter.close()
110
111 profile = Profile()
112 start = time.time()
113 profile.runcall(runapp)
114 body = b"".join(response_body)
115 elapsed = time.time() - start
116
117 if self._profile_dir is not None:
118 if callable(self._filename_format):
119 filename = self._filename_format(environ)
120 else:
121 filename = self._filename_format.format(
122 method=environ["REQUEST_METHOD"],
123 path=environ["PATH_INFO"].strip("/").replace("/", ".") or "root",
124 elapsed=elapsed * 1000.0,
125 time=time.time(),
126 )
127 filename = os.path.join(self._profile_dir, filename)
128 profile.dump_stats(filename)
129
130 if self._stream is not None:
131 stats = Stats(profile, stream=self._stream)
132 stats.sort_stats(*self._sort_by)
133 print("-" * 80, file=self._stream)
134 path_info = environ.get("PATH_INFO", "")
135 print(f"PATH: {path_info!r}", file=self._stream)
136 stats.print_stats(*self._restrictions)
137 print(f"{'-' * 80}\n", file=self._stream)
138
139 return [body]
140
[end of src/werkzeug/middleware/profiler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py
--- a/src/werkzeug/middleware/profiler.py
+++ b/src/werkzeug/middleware/profiler.py
@@ -77,7 +77,7 @@
def __init__(
self,
app: "WSGIApplication",
- stream: t.IO[str] = sys.stdout,
+ stream: t.Union[t.IO[str], None] = sys.stdout,
sort_by: t.Iterable[str] = ("time", "calls"),
restrictions: t.Iterable[t.Union[str, int, float]] = (),
profile_dir: t.Optional[str] = None,
| {"golden_diff": "diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py\n--- a/src/werkzeug/middleware/profiler.py\n+++ b/src/werkzeug/middleware/profiler.py\n@@ -77,7 +77,7 @@\n def __init__(\n self,\n app: \"WSGIApplication\",\n- stream: t.IO[str] = sys.stdout,\n+ stream: t.Union[t.IO[str], None] = sys.stdout,\n sort_by: t.Iterable[str] = (\"time\", \"calls\"),\n restrictions: t.Iterable[t.Union[str, int, float]] = (),\n profile_dir: t.Optional[str] = None,\n", "issue": "Stream argument to ProfilerMiddleware is not typed hinted correctly\nThe profiler middleware `src/werkzeug/middleware/profiler.py:80` documentation and implementation states that the argument `stream` can be set to `None` to disable output.\r\n \r\nHowever the type hints specifies `stream: t.IO[str] = sys.stdout,` which is inconsistent with the documentation and the implementation.\r\n\r\nTo replicate the issue run a type hint aware validator against something initiating a `ProfilerMiddleware` with the `stream` argument set to `None`:\r\n```\r\nProfilerMiddleware(\r\n app,\r\n stream=None\r\n)\r\n```\r\nIt will cause an error stating that `None` is not an acceptable value for `stream`.\r\n\r\nIt's a minor and easy fix that I'm happy to provide a PR for if deemed acceptable with the proposed change:\r\n```\r\nstream: t.Union[t.IO[str], None] = sys.stdout,\r\n```\r\n\r\nEnvironment:\r\n\r\n- Python version: 3.10\r\n- Werkzeug version: 2.2.3\r\n\n", "before_files": [{"content": "\"\"\"\nApplication Profiler\n====================\n\nThis module provides a middleware that profiles each request with the\n:mod:`cProfile` module. This can help identify bottlenecks in your code\nthat may be slowing down your application.\n\n.. autoclass:: ProfilerMiddleware\n\n:copyright: 2007 Pallets\n:license: BSD-3-Clause\n\"\"\"\nimport os.path\nimport sys\nimport time\nimport typing as t\nfrom pstats import Stats\n\ntry:\n from cProfile import Profile\nexcept ImportError:\n from profile import Profile # type: ignore\n\nif t.TYPE_CHECKING:\n from _typeshed.wsgi import StartResponse\n from _typeshed.wsgi import WSGIApplication\n from _typeshed.wsgi import WSGIEnvironment\n\n\nclass ProfilerMiddleware:\n \"\"\"Wrap a WSGI application and profile the execution of each\n request. Responses are buffered so that timings are more exact.\n\n If ``stream`` is given, :class:`pstats.Stats` are written to it\n after each request. If ``profile_dir`` is given, :mod:`cProfile`\n data files are saved to that directory, one file per request.\n\n The filename can be customized by passing ``filename_format``. If\n it is a string, it will be formatted using :meth:`str.format` with\n the following fields available:\n\n - ``{method}`` - The request method; GET, POST, etc.\n - ``{path}`` - The request path or 'root' should one not exist.\n - ``{elapsed}`` - The elapsed time of the request.\n - ``{time}`` - The time of the request.\n\n If it is a callable, it will be called with the WSGI ``environ``\n dict and should return a filename.\n\n :param app: The WSGI application to wrap.\n :param stream: Write stats to this stream. Disable with ``None``.\n :param sort_by: A tuple of columns to sort stats by. See\n :meth:`pstats.Stats.sort_stats`.\n :param restrictions: A tuple of restrictions to filter stats by. See\n :meth:`pstats.Stats.print_stats`.\n :param profile_dir: Save profile data files to this directory.\n :param filename_format: Format string for profile data file names,\n or a callable returning a name. 
See explanation above.\n\n .. code-block:: python\n\n from werkzeug.middleware.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n .. versionchanged:: 0.15\n Stats are written even if ``profile_dir`` is given, and can be\n disable by passing ``stream=None``.\n\n .. versionadded:: 0.15\n Added ``filename_format``.\n\n .. versionadded:: 0.9\n Added ``restrictions`` and ``profile_dir``.\n \"\"\"\n\n def __init__(\n self,\n app: \"WSGIApplication\",\n stream: t.IO[str] = sys.stdout,\n sort_by: t.Iterable[str] = (\"time\", \"calls\"),\n restrictions: t.Iterable[t.Union[str, int, float]] = (),\n profile_dir: t.Optional[str] = None,\n filename_format: str = \"{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof\",\n ) -> None:\n self._app = app\n self._stream = stream\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n self._filename_format = filename_format\n\n def __call__(\n self, environ: \"WSGIEnvironment\", start_response: \"StartResponse\"\n ) -> t.Iterable[bytes]:\n response_body: t.List[bytes] = []\n\n def catching_start_response(status, headers, exc_info=None): # type: ignore\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp() -> None:\n app_iter = self._app(\n environ, t.cast(\"StartResponse\", catching_start_response)\n )\n response_body.extend(app_iter)\n\n if hasattr(app_iter, \"close\"):\n app_iter.close()\n\n profile = Profile()\n start = time.time()\n profile.runcall(runapp)\n body = b\"\".join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n if callable(self._filename_format):\n filename = self._filename_format(environ)\n else:\n filename = self._filename_format.format(\n method=environ[\"REQUEST_METHOD\"],\n path=environ[\"PATH_INFO\"].strip(\"/\").replace(\"/\", \".\") or \"root\",\n elapsed=elapsed * 1000.0,\n time=time.time(),\n )\n filename = os.path.join(self._profile_dir, filename)\n profile.dump_stats(filename)\n\n if self._stream is not None:\n stats = Stats(profile, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n print(\"-\" * 80, file=self._stream)\n path_info = environ.get(\"PATH_INFO\", \"\")\n print(f\"PATH: {path_info!r}\", file=self._stream)\n stats.print_stats(*self._restrictions)\n print(f\"{'-' * 80}\\n\", file=self._stream)\n\n return [body]\n", "path": "src/werkzeug/middleware/profiler.py"}]} | 2,238 | 149 |
gh_patches_debug_25517 | rasdani/github-patches | git_diff | open-mmlab__mmengine-642 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] resume error
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmengine.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
A clear and concise description of what the bug is.
**Reproduction**
1. What command or script did you run?
When I try to resume the model by adding `resume=True, load_from='epoch_380.pth'` to the config, the newest version of mmengine raises a RuntimeError:
```
RuntimeError: The model and loaded state dict do not match exactly
unexpected key in source state_dict: data_preprocessor.mean, data_preprocessor.std
```
The error disappears when checking out v0.1.0.
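For context, here is a rough standalone analogy using plain PyTorch (not mmengine's own checkpoint loader) of how a strict state-dict load fails when the checkpoint carries extra keys such as `data_preprocessor.mean` / `data_preprocessor.std`:
```
import torch
from torch import nn

model = nn.Linear(2, 2)
state = model.state_dict()
# Simulate the extra normalization buffers reported in the traceback.
state['data_preprocessor.mean'] = torch.zeros(3)
state['data_preprocessor.std'] = torch.ones(3)

try:
    model.load_state_dict(state, strict=True)  # strict loading rejects unknown keys
except RuntimeError as err:
    print(err)  # reports the unexpected data_preprocessor.* keys
```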
</issue>
<code>
[start of mmengine/hooks/ema_hook.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import copy
3 import itertools
4 import logging
5 from typing import Dict, Optional
6
7 from mmengine.logging import print_log
8 from mmengine.model import is_model_wrapper
9 from mmengine.registry import HOOKS, MODELS
10 from .hook import DATA_BATCH, Hook
11
12
13 @HOOKS.register_module()
14 class EMAHook(Hook):
15 """A Hook to apply Exponential Moving Average (EMA) on the model during
16 training.
17
18 Note:
19 - EMAHook takes priority over CheckpointHook.
20 - The original model parameters are actually saved in ema field after
21 train.
22 - ``begin_iter`` and ``begin_epoch`` cannot be set at the same time.
23
24 Args:
25 ema_type (str): The type of EMA strategy to use. You can find the
26 supported strategies in :mod:`mmengine.model.averaged_model`.
27 Defaults to 'ExponentialMovingAverage'.
28 strict_load (bool): Whether to strictly enforce that the keys of
29 ``state_dict`` in checkpoint match the keys returned by
30 ``self.module.state_dict``. Defaults to True.
31 begin_iter (int): The number of iteration to enable ``EMAHook``.
32 Defaults to 0.
33 begin_epoch (int): The number of epoch to enable ``EMAHook``. Defaults
34 to 0.
35 **kwargs: Keyword arguments passed to subclasses of
36 :obj:`BaseAveragedModel`
37 """
38
39 priority = 'NORMAL'
40
41 def __init__(self,
42 ema_type: str = 'ExponentialMovingAverage',
43 strict_load: bool = True,
44 begin_iter: int = 0,
45 begin_epoch: int = 0,
46 **kwargs):
47 self.strict_load = strict_load
48 self.ema_cfg = dict(type=ema_type, **kwargs)
49 assert not (begin_iter != 0 and begin_epoch != 0), (
50 '`begin_iter` and `begin_epoch` should not be both set.')
51 assert begin_iter >= 0, (
52 f'begin_iter must larger than 0, but got begin: {begin_iter}')
53 assert begin_epoch >= 0, (
54 f'begin_epoch must larger than 0, but got begin: {begin_epoch}')
55 self.begin_iter = begin_iter
56 self.begin_epoch = begin_epoch
57 # If `begin_epoch` and `begin_iter` are not set, `EMAHook` will be
58 # enabled at 0 iteration.
59 self.enabled_by_epoch = self.begin_epoch > 0
60
61 def before_run(self, runner) -> None:
62 """Create an ema copy of the model.
63
64 Args:
65 runner (Runner): The runner of the training process.
66 """
67 model = runner.model
68 if is_model_wrapper(model):
69 model = model.module
70 self.src_model = model
71 self.ema_model = MODELS.build(
72 self.ema_cfg, default_args=dict(model=self.src_model))
73
74 def before_train(self, runner) -> None:
75 """Check the begin_epoch/iter is smaller than max_epochs/iters.
76
77 Args:
78 runner (Runner): The runner of the training process.
79 """
80 if self.enabled_by_epoch:
81 assert self.begin_epoch <= runner.max_epochs, (
82 'self.begin_epoch should be smaller than runner.max_epochs: '
83 f'{runner.max_epochs}, but got begin: {self.begin_epoch}')
84 else:
85 assert self.begin_iter <= runner.max_iters, (
86 'self.begin_iter should be smaller than runner.max_iters: '
87 f'{runner.max_iters}, but got begin: {self.begin_iter}')
88
89 def after_train_iter(self,
90 runner,
91 batch_idx: int,
92 data_batch: DATA_BATCH = None,
93 outputs: Optional[dict] = None) -> None:
94 """Update ema parameter.
95
96 Args:
97 runner (Runner): The runner of the training process.
98 batch_idx (int): The index of the current batch in the train loop.
99 data_batch (Sequence[dict], optional): Data from dataloader.
100 Defaults to None.
101 outputs (dict, optional): Outputs from model. Defaults to None.
102 """
103 if self._ema_started(runner):
104 self.ema_model.update_parameters(self.src_model)
105 else:
106 ema_params = self.ema_model.module.state_dict()
107 src_params = self.src_model.state_dict()
108 for k, p in ema_params.items():
109 p.data.copy_(src_params[k].data)
110
111 def before_val_epoch(self, runner) -> None:
112 """We load parameter values from ema model to source model before
113 validation.
114
115 Args:
116 runner (Runner): The runner of the training process.
117 """
118 self._swap_ema_parameters()
119
120 def after_val_epoch(self,
121 runner,
122 metrics: Optional[Dict[str, float]] = None) -> None:
123 """We recover source model's parameter from ema model after validation.
124
125 Args:
126 runner (Runner): The runner of the validation process.
127 metrics (Dict[str, float], optional): Evaluation results of all
128 metrics on validation dataset. The keys are the names of the
129 metrics, and the values are corresponding results.
130 """
131 self._swap_ema_parameters()
132
133 def before_test_epoch(self, runner) -> None:
134 """We load parameter values from ema model to source model before test.
135
136 Args:
137 runner (Runner): The runner of the training process.
138 """
139 self._swap_ema_parameters()
140
141 def after_test_epoch(self,
142 runner,
143 metrics: Optional[Dict[str, float]] = None) -> None:
144 """We recover source model's parameter from ema model after test.
145
146 Args:
147 runner (Runner): The runner of the testing process.
148 metrics (Dict[str, float], optional): Evaluation results of all
149 metrics on test dataset. The keys are the names of the
150 metrics, and the values are corresponding results.
151 """
152 self._swap_ema_parameters()
153
154 def before_save_checkpoint(self, runner, checkpoint: dict) -> None:
155 """Save ema parameters to checkpoint.
156
157 Args:
158 runner (Runner): The runner of the testing process.
159 """
160 checkpoint['ema_state_dict'] = self.ema_model.state_dict()
161 # Save ema parameters to the source model's state dict so that we
162 # can directly load the averaged model weights for deployment.
163 # Swapping the state_dict key-values instead of swapping model
164 # parameters because the state_dict is a shallow copy of model
165 # parameters.
166 self._swap_ema_state_dict(checkpoint)
167
168 def after_load_checkpoint(self, runner, checkpoint: dict) -> None:
169 """Resume ema parameters from checkpoint.
170
171 Args:
172 runner (Runner): The runner of the testing process.
173 """
174 from mmengine.runner.checkpoint import load_state_dict
175 if 'ema_state_dict' in checkpoint and runner._resume:
176             # The original model parameters are actually saved in the ema
177             # field; swap the weights back to resume ema state.
178 self._swap_ema_state_dict(checkpoint)
179 self.ema_model.load_state_dict(
180 checkpoint['ema_state_dict'], strict=self.strict_load)
181
182 # Support load checkpoint without ema state dict.
183 else:
184 if runner._resume:
185 print_log(
186 'There is no `ema_state_dict` in checkpoint. '
187 '`EMAHook` will make a copy of `state_dict` as the '
188 'initial `ema_state_dict`', 'current', logging.WARNING)
189 load_state_dict(
190 self.ema_model.module,
191 copy.deepcopy(checkpoint['state_dict']),
192 strict=self.strict_load)
193
194 def _swap_ema_parameters(self) -> None:
195 """Swap the parameter of model with ema_model."""
196 avg_param = (
197 itertools.chain(self.ema_model.module.parameters(),
198 self.ema_model.module.buffers())
199 if self.ema_model.update_buffers else
200 self.ema_model.module.parameters())
201 src_param = (
202 itertools.chain(self.src_model.parameters(),
203 self.src_model.buffers())
204 if self.ema_model.update_buffers else self.src_model.parameters())
205 for p_avg, p_src in zip(avg_param, src_param):
206 tmp = p_avg.data.clone()
207 p_avg.data.copy_(p_src.data)
208 p_src.data.copy_(tmp)
209
210 def _swap_ema_state_dict(self, checkpoint):
211 """Swap the state dict values of model with ema_model."""
212 model_state = checkpoint['state_dict']
213 ema_state = checkpoint['ema_state_dict']
214 for k in ema_state:
215 if k[:7] == 'module.':
216 tmp = ema_state[k]
217 ema_state[k] = model_state[k[7:]]
218 model_state[k[7:]] = tmp
219
220 def _ema_started(self, runner) -> bool:
221 """Whether ``EMAHook`` has been initialized at current iteration or
222 epoch.
223
224 :attr:`ema_model` will be initialized when ``runner.iter`` or
225 ``runner.epoch`` is greater than ``self.begin`` for the first time.
226
227 Args:
228 runner (Runner): Runner of the training, validation process.
229
230 Returns:
231 bool: Whether ``EMAHook`` has been initialized.
232 """
233 if self.enabled_by_epoch:
234 return runner.epoch + 1 >= self.begin_epoch
235 else:
236 return runner.iter + 1 >= self.begin_iter
237
[end of mmengine/hooks/ema_hook.py]
</code>
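As a side note, here is a minimal sketch of how a hook with the constructor shown above is usually attached to a training run; the `custom_hooks` config key and the chosen values are assumptions for illustration, not something stated in this issue or its fix.

# Illustrative config fragment (assumed): 'EMAHook' is resolved by name from the
# HOOKS registry, and the keyword names mirror the __init__ signature listed above.
custom_hooks = [
    dict(type='EMAHook', ema_type='ExponentialMovingAverage', begin_epoch=1),
]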
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmengine/hooks/ema_hook.py b/mmengine/hooks/ema_hook.py
--- a/mmengine/hooks/ema_hook.py
+++ b/mmengine/hooks/ema_hook.py
@@ -27,11 +27,12 @@
Defaults to 'ExponentialMovingAverage'.
strict_load (bool): Whether to strictly enforce that the keys of
``state_dict`` in checkpoint match the keys returned by
- ``self.module.state_dict``. Defaults to True.
+ ``self.module.state_dict``. Defaults to False.
+ Changed in v0.3.0.
begin_iter (int): The number of iteration to enable ``EMAHook``.
Defaults to 0.
- begin_epoch (int): The number of epoch to enable ``EMAHook``. Defaults
- to 0.
+ begin_epoch (int): The number of epoch to enable ``EMAHook``.
+ Defaults to 0.
**kwargs: Keyword arguments passed to subclasses of
:obj:`BaseAveragedModel`
"""
@@ -40,7 +41,7 @@
def __init__(self,
ema_type: str = 'ExponentialMovingAverage',
- strict_load: bool = True,
+ strict_load: bool = False,
begin_iter: int = 0,
begin_epoch: int = 0,
**kwargs):
| {"golden_diff": "diff --git a/mmengine/hooks/ema_hook.py b/mmengine/hooks/ema_hook.py\n--- a/mmengine/hooks/ema_hook.py\n+++ b/mmengine/hooks/ema_hook.py\n@@ -27,11 +27,12 @@\n Defaults to 'ExponentialMovingAverage'.\n strict_load (bool): Whether to strictly enforce that the keys of\n ``state_dict`` in checkpoint match the keys returned by\n- ``self.module.state_dict``. Defaults to True.\n+ ``self.module.state_dict``. Defaults to False.\n+ Changed in v0.3.0.\n begin_iter (int): The number of iteration to enable ``EMAHook``.\n Defaults to 0.\n- begin_epoch (int): The number of epoch to enable ``EMAHook``. Defaults\n- to 0.\n+ begin_epoch (int): The number of epoch to enable ``EMAHook``.\n+ Defaults to 0.\n **kwargs: Keyword arguments passed to subclasses of\n :obj:`BaseAveragedModel`\n \"\"\"\n@@ -40,7 +41,7 @@\n \n def __init__(self,\n ema_type: str = 'ExponentialMovingAverage',\n- strict_load: bool = True,\n+ strict_load: bool = False,\n begin_iter: int = 0,\n begin_epoch: int = 0,\n **kwargs):\n", "issue": "[bug]resume error\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n\r\n1. I have searched related issues but cannot get the expected help.\r\n2. I have read the [FAQ documentation](https://mmengine.readthedocs.io/en/latest/faq.html) but cannot get the expected help.\r\n3. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**Reproduction**\r\n\r\n1. What command or script did you run?\r\n\r\nwhen I try to resume the model by adding `resume=True, load_from='epoch_380.pth'` in the config, the newest version of mmeingne raise a RuntimeError:\r\n\r\n```\r\nRuntimeError: The model and loaded state dict do not match exactly\r\n\r\nunexpected key in source state_dict: data_preprocessor.mean, data_preprocessor.std\r\n```\r\n\r\nThe error disappears when checkout to v0.1.0\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport copy\nimport itertools\nimport logging\nfrom typing import Dict, Optional\n\nfrom mmengine.logging import print_log\nfrom mmengine.model import is_model_wrapper\nfrom mmengine.registry import HOOKS, MODELS\nfrom .hook import DATA_BATCH, Hook\n\n\[email protected]_module()\nclass EMAHook(Hook):\n \"\"\"A Hook to apply Exponential Moving Average (EMA) on the model during\n training.\n\n Note:\n - EMAHook takes priority over CheckpointHook.\n - The original model parameters are actually saved in ema field after\n train.\n - ``begin_iter`` and ``begin_epoch`` cannot be set at the same time.\n\n Args:\n ema_type (str): The type of EMA strategy to use. You can find the\n supported strategies in :mod:`mmengine.model.averaged_model`.\n Defaults to 'ExponentialMovingAverage'.\n strict_load (bool): Whether to strictly enforce that the keys of\n ``state_dict`` in checkpoint match the keys returned by\n ``self.module.state_dict``. Defaults to True.\n begin_iter (int): The number of iteration to enable ``EMAHook``.\n Defaults to 0.\n begin_epoch (int): The number of epoch to enable ``EMAHook``. 
Defaults\n to 0.\n **kwargs: Keyword arguments passed to subclasses of\n :obj:`BaseAveragedModel`\n \"\"\"\n\n priority = 'NORMAL'\n\n def __init__(self,\n ema_type: str = 'ExponentialMovingAverage',\n strict_load: bool = True,\n begin_iter: int = 0,\n begin_epoch: int = 0,\n **kwargs):\n self.strict_load = strict_load\n self.ema_cfg = dict(type=ema_type, **kwargs)\n assert not (begin_iter != 0 and begin_epoch != 0), (\n '`begin_iter` and `begin_epoch` should not be both set.')\n assert begin_iter >= 0, (\n f'begin_iter must larger than 0, but got begin: {begin_iter}')\n assert begin_epoch >= 0, (\n f'begin_epoch must larger than 0, but got begin: {begin_epoch}')\n self.begin_iter = begin_iter\n self.begin_epoch = begin_epoch\n # If `begin_epoch` and `begin_iter` are not set, `EMAHook` will be\n # enabled at 0 iteration.\n self.enabled_by_epoch = self.begin_epoch > 0\n\n def before_run(self, runner) -> None:\n \"\"\"Create an ema copy of the model.\n\n Args:\n runner (Runner): The runner of the training process.\n \"\"\"\n model = runner.model\n if is_model_wrapper(model):\n model = model.module\n self.src_model = model\n self.ema_model = MODELS.build(\n self.ema_cfg, default_args=dict(model=self.src_model))\n\n def before_train(self, runner) -> None:\n \"\"\"Check the begin_epoch/iter is smaller than max_epochs/iters.\n\n Args:\n runner (Runner): The runner of the training process.\n \"\"\"\n if self.enabled_by_epoch:\n assert self.begin_epoch <= runner.max_epochs, (\n 'self.begin_epoch should be smaller than runner.max_epochs: '\n f'{runner.max_epochs}, but got begin: {self.begin_epoch}')\n else:\n assert self.begin_iter <= runner.max_iters, (\n 'self.begin_iter should be smaller than runner.max_iters: '\n f'{runner.max_iters}, but got begin: {self.begin_iter}')\n\n def after_train_iter(self,\n runner,\n batch_idx: int,\n data_batch: DATA_BATCH = None,\n outputs: Optional[dict] = None) -> None:\n \"\"\"Update ema parameter.\n\n Args:\n runner (Runner): The runner of the training process.\n batch_idx (int): The index of the current batch in the train loop.\n data_batch (Sequence[dict], optional): Data from dataloader.\n Defaults to None.\n outputs (dict, optional): Outputs from model. Defaults to None.\n \"\"\"\n if self._ema_started(runner):\n self.ema_model.update_parameters(self.src_model)\n else:\n ema_params = self.ema_model.module.state_dict()\n src_params = self.src_model.state_dict()\n for k, p in ema_params.items():\n p.data.copy_(src_params[k].data)\n\n def before_val_epoch(self, runner) -> None:\n \"\"\"We load parameter values from ema model to source model before\n validation.\n\n Args:\n runner (Runner): The runner of the training process.\n \"\"\"\n self._swap_ema_parameters()\n\n def after_val_epoch(self,\n runner,\n metrics: Optional[Dict[str, float]] = None) -> None:\n \"\"\"We recover source model's parameter from ema model after validation.\n\n Args:\n runner (Runner): The runner of the validation process.\n metrics (Dict[str, float], optional): Evaluation results of all\n metrics on validation dataset. 
The keys are the names of the\n metrics, and the values are corresponding results.\n \"\"\"\n self._swap_ema_parameters()\n\n def before_test_epoch(self, runner) -> None:\n \"\"\"We load parameter values from ema model to source model before test.\n\n Args:\n runner (Runner): The runner of the training process.\n \"\"\"\n self._swap_ema_parameters()\n\n def after_test_epoch(self,\n runner,\n metrics: Optional[Dict[str, float]] = None) -> None:\n \"\"\"We recover source model's parameter from ema model after test.\n\n Args:\n runner (Runner): The runner of the testing process.\n metrics (Dict[str, float], optional): Evaluation results of all\n metrics on test dataset. The keys are the names of the\n metrics, and the values are corresponding results.\n \"\"\"\n self._swap_ema_parameters()\n\n def before_save_checkpoint(self, runner, checkpoint: dict) -> None:\n \"\"\"Save ema parameters to checkpoint.\n\n Args:\n runner (Runner): The runner of the testing process.\n \"\"\"\n checkpoint['ema_state_dict'] = self.ema_model.state_dict()\n # Save ema parameters to the source model's state dict so that we\n # can directly load the averaged model weights for deployment.\n # Swapping the state_dict key-values instead of swapping model\n # parameters because the state_dict is a shallow copy of model\n # parameters.\n self._swap_ema_state_dict(checkpoint)\n\n def after_load_checkpoint(self, runner, checkpoint: dict) -> None:\n \"\"\"Resume ema parameters from checkpoint.\n\n Args:\n runner (Runner): The runner of the testing process.\n \"\"\"\n from mmengine.runner.checkpoint import load_state_dict\n if 'ema_state_dict' in checkpoint and runner._resume:\n # The original model parameters are actually saved in ema\n # field swap the weights back to resume ema state.\n self._swap_ema_state_dict(checkpoint)\n self.ema_model.load_state_dict(\n checkpoint['ema_state_dict'], strict=self.strict_load)\n\n # Support load checkpoint without ema state dict.\n else:\n if runner._resume:\n print_log(\n 'There is no `ema_state_dict` in checkpoint. 
'\n '`EMAHook` will make a copy of `state_dict` as the '\n 'initial `ema_state_dict`', 'current', logging.WARNING)\n load_state_dict(\n self.ema_model.module,\n copy.deepcopy(checkpoint['state_dict']),\n strict=self.strict_load)\n\n def _swap_ema_parameters(self) -> None:\n \"\"\"Swap the parameter of model with ema_model.\"\"\"\n avg_param = (\n itertools.chain(self.ema_model.module.parameters(),\n self.ema_model.module.buffers())\n if self.ema_model.update_buffers else\n self.ema_model.module.parameters())\n src_param = (\n itertools.chain(self.src_model.parameters(),\n self.src_model.buffers())\n if self.ema_model.update_buffers else self.src_model.parameters())\n for p_avg, p_src in zip(avg_param, src_param):\n tmp = p_avg.data.clone()\n p_avg.data.copy_(p_src.data)\n p_src.data.copy_(tmp)\n\n def _swap_ema_state_dict(self, checkpoint):\n \"\"\"Swap the state dict values of model with ema_model.\"\"\"\n model_state = checkpoint['state_dict']\n ema_state = checkpoint['ema_state_dict']\n for k in ema_state:\n if k[:7] == 'module.':\n tmp = ema_state[k]\n ema_state[k] = model_state[k[7:]]\n model_state[k[7:]] = tmp\n\n def _ema_started(self, runner) -> bool:\n \"\"\"Whether ``EMAHook`` has been initialized at current iteration or\n epoch.\n\n :attr:`ema_model` will be initialized when ``runner.iter`` or\n ``runner.epoch`` is greater than ``self.begin`` for the first time.\n\n Args:\n runner (Runner): Runner of the training, validation process.\n\n Returns:\n bool: Whether ``EMAHook`` has been initialized.\n \"\"\"\n if self.enabled_by_epoch:\n return runner.epoch + 1 >= self.begin_epoch\n else:\n return runner.iter + 1 >= self.begin_iter\n", "path": "mmengine/hooks/ema_hook.py"}]} | 3,371 | 301 |
gh_patches_debug_36258 | rasdani/github-patches | git_diff | sublimelsp__LSP-2094 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`lsp_document_symbols` should initially focus on current symbol under caret
Sublime's default implementation of Goto Symbol pops up with the current symbol having focus in the dropdown list; this way I can use up/down arrows to go to prev/next symbols (since Sublime doesn't have goto next/prev symbol features built-in, this is imo a superior way to mimic that feature).
LSP is capable of finding all the symbols for a given language, but the problem is that the `lsp_document_symbols` command focuses the first symbol in the file, so I can't easily use the arrow keys to browse the next/previous symbols.
So it would be better if LSP's replacement Goto Symbol popup focused the symbol of the current scope / the symbol under the caret.
</issue>
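To make the request concrete, here is a rough sketch (an illustration, not the plugin's actual implementation) of how a pre-selected quick-panel index could be derived from the caret; `symbol_regions` is an assumed, already-sorted list of symbol regions.

# Hedged sketch only: `symbol_regions` is an assumed list of sublime.Region objects,
# one per panel entry, sorted by start point.
def preselected_index(view, symbol_regions):
    sel = view.sel()
    caret = sel[0].b if len(sel) else 0
    index = 0
    for i, region in enumerate(symbol_regions):
        if region.begin() <= caret:
            index = i  # last symbol starting at or before the caret
        else:
            break
    return index  # would be passed as selected_index to window.show_quick_panel()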
<code>
[start of plugin/symbols.py]
1 import weakref
2 from .core.protocol import Request, DocumentSymbol, SymbolInformation, SymbolKind, SymbolTag
3 from .core.registry import LspTextCommand
4 from .core.sessions import print_to_status_bar
5 from .core.typing import Any, List, Optional, Tuple, Dict, Generator, Union, cast
6 from .core.views import range_to_region
7 from .core.views import SublimeKind
8 from .core.views import SYMBOL_KIND_SCOPES
9 from .core.views import SYMBOL_KINDS
10 from .core.views import text_document_identifier
11 from contextlib import contextmanager
12 import os
13 import sublime
14 import sublime_plugin
15
16
17 SUPPRESS_INPUT_SETTING_KEY = 'lsp_suppress_input'
18
19
20 def unpack_lsp_kind(kind: SymbolKind) -> SublimeKind:
21 return SYMBOL_KINDS.get(kind, sublime.KIND_AMBIGUOUS)
22
23
24 def format_symbol_kind(kind: SymbolKind) -> str:
25 return SYMBOL_KINDS.get(kind, (None, None, str(kind)))[2]
26
27
28 def get_symbol_scope_from_lsp_kind(kind: SymbolKind) -> str:
29 return SYMBOL_KIND_SCOPES.get(kind, "comment")
30
31
32 def symbol_information_to_quick_panel_item(
33 item: SymbolInformation,
34 show_file_name: bool = True
35 ) -> sublime.QuickPanelItem:
36 st_kind, st_icon, st_display_type = unpack_lsp_kind(item['kind'])
37 tags = item.get("tags") or []
38 if SymbolTag.Deprecated in tags:
39 st_display_type = "⚠ {} - Deprecated".format(st_display_type)
40 container = item.get("containerName") or ""
41 details = [] # List[str]
42 if container:
43 details.append(container)
44 if show_file_name:
45 file_name = os.path.basename(item['location']['uri'])
46 details.append(file_name)
47 return sublime.QuickPanelItem(
48 trigger=item["name"],
49 details=details,
50 annotation=st_display_type,
51 kind=(st_kind, st_icon, st_display_type))
52
53
54 @contextmanager
55 def _additional_name(names: List[str], name: str) -> Generator[None, None, None]:
56 names.append(name)
57 yield
58 names.pop(-1)
59
60
61 class LspSelectionClearCommand(sublime_plugin.TextCommand):
62 """
63 Selections may not be modified outside the run method of a text command. Thus, to allow modification in an async
64 context we need to have dedicated commands for this.
65
66 https://github.com/sublimehq/sublime_text/issues/485#issuecomment-337480388
67 """
68
69 def run(self, _: sublime.Edit) -> None:
70 self.view.sel().clear()
71
72
73 class LspSelectionAddCommand(sublime_plugin.TextCommand):
74
75 def run(self, _: sublime.Edit, regions: List[Tuple[int, int]]) -> None:
76 for region in regions:
77 self.view.sel().add(sublime.Region(*region))
78
79
80 class LspSelectionSetCommand(sublime_plugin.TextCommand):
81
82 def run(self, _: sublime.Edit, regions: List[Tuple[int, int]]) -> None:
83 self.view.sel().clear()
84 for region in regions:
85 self.view.sel().add(sublime.Region(*region))
86
87
88 class LspDocumentSymbolsCommand(LspTextCommand):
89
90 capability = 'documentSymbolProvider'
91 REGIONS_KEY = 'lsp_document_symbols'
92
93 def __init__(self, view: sublime.View) -> None:
94 super().__init__(view)
95 self.old_regions = [] # type: List[sublime.Region]
96 self.regions = [] # type: List[Tuple[sublime.Region, Optional[sublime.Region], str]]
97 self.is_first_selection = False
98
99 def run(self, edit: sublime.Edit, event: Optional[Dict[str, Any]] = None) -> None:
100 self.view.settings().set(SUPPRESS_INPUT_SETTING_KEY, True)
101 session = self.best_session(self.capability)
102 if session:
103 session.send_request(
104 Request.documentSymbols({"textDocument": text_document_identifier(self.view)}, self.view),
105 lambda response: sublime.set_timeout(lambda: self.handle_response(response)),
106 lambda error: sublime.set_timeout(lambda: self.handle_response_error(error)))
107
108 def handle_response(self, response: Union[List[DocumentSymbol], List[SymbolInformation], None]) -> None:
109 self.view.settings().erase(SUPPRESS_INPUT_SETTING_KEY)
110 window = self.view.window()
111 if window and isinstance(response, list) and len(response) > 0:
112 self.old_regions = [sublime.Region(r.a, r.b) for r in self.view.sel()]
113 self.is_first_selection = True
114 window.show_quick_panel(
115 self.process_symbols(response),
116 self.on_symbol_selected,
117 sublime.KEEP_OPEN_ON_FOCUS_LOST,
118 0,
119 self.on_highlighted)
120 self.view.run_command("lsp_selection_clear")
121
122 def handle_response_error(self, error: Any) -> None:
123 self.view.settings().erase(SUPPRESS_INPUT_SETTING_KEY)
124 print_to_status_bar(error)
125
126 def region(self, index: int) -> sublime.Region:
127 return self.regions[index][0]
128
129 def selection_region(self, index: int) -> Optional[sublime.Region]:
130 return self.regions[index][1]
131
132 def scope(self, index: int) -> str:
133 return self.regions[index][2]
134
135 def on_symbol_selected(self, index: int) -> None:
136 if index == -1:
137 if len(self.old_regions) > 0:
138 self.view.run_command("lsp_selection_add", {"regions": [(r.a, r.b) for r in self.old_regions]})
139 self.view.show_at_center(self.old_regions[0].begin())
140 else:
141 region = self.selection_region(index) or self.region(index)
142 self.view.run_command("lsp_selection_add", {"regions": [(region.a, region.a)]})
143 self.view.show_at_center(region.a)
144 self.view.erase_regions(self.REGIONS_KEY)
145 self.old_regions.clear()
146 self.regions.clear()
147
148 def on_highlighted(self, index: int) -> None:
149 if self.is_first_selection:
150 self.is_first_selection = False
151 return
152 region = self.region(index)
153 self.view.show_at_center(region.a)
154 self.view.add_regions(self.REGIONS_KEY, [region], self.scope(index), '', sublime.DRAW_NO_FILL)
155
156 def process_symbols(
157 self,
158 items: Union[List[DocumentSymbol], List[SymbolInformation]]
159 ) -> List[sublime.QuickPanelItem]:
160 self.regions.clear()
161 panel_items = []
162 if 'selectionRange' in items[0]:
163 items = cast(List[DocumentSymbol], items)
164 panel_items = self.process_document_symbols(items)
165 else:
166 items = cast(List[SymbolInformation], items)
167 panel_items = self.process_symbol_informations(items)
168 # Sort both lists in sync according to the range's begin point.
169 sorted_results = zip(*sorted(zip(self.regions, panel_items), key=lambda item: item[0][0].begin()))
170 sorted_regions, sorted_panel_items = sorted_results
171 self.regions = list(sorted_regions) # type: ignore
172 return list(sorted_panel_items) # type: ignore
173
174 def process_document_symbols(self, items: List[DocumentSymbol]) -> List[sublime.QuickPanelItem]:
175 quick_panel_items = [] # type: List[sublime.QuickPanelItem]
176 names = [] # type: List[str]
177 for item in items:
178 self.process_document_symbol_recursive(quick_panel_items, item, names)
179 return quick_panel_items
180
181 def process_document_symbol_recursive(self, quick_panel_items: List[sublime.QuickPanelItem], item: DocumentSymbol,
182 names: List[str]) -> None:
183 lsp_kind = item["kind"]
184 self.regions.append((range_to_region(item['range'], self.view),
185 range_to_region(item['selectionRange'], self.view),
186 get_symbol_scope_from_lsp_kind(lsp_kind)))
187 name = item['name']
188 with _additional_name(names, name):
189 st_kind, st_icon, st_display_type = unpack_lsp_kind(lsp_kind)
190 formatted_names = " > ".join(names)
191 st_details = item.get("detail") or ""
192 if st_details:
193 st_details = "{} | {}".format(st_details, formatted_names)
194 else:
195 st_details = formatted_names
196 tags = item.get("tags") or []
197 if SymbolTag.Deprecated in tags:
198 st_display_type = "⚠ {} - Deprecated".format(st_display_type)
199 quick_panel_items.append(
200 sublime.QuickPanelItem(
201 trigger=name,
202 details=st_details,
203 annotation=st_display_type,
204 kind=(st_kind, st_icon, st_display_type)))
205 children = item.get('children') or [] # type: List[DocumentSymbol]
206 for child in children:
207 self.process_document_symbol_recursive(quick_panel_items, child, names)
208
209 def process_symbol_informations(self, items: List[SymbolInformation]) -> List[sublime.QuickPanelItem]:
210 quick_panel_items = [] # type: List[sublime.QuickPanelItem]
211 for item in items:
212 self.regions.append((range_to_region(item['location']['range'], self.view),
213 None, get_symbol_scope_from_lsp_kind(item['kind'])))
214 quick_panel_item = symbol_information_to_quick_panel_item(item, show_file_name=False)
215 quick_panel_items.append(quick_panel_item)
216 return quick_panel_items
217
218
219 class SymbolQueryInput(sublime_plugin.TextInputHandler):
220 def want_event(self) -> bool:
221 return False
222
223 def placeholder(self) -> str:
224 return "Enter symbol name"
225
226
227 class LspWorkspaceSymbolsCommand(LspTextCommand):
228
229 capability = 'workspaceSymbolProvider'
230
231 def input(self, _args: Any) -> sublime_plugin.TextInputHandler:
232 return SymbolQueryInput()
233
234 def run(self, edit: sublime.Edit, symbol_query_input: str, event: Optional[Any] = None) -> None:
235 session = self.best_session(self.capability)
236 if session:
237 self.weaksession = weakref.ref(session)
238 session.send_request(
239 Request.workspaceSymbol({"query": symbol_query_input}),
240 lambda r: self._handle_response(symbol_query_input, r),
241 self._handle_error)
242
243 def _open_file(self, symbols: List[SymbolInformation], index: int) -> None:
244 if index != -1:
245 session = self.weaksession()
246 if session:
247 session.open_location_async(symbols[index]['location'], sublime.ENCODED_POSITION)
248
249 def _handle_response(self, query: str, response: Union[List[SymbolInformation], None]) -> None:
250 if response:
251 matches = response
252 window = self.view.window()
253 if window:
254 window.show_quick_panel(
255 list(map(symbol_information_to_quick_panel_item, matches)),
256 lambda i: self._open_file(matches, i))
257 else:
258 sublime.message_dialog("No matches found for query: '{}'".format(query))
259
260 def _handle_error(self, error: Dict[str, Any]) -> None:
261 reason = error.get("message", "none provided by server :(")
262 msg = "command 'workspace/symbol' failed. Reason: {}".format(reason)
263 sublime.error_message(msg)
264
[end of plugin/symbols.py]
</code>
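One usage note on the selection helper commands defined above (illustrative, not part of the issue): because selections cannot be modified outside a TextCommand's run method, async code drives them through run_command, for example:

# Hypothetical call site -- the point value is made up for illustration.
def jump_to(view, point):
    view.run_command("lsp_selection_set", {"regions": [(point, point)]})
    view.show_at_center(point)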
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/symbols.py b/plugin/symbols.py
--- a/plugin/symbols.py
+++ b/plugin/symbols.py
@@ -94,7 +94,6 @@
super().__init__(view)
self.old_regions = [] # type: List[sublime.Region]
self.regions = [] # type: List[Tuple[sublime.Region, Optional[sublime.Region], str]]
- self.is_first_selection = False
def run(self, edit: sublime.Edit, event: Optional[Dict[str, Any]] = None) -> None:
self.view.settings().set(SUPPRESS_INPUT_SETTING_KEY, True)
@@ -109,13 +108,22 @@
self.view.settings().erase(SUPPRESS_INPUT_SETTING_KEY)
window = self.view.window()
if window and isinstance(response, list) and len(response) > 0:
+ panel_items = self.process_symbols(response)
self.old_regions = [sublime.Region(r.a, r.b) for r in self.view.sel()]
- self.is_first_selection = True
+ # Find region that is either intersecting or before to the current selection end.
+ selected_index = 0
+ if len(self.old_regions):
+ first_selection = self.old_regions[0]
+ for i, (r, _, _) in enumerate(self.regions):
+ if r.begin() <= first_selection.b:
+ selected_index = i
+ else:
+ break
window.show_quick_panel(
- self.process_symbols(response),
+ panel_items,
self.on_symbol_selected,
sublime.KEEP_OPEN_ON_FOCUS_LOST,
- 0,
+ selected_index,
self.on_highlighted)
self.view.run_command("lsp_selection_clear")
@@ -146,9 +154,6 @@
self.regions.clear()
def on_highlighted(self, index: int) -> None:
- if self.is_first_selection:
- self.is_first_selection = False
- return
region = self.region(index)
self.view.show_at_center(region.a)
self.view.add_regions(self.REGIONS_KEY, [region], self.scope(index), '', sublime.DRAW_NO_FILL)
| {"golden_diff": "diff --git a/plugin/symbols.py b/plugin/symbols.py\n--- a/plugin/symbols.py\n+++ b/plugin/symbols.py\n@@ -94,7 +94,6 @@\n super().__init__(view)\n self.old_regions = [] # type: List[sublime.Region]\n self.regions = [] # type: List[Tuple[sublime.Region, Optional[sublime.Region], str]]\n- self.is_first_selection = False\n \n def run(self, edit: sublime.Edit, event: Optional[Dict[str, Any]] = None) -> None:\n self.view.settings().set(SUPPRESS_INPUT_SETTING_KEY, True)\n@@ -109,13 +108,22 @@\n self.view.settings().erase(SUPPRESS_INPUT_SETTING_KEY)\n window = self.view.window()\n if window and isinstance(response, list) and len(response) > 0:\n+ panel_items = self.process_symbols(response)\n self.old_regions = [sublime.Region(r.a, r.b) for r in self.view.sel()]\n- self.is_first_selection = True\n+ # Find region that is either intersecting or before to the current selection end.\n+ selected_index = 0\n+ if len(self.old_regions):\n+ first_selection = self.old_regions[0]\n+ for i, (r, _, _) in enumerate(self.regions):\n+ if r.begin() <= first_selection.b:\n+ selected_index = i\n+ else:\n+ break\n window.show_quick_panel(\n- self.process_symbols(response),\n+ panel_items,\n self.on_symbol_selected,\n sublime.KEEP_OPEN_ON_FOCUS_LOST,\n- 0,\n+ selected_index,\n self.on_highlighted)\n self.view.run_command(\"lsp_selection_clear\")\n \n@@ -146,9 +154,6 @@\n self.regions.clear()\n \n def on_highlighted(self, index: int) -> None:\n- if self.is_first_selection:\n- self.is_first_selection = False\n- return\n region = self.region(index)\n self.view.show_at_center(region.a)\n self.view.add_regions(self.REGIONS_KEY, [region], self.scope(index), '', sublime.DRAW_NO_FILL)\n", "issue": "`lsp_document_symbols` should initially focus on current symbol under caret\nSublime's default implementation of Goto Symbol pops up with current symbol having focus in the dropdown list, this way I can use up/down arrows to go to prev/next symbols (since Sublime doesn't have goto next/prev symbol features built-in, this is imo a superior way to mimic that feature).\r\n\r\nLSP is capable of finding all the symbols for a given language but the problem is, `lsp_document_symbols` commands focuses the first symbol in the file, so I can't easily use the arrow keys to browse the next/previous symbols.\r\n\r\nSo it would be better if LSP's replacement Goto Symbol popup focused the symbol of the current scope / the symbol under the caret.\n", "before_files": [{"content": "import weakref\nfrom .core.protocol import Request, DocumentSymbol, SymbolInformation, SymbolKind, SymbolTag\nfrom .core.registry import LspTextCommand\nfrom .core.sessions import print_to_status_bar\nfrom .core.typing import Any, List, Optional, Tuple, Dict, Generator, Union, cast\nfrom .core.views import range_to_region\nfrom .core.views import SublimeKind\nfrom .core.views import SYMBOL_KIND_SCOPES\nfrom .core.views import SYMBOL_KINDS\nfrom .core.views import text_document_identifier\nfrom contextlib import contextmanager\nimport os\nimport sublime\nimport sublime_plugin\n\n\nSUPPRESS_INPUT_SETTING_KEY = 'lsp_suppress_input'\n\n\ndef unpack_lsp_kind(kind: SymbolKind) -> SublimeKind:\n return SYMBOL_KINDS.get(kind, sublime.KIND_AMBIGUOUS)\n\n\ndef format_symbol_kind(kind: SymbolKind) -> str:\n return SYMBOL_KINDS.get(kind, (None, None, str(kind)))[2]\n\n\ndef get_symbol_scope_from_lsp_kind(kind: SymbolKind) -> str:\n return SYMBOL_KIND_SCOPES.get(kind, \"comment\")\n\n\ndef symbol_information_to_quick_panel_item(\n item: SymbolInformation,\n 
show_file_name: bool = True\n) -> sublime.QuickPanelItem:\n st_kind, st_icon, st_display_type = unpack_lsp_kind(item['kind'])\n tags = item.get(\"tags\") or []\n if SymbolTag.Deprecated in tags:\n st_display_type = \"\u26a0 {} - Deprecated\".format(st_display_type)\n container = item.get(\"containerName\") or \"\"\n details = [] # List[str]\n if container:\n details.append(container)\n if show_file_name:\n file_name = os.path.basename(item['location']['uri'])\n details.append(file_name)\n return sublime.QuickPanelItem(\n trigger=item[\"name\"],\n details=details,\n annotation=st_display_type,\n kind=(st_kind, st_icon, st_display_type))\n\n\n@contextmanager\ndef _additional_name(names: List[str], name: str) -> Generator[None, None, None]:\n names.append(name)\n yield\n names.pop(-1)\n\n\nclass LspSelectionClearCommand(sublime_plugin.TextCommand):\n \"\"\"\n Selections may not be modified outside the run method of a text command. Thus, to allow modification in an async\n context we need to have dedicated commands for this.\n\n https://github.com/sublimehq/sublime_text/issues/485#issuecomment-337480388\n \"\"\"\n\n def run(self, _: sublime.Edit) -> None:\n self.view.sel().clear()\n\n\nclass LspSelectionAddCommand(sublime_plugin.TextCommand):\n\n def run(self, _: sublime.Edit, regions: List[Tuple[int, int]]) -> None:\n for region in regions:\n self.view.sel().add(sublime.Region(*region))\n\n\nclass LspSelectionSetCommand(sublime_plugin.TextCommand):\n\n def run(self, _: sublime.Edit, regions: List[Tuple[int, int]]) -> None:\n self.view.sel().clear()\n for region in regions:\n self.view.sel().add(sublime.Region(*region))\n\n\nclass LspDocumentSymbolsCommand(LspTextCommand):\n\n capability = 'documentSymbolProvider'\n REGIONS_KEY = 'lsp_document_symbols'\n\n def __init__(self, view: sublime.View) -> None:\n super().__init__(view)\n self.old_regions = [] # type: List[sublime.Region]\n self.regions = [] # type: List[Tuple[sublime.Region, Optional[sublime.Region], str]]\n self.is_first_selection = False\n\n def run(self, edit: sublime.Edit, event: Optional[Dict[str, Any]] = None) -> None:\n self.view.settings().set(SUPPRESS_INPUT_SETTING_KEY, True)\n session = self.best_session(self.capability)\n if session:\n session.send_request(\n Request.documentSymbols({\"textDocument\": text_document_identifier(self.view)}, self.view),\n lambda response: sublime.set_timeout(lambda: self.handle_response(response)),\n lambda error: sublime.set_timeout(lambda: self.handle_response_error(error)))\n\n def handle_response(self, response: Union[List[DocumentSymbol], List[SymbolInformation], None]) -> None:\n self.view.settings().erase(SUPPRESS_INPUT_SETTING_KEY)\n window = self.view.window()\n if window and isinstance(response, list) and len(response) > 0:\n self.old_regions = [sublime.Region(r.a, r.b) for r in self.view.sel()]\n self.is_first_selection = True\n window.show_quick_panel(\n self.process_symbols(response),\n self.on_symbol_selected,\n sublime.KEEP_OPEN_ON_FOCUS_LOST,\n 0,\n self.on_highlighted)\n self.view.run_command(\"lsp_selection_clear\")\n\n def handle_response_error(self, error: Any) -> None:\n self.view.settings().erase(SUPPRESS_INPUT_SETTING_KEY)\n print_to_status_bar(error)\n\n def region(self, index: int) -> sublime.Region:\n return self.regions[index][0]\n\n def selection_region(self, index: int) -> Optional[sublime.Region]:\n return self.regions[index][1]\n\n def scope(self, index: int) -> str:\n return self.regions[index][2]\n\n def on_symbol_selected(self, index: int) -> None:\n if index 
== -1:\n if len(self.old_regions) > 0:\n self.view.run_command(\"lsp_selection_add\", {\"regions\": [(r.a, r.b) for r in self.old_regions]})\n self.view.show_at_center(self.old_regions[0].begin())\n else:\n region = self.selection_region(index) or self.region(index)\n self.view.run_command(\"lsp_selection_add\", {\"regions\": [(region.a, region.a)]})\n self.view.show_at_center(region.a)\n self.view.erase_regions(self.REGIONS_KEY)\n self.old_regions.clear()\n self.regions.clear()\n\n def on_highlighted(self, index: int) -> None:\n if self.is_first_selection:\n self.is_first_selection = False\n return\n region = self.region(index)\n self.view.show_at_center(region.a)\n self.view.add_regions(self.REGIONS_KEY, [region], self.scope(index), '', sublime.DRAW_NO_FILL)\n\n def process_symbols(\n self,\n items: Union[List[DocumentSymbol], List[SymbolInformation]]\n ) -> List[sublime.QuickPanelItem]:\n self.regions.clear()\n panel_items = []\n if 'selectionRange' in items[0]:\n items = cast(List[DocumentSymbol], items)\n panel_items = self.process_document_symbols(items)\n else:\n items = cast(List[SymbolInformation], items)\n panel_items = self.process_symbol_informations(items)\n # Sort both lists in sync according to the range's begin point.\n sorted_results = zip(*sorted(zip(self.regions, panel_items), key=lambda item: item[0][0].begin()))\n sorted_regions, sorted_panel_items = sorted_results\n self.regions = list(sorted_regions) # type: ignore\n return list(sorted_panel_items) # type: ignore\n\n def process_document_symbols(self, items: List[DocumentSymbol]) -> List[sublime.QuickPanelItem]:\n quick_panel_items = [] # type: List[sublime.QuickPanelItem]\n names = [] # type: List[str]\n for item in items:\n self.process_document_symbol_recursive(quick_panel_items, item, names)\n return quick_panel_items\n\n def process_document_symbol_recursive(self, quick_panel_items: List[sublime.QuickPanelItem], item: DocumentSymbol,\n names: List[str]) -> None:\n lsp_kind = item[\"kind\"]\n self.regions.append((range_to_region(item['range'], self.view),\n range_to_region(item['selectionRange'], self.view),\n get_symbol_scope_from_lsp_kind(lsp_kind)))\n name = item['name']\n with _additional_name(names, name):\n st_kind, st_icon, st_display_type = unpack_lsp_kind(lsp_kind)\n formatted_names = \" > \".join(names)\n st_details = item.get(\"detail\") or \"\"\n if st_details:\n st_details = \"{} | {}\".format(st_details, formatted_names)\n else:\n st_details = formatted_names\n tags = item.get(\"tags\") or []\n if SymbolTag.Deprecated in tags:\n st_display_type = \"\u26a0 {} - Deprecated\".format(st_display_type)\n quick_panel_items.append(\n sublime.QuickPanelItem(\n trigger=name,\n details=st_details,\n annotation=st_display_type,\n kind=(st_kind, st_icon, st_display_type)))\n children = item.get('children') or [] # type: List[DocumentSymbol]\n for child in children:\n self.process_document_symbol_recursive(quick_panel_items, child, names)\n\n def process_symbol_informations(self, items: List[SymbolInformation]) -> List[sublime.QuickPanelItem]:\n quick_panel_items = [] # type: List[sublime.QuickPanelItem]\n for item in items:\n self.regions.append((range_to_region(item['location']['range'], self.view),\n None, get_symbol_scope_from_lsp_kind(item['kind'])))\n quick_panel_item = symbol_information_to_quick_panel_item(item, show_file_name=False)\n quick_panel_items.append(quick_panel_item)\n return quick_panel_items\n\n\nclass SymbolQueryInput(sublime_plugin.TextInputHandler):\n def want_event(self) -> bool:\n return 
False\n\n def placeholder(self) -> str:\n return \"Enter symbol name\"\n\n\nclass LspWorkspaceSymbolsCommand(LspTextCommand):\n\n capability = 'workspaceSymbolProvider'\n\n def input(self, _args: Any) -> sublime_plugin.TextInputHandler:\n return SymbolQueryInput()\n\n def run(self, edit: sublime.Edit, symbol_query_input: str, event: Optional[Any] = None) -> None:\n session = self.best_session(self.capability)\n if session:\n self.weaksession = weakref.ref(session)\n session.send_request(\n Request.workspaceSymbol({\"query\": symbol_query_input}),\n lambda r: self._handle_response(symbol_query_input, r),\n self._handle_error)\n\n def _open_file(self, symbols: List[SymbolInformation], index: int) -> None:\n if index != -1:\n session = self.weaksession()\n if session:\n session.open_location_async(symbols[index]['location'], sublime.ENCODED_POSITION)\n\n def _handle_response(self, query: str, response: Union[List[SymbolInformation], None]) -> None:\n if response:\n matches = response\n window = self.view.window()\n if window:\n window.show_quick_panel(\n list(map(symbol_information_to_quick_panel_item, matches)),\n lambda i: self._open_file(matches, i))\n else:\n sublime.message_dialog(\"No matches found for query: '{}'\".format(query))\n\n def _handle_error(self, error: Dict[str, Any]) -> None:\n reason = error.get(\"message\", \"none provided by server :(\")\n msg = \"command 'workspace/symbol' failed. Reason: {}\".format(reason)\n sublime.error_message(msg)\n", "path": "plugin/symbols.py"}]} | 3,783 | 474 |
gh_patches_debug_28773 | rasdani/github-patches | git_diff | buildbot__buildbot-419 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
enable evaluation of coverage for buildbot-worker
</issue>
<code>
[start of master/buildbot/schedulers/basic.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from twisted.internet import defer, reactor
17 from twisted.python import log
18 from buildbot import util, config
19 from buildbot.util import NotABranch
20 from collections import defaultdict
21 from buildbot.changes import filter, changes
22 from buildbot.schedulers import base, dependent
23
24 class BaseBasicScheduler(base.BaseScheduler):
25 """
26 @param onlyImportant: If True, only important changes will be added to the
27 buildset.
28 @type onlyImportant: boolean
29
30 """
31
32 compare_attrs = (base.BaseScheduler.compare_attrs +
33 ('treeStableTimer', 'change_filter', 'fileIsImportant',
34 'onlyImportant') )
35
36 _reactor = reactor # for tests
37
38 class NotSet: pass
39 def __init__(self, name, shouldntBeSet=NotSet, treeStableTimer=None,
40 builderNames=None, branch=NotABranch, branches=NotABranch,
41 fileIsImportant=None, properties={}, categories=None,
42 change_filter=None, onlyImportant=False, **kwargs):
43 if shouldntBeSet is not self.NotSet:
44 config.error(
45 "pass arguments to schedulers using keyword arguments")
46 if fileIsImportant and not callable(fileIsImportant):
47 config.error(
48 "fileIsImportant must be a callable")
49
50 # initialize parent classes
51 base.BaseScheduler.__init__(self, name, builderNames, properties, **kwargs)
52
53 self.treeStableTimer = treeStableTimer
54 self.fileIsImportant = fileIsImportant
55 self.onlyImportant = onlyImportant
56 self.change_filter = self.getChangeFilter(branch=branch,
57 branches=branches, change_filter=change_filter,
58 categories=categories)
59
60 # the IDelayedCall used to wake up when this scheduler's
61 # treeStableTimer expires.
62 self._stable_timers = defaultdict(lambda : None)
63 self._stable_timers_lock = defer.DeferredLock()
64
65 def getChangeFilter(self, branch, branches, change_filter, categories):
66 raise NotImplementedError
67
68 def startService(self, _returnDeferred=False):
69 base.BaseScheduler.startService(self)
70
71 d = self.startConsumingChanges(fileIsImportant=self.fileIsImportant,
72 change_filter=self.change_filter,
73 onlyImportant=self.onlyImportant)
74
75 # if treeStableTimer is False, then we don't care about classified
76 # changes, so get rid of any hanging around from previous
77 # configurations
78 if not self.treeStableTimer:
79 d.addCallback(lambda _ :
80 self.master.db.schedulers.flushChangeClassifications(
81 self.objectid))
82
83 # otherwise, if there are classified changes out there, start their
84 # treeStableTimers again
85 else:
86 d.addCallback(lambda _ :
87 self.scanExistingClassifiedChanges())
88
89 # handle Deferred errors, since startService does not return a Deferred
90 d.addErrback(log.err, "while starting SingleBranchScheduler '%s'"
91 % self.name)
92
93 if _returnDeferred:
94 return d # only used in tests
95
96 def stopService(self):
97 # the base stopService will unsubscribe from new changes
98 d = base.BaseScheduler.stopService(self)
99 d.addCallback(lambda _ :
100 self._stable_timers_lock.acquire())
101 def cancel_timers(_):
102 for timer in self._stable_timers.values():
103 if timer:
104 timer.cancel()
105 self._stable_timers = {}
106 self._stable_timers_lock.release()
107 d.addCallback(cancel_timers)
108 return d
109
110 @util.deferredLocked('_stable_timers_lock')
111 def gotChange(self, change, important):
112 if not self.treeStableTimer:
113 # if there's no treeStableTimer, we can completely ignore
114 # unimportant changes
115 if not important:
116 return defer.succeed(None)
117 # otherwise, we'll build it right away
118 return self.addBuildsetForChanges(reason='scheduler',
119 changeids=[ change.number ])
120
121 timer_name = self.getTimerNameForChange(change)
122
123 # if we have a treeStableTimer, then record the change's importance
124 # and:
125 # - for an important change, start the timer
126 # - for an unimportant change, reset the timer if it is running
127 d = self.master.db.schedulers.classifyChanges(
128 self.objectid, { change.number : important })
129 def fix_timer(_):
130 if not important and not self._stable_timers[timer_name]:
131 return
132 if self._stable_timers[timer_name]:
133 self._stable_timers[timer_name].cancel()
134 def fire_timer():
135 d = self.stableTimerFired(timer_name)
136 d.addErrback(log.err, "while firing stable timer")
137 self._stable_timers[timer_name] = self._reactor.callLater(
138 self.treeStableTimer, fire_timer)
139 d.addCallback(fix_timer)
140 return d
141
142 @defer.inlineCallbacks
143 def scanExistingClassifiedChanges(self):
144 # call gotChange for each classified change. This is called at startup
145 # and is intended to re-start the treeStableTimer for any changes that
146 # had not yet been built when the scheduler was stopped.
147
148 # NOTE: this may double-call gotChange for changes that arrive just as
149 # the scheduler starts up. In practice, this doesn't hurt anything.
150 classifications = \
151 yield self.master.db.schedulers.getChangeClassifications(
152 self.objectid)
153
154 # call gotChange for each change, after first fetching it from the db
155 for changeid, important in classifications.iteritems():
156 chdict = yield self.master.db.changes.getChange(changeid)
157
158 if not chdict:
159 continue
160
161 change = yield changes.Change.fromChdict(self.master, chdict)
162 yield self.gotChange(change, important)
163
164 def getTimerNameForChange(self, change):
165 raise NotImplementedError # see subclasses
166
167 def getChangeClassificationsForTimer(self, objectid, timer_name):
168 """similar to db.schedulers.getChangeClassifications, but given timer
169 name"""
170 raise NotImplementedError # see subclasses
171
172 @util.deferredLocked('_stable_timers_lock')
173 @defer.inlineCallbacks
174 def stableTimerFired(self, timer_name):
175         # if the service has already been stopped then just bail out
176 if not self._stable_timers[timer_name]:
177 return
178
179 # delete this now-fired timer
180 del self._stable_timers[timer_name]
181
182 classifications = \
183 yield self.getChangeClassificationsForTimer(self.objectid,
184 timer_name)
185
186 # just in case: databases do weird things sometimes!
187 if not classifications: # pragma: no cover
188 return
189
190 changeids = sorted(classifications.keys())
191 yield self.addBuildsetForChanges(reason='scheduler',
192 changeids=changeids)
193
194 max_changeid = changeids[-1] # (changeids are sorted)
195 yield self.master.db.schedulers.flushChangeClassifications(
196 self.objectid, less_than=max_changeid+1)
197
198 class SingleBranchScheduler(BaseBasicScheduler):
199 def getChangeFilter(self, branch, branches, change_filter, categories):
200 if branch is NotABranch and not change_filter and self.codebases is not None:
201 config.error(
202 "the 'branch' argument to SingleBranchScheduler is " +
203 "mandatory unless change_filter or codebases is provided")
204 elif branches is not NotABranch:
205 config.error(
206 "the 'branches' argument is not allowed for " +
207 "SingleBranchScheduler")
208
209
210 return filter.ChangeFilter.fromSchedulerConstructorArgs(
211 change_filter=change_filter, branch=branch,
212 categories=categories)
213
214 def getTimerNameForChange(self, change):
215 return "only" # this class only uses one timer
216
217 def getChangeClassificationsForTimer(self, objectid, timer_name):
218 return self.master.db.schedulers.getChangeClassifications(
219 self.objectid)
220
221
222 class Scheduler(SingleBranchScheduler):
223 "alias for SingleBranchScheduler"
224 def __init__(self, *args, **kwargs):
225 log.msg("WARNING: the name 'Scheduler' is deprecated; use " +
226 "buildbot.schedulers.basic.SingleBranchScheduler instead " +
227 "(note that this may require you to change your import " +
228 "statement)")
229 SingleBranchScheduler.__init__(self, *args, **kwargs)
230
231
232 class AnyBranchScheduler(BaseBasicScheduler):
233 def getChangeFilter(self, branch, branches, change_filter, categories):
234 assert branch is NotABranch
235 return filter.ChangeFilter.fromSchedulerConstructorArgs(
236 change_filter=change_filter, branch=branches,
237 categories=categories)
238
239 def getTimerNameForChange(self, change):
240 return change.branch
241
242 def getChangeClassificationsForTimer(self, objectid, timer_name):
243 branch = timer_name # set in getTimerNameForChange
244 return self.master.db.schedulers.getChangeClassifications(
245 self.objectid, branch=branch)
246
247 # now at buildbot.schedulers.dependent, but keep the old name alive
248 Dependent = dependent.Dependent
249
[end of master/buildbot/schedulers/basic.py]
</code>
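For context, a minimal master.cfg-style sketch of how the scheduler above is typically configured; the builder name and the 60-second tree-stable window are assumptions for illustration only.

# Assumed master.cfg fragment; only keyword arguments are accepted, as enforced above.
from buildbot.schedulers.basic import SingleBranchScheduler

c = BuildmasterConfig = {}   # standard master.cfg convention
c['schedulers'] = [
    SingleBranchScheduler(
        name='quick',
        branch='master',
        treeStableTimer=60,          # wait for 60s of tree stability before building
        builderNames=['runtests'],   # assumed builder defined elsewhere in the config
    ),
]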
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/schedulers/basic.py b/master/buildbot/schedulers/basic.py
--- a/master/buildbot/schedulers/basic.py
+++ b/master/buildbot/schedulers/basic.py
@@ -96,14 +96,12 @@
def stopService(self):
# the base stopService will unsubscribe from new changes
d = base.BaseScheduler.stopService(self)
- d.addCallback(lambda _ :
- self._stable_timers_lock.acquire())
+ @util.deferredLocked(self._stable_timers_lock)
def cancel_timers(_):
for timer in self._stable_timers.values():
if timer:
timer.cancel()
- self._stable_timers = {}
- self._stable_timers_lock.release()
+ self._stable_timers.clear()
d.addCallback(cancel_timers)
return d
@@ -195,6 +193,11 @@
yield self.master.db.schedulers.flushChangeClassifications(
self.objectid, less_than=max_changeid+1)
+ def getPendingBuildTimes(self):
+        # This isn't locked, since the caller expects an immediate value,
+ # and in any case, this is only an estimate.
+ return [timer.getTime() for timer in self._stable_timers.values() if timer and timer.active()]
+
class SingleBranchScheduler(BaseBasicScheduler):
def getChangeFilter(self, branch, branches, change_filter, categories):
if branch is NotABranch and not change_filter and self.codebases is not None:
| {"golden_diff": "diff --git a/master/buildbot/schedulers/basic.py b/master/buildbot/schedulers/basic.py\n--- a/master/buildbot/schedulers/basic.py\n+++ b/master/buildbot/schedulers/basic.py\n@@ -96,14 +96,12 @@\n def stopService(self):\n # the base stopService will unsubscribe from new changes\n d = base.BaseScheduler.stopService(self)\n- d.addCallback(lambda _ :\n- self._stable_timers_lock.acquire())\n+ @util.deferredLocked(self._stable_timers_lock)\n def cancel_timers(_):\n for timer in self._stable_timers.values():\n if timer:\n timer.cancel()\n- self._stable_timers = {}\n- self._stable_timers_lock.release()\n+ self._stable_timers.clear()\n d.addCallback(cancel_timers)\n return d\n \n@@ -195,6 +193,11 @@\n yield self.master.db.schedulers.flushChangeClassifications(\n self.objectid, less_than=max_changeid+1)\n \n+ def getPendingBuildTimes(self):\n+ # This isn't locked, since the caller expects and immediate value,\n+ # and in any case, this is only an estimate.\n+ return [timer.getTime() for timer in self._stable_timers.values() if timer and timer.active()]\n+\n class SingleBranchScheduler(BaseBasicScheduler):\n def getChangeFilter(self, branch, branches, change_filter, categories):\n if branch is NotABranch and not change_filter and self.codebases is not None:\n", "issue": "enable evaluation of coverage for buildbot-worker\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom twisted.internet import defer, reactor\nfrom twisted.python import log\nfrom buildbot import util, config\nfrom buildbot.util import NotABranch\nfrom collections import defaultdict\nfrom buildbot.changes import filter, changes\nfrom buildbot.schedulers import base, dependent\n\nclass BaseBasicScheduler(base.BaseScheduler):\n \"\"\"\n @param onlyImportant: If True, only important changes will be added to the\n buildset.\n @type onlyImportant: boolean\n\n \"\"\"\n\n compare_attrs = (base.BaseScheduler.compare_attrs +\n ('treeStableTimer', 'change_filter', 'fileIsImportant',\n 'onlyImportant') )\n\n _reactor = reactor # for tests\n\n class NotSet: pass\n def __init__(self, name, shouldntBeSet=NotSet, treeStableTimer=None,\n builderNames=None, branch=NotABranch, branches=NotABranch,\n fileIsImportant=None, properties={}, categories=None,\n change_filter=None, onlyImportant=False, **kwargs):\n if shouldntBeSet is not self.NotSet:\n config.error(\n \"pass arguments to schedulers using keyword arguments\")\n if fileIsImportant and not callable(fileIsImportant):\n config.error(\n \"fileIsImportant must be a callable\")\n\n # initialize parent classes\n base.BaseScheduler.__init__(self, name, builderNames, properties, **kwargs)\n\n self.treeStableTimer = treeStableTimer\n self.fileIsImportant = fileIsImportant\n self.onlyImportant = onlyImportant\n self.change_filter = self.getChangeFilter(branch=branch,\n branches=branches, change_filter=change_filter,\n categories=categories)\n\n # the IDelayedCall used to wake up when this scheduler's\n # treeStableTimer expires.\n self._stable_timers = defaultdict(lambda : None)\n self._stable_timers_lock = defer.DeferredLock()\n\n def getChangeFilter(self, branch, branches, change_filter, categories):\n raise NotImplementedError\n\n def startService(self, _returnDeferred=False):\n base.BaseScheduler.startService(self)\n\n d = self.startConsumingChanges(fileIsImportant=self.fileIsImportant,\n change_filter=self.change_filter,\n onlyImportant=self.onlyImportant)\n\n # if treeStableTimer is False, then we don't care about classified\n # changes, so get rid of any hanging around from previous\n # configurations\n if not self.treeStableTimer:\n d.addCallback(lambda _ :\n self.master.db.schedulers.flushChangeClassifications(\n self.objectid))\n\n # otherwise, if there are classified changes out there, start their\n # treeStableTimers again\n else:\n d.addCallback(lambda _ :\n self.scanExistingClassifiedChanges())\n\n # handle Deferred errors, since startService does not return a Deferred\n d.addErrback(log.err, \"while starting SingleBranchScheduler '%s'\"\n % self.name)\n\n if _returnDeferred:\n return d # only used in tests\n\n def stopService(self):\n # the base stopService will unsubscribe from new changes\n d = base.BaseScheduler.stopService(self)\n d.addCallback(lambda _ :\n self._stable_timers_lock.acquire())\n def cancel_timers(_):\n for timer in self._stable_timers.values():\n if timer:\n timer.cancel()\n self._stable_timers = {}\n self._stable_timers_lock.release()\n d.addCallback(cancel_timers)\n return d\n\n @util.deferredLocked('_stable_timers_lock')\n def gotChange(self, change, important):\n if not self.treeStableTimer:\n # if there's no 
treeStableTimer, we can completely ignore\n # unimportant changes\n if not important:\n return defer.succeed(None)\n # otherwise, we'll build it right away\n return self.addBuildsetForChanges(reason='scheduler',\n changeids=[ change.number ])\n\n timer_name = self.getTimerNameForChange(change)\n\n # if we have a treeStableTimer, then record the change's importance\n # and:\n # - for an important change, start the timer\n # - for an unimportant change, reset the timer if it is running\n d = self.master.db.schedulers.classifyChanges(\n self.objectid, { change.number : important })\n def fix_timer(_):\n if not important and not self._stable_timers[timer_name]:\n return\n if self._stable_timers[timer_name]:\n self._stable_timers[timer_name].cancel()\n def fire_timer():\n d = self.stableTimerFired(timer_name)\n d.addErrback(log.err, \"while firing stable timer\")\n self._stable_timers[timer_name] = self._reactor.callLater(\n self.treeStableTimer, fire_timer)\n d.addCallback(fix_timer)\n return d\n\n @defer.inlineCallbacks\n def scanExistingClassifiedChanges(self):\n # call gotChange for each classified change. This is called at startup\n # and is intended to re-start the treeStableTimer for any changes that\n # had not yet been built when the scheduler was stopped.\n\n # NOTE: this may double-call gotChange for changes that arrive just as\n # the scheduler starts up. In practice, this doesn't hurt anything.\n classifications = \\\n yield self.master.db.schedulers.getChangeClassifications(\n self.objectid)\n\n # call gotChange for each change, after first fetching it from the db\n for changeid, important in classifications.iteritems():\n chdict = yield self.master.db.changes.getChange(changeid)\n\n if not chdict:\n continue\n\n change = yield changes.Change.fromChdict(self.master, chdict)\n yield self.gotChange(change, important)\n\n def getTimerNameForChange(self, change):\n raise NotImplementedError # see subclasses\n\n def getChangeClassificationsForTimer(self, objectid, timer_name):\n \"\"\"similar to db.schedulers.getChangeClassifications, but given timer\n name\"\"\"\n raise NotImplementedError # see subclasses\n\n @util.deferredLocked('_stable_timers_lock')\n @defer.inlineCallbacks\n def stableTimerFired(self, timer_name):\n # if the service has already been stoppd then just bail out\n if not self._stable_timers[timer_name]:\n return\n\n # delete this now-fired timer\n del self._stable_timers[timer_name]\n\n classifications = \\\n yield self.getChangeClassificationsForTimer(self.objectid,\n timer_name)\n\n # just in case: databases do weird things sometimes!\n if not classifications: # pragma: no cover\n return\n\n changeids = sorted(classifications.keys())\n yield self.addBuildsetForChanges(reason='scheduler',\n changeids=changeids)\n\n max_changeid = changeids[-1] # (changeids are sorted)\n yield self.master.db.schedulers.flushChangeClassifications(\n self.objectid, less_than=max_changeid+1)\n\nclass SingleBranchScheduler(BaseBasicScheduler):\n def getChangeFilter(self, branch, branches, change_filter, categories):\n if branch is NotABranch and not change_filter and self.codebases is not None:\n config.error(\n \"the 'branch' argument to SingleBranchScheduler is \" +\n \"mandatory unless change_filter or codebases is provided\")\n elif branches is not NotABranch:\n config.error(\n \"the 'branches' argument is not allowed for \" +\n \"SingleBranchScheduler\")\n\n\n return filter.ChangeFilter.fromSchedulerConstructorArgs(\n change_filter=change_filter, branch=branch,\n 
categories=categories)\n\n def getTimerNameForChange(self, change):\n return \"only\" # this class only uses one timer\n\n def getChangeClassificationsForTimer(self, objectid, timer_name):\n return self.master.db.schedulers.getChangeClassifications(\n self.objectid)\n\n\nclass Scheduler(SingleBranchScheduler):\n \"alias for SingleBranchScheduler\"\n def __init__(self, *args, **kwargs):\n log.msg(\"WARNING: the name 'Scheduler' is deprecated; use \" +\n \"buildbot.schedulers.basic.SingleBranchScheduler instead \" +\n \"(note that this may require you to change your import \" +\n \"statement)\")\n SingleBranchScheduler.__init__(self, *args, **kwargs)\n\n\nclass AnyBranchScheduler(BaseBasicScheduler):\n def getChangeFilter(self, branch, branches, change_filter, categories):\n assert branch is NotABranch\n return filter.ChangeFilter.fromSchedulerConstructorArgs(\n change_filter=change_filter, branch=branches,\n categories=categories)\n\n def getTimerNameForChange(self, change):\n return change.branch\n\n def getChangeClassificationsForTimer(self, objectid, timer_name):\n branch = timer_name # set in getTimerNameForChange\n return self.master.db.schedulers.getChangeClassifications(\n self.objectid, branch=branch)\n\n# now at buildbot.schedulers.dependent, but keep the old name alive\nDependent = dependent.Dependent\n", "path": "master/buildbot/schedulers/basic.py"}]} | 3,300 | 327 |
gh_patches_debug_37397 | rasdani/github-patches | git_diff | PaddlePaddle__models-2214 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
object detection eval&infer raises an error
Running eval&infer on a model trained with this object detection example fails with the following error:

mobilenet_ssd.py does not define the `mobile_net` function that eval and infer need; the `mobile_net` references in eval.py & infer.py probably need to be changed to `build_mobilenet_ssd`.
</issue>
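A minimal sketch of the rename the issue suggests for eval.py and infer.py; the image-first argument order follows the fix shown in the diff further down in this entry, and `image`, `num_classes`, `image_shape`, `fluid` and `args` are the names those scripts already define:

```python
# Hypothetical drop-in for the current call sites in eval.py / infer.py.
from mobilenet_ssd import build_mobilenet_ssd  # instead of the missing mobile_net

locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes, image_shape)
nmsed_out = fluid.layers.detection_output(
    locs, confs, box, box_var, nms_threshold=args.nms_threshold)
```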
<code>
[start of PaddleCV/object_detection/eval.py]
1 import os
2 import time
3 import numpy as np
4 import argparse
5 import functools
6 import math
7
8 import paddle
9 import paddle.fluid as fluid
10 import reader
11 from mobilenet_ssd import mobile_net
12 from utility import add_arguments, print_arguments
13
14 parser = argparse.ArgumentParser(description=__doc__)
15 add_arg = functools.partial(add_arguments, argparser=parser)
16 # yapf: disable
17 add_arg('dataset', str, 'pascalvoc', "coco2014, coco2017, and pascalvoc.")
18 add_arg('batch_size', int, 32, "Minibatch size.")
19 add_arg('use_gpu', bool, True, "Whether use GPU.")
20 add_arg('data_dir', str, '', "The data root path.")
21 add_arg('test_list', str, '', "The testing data lists.")
22 add_arg('model_dir', str, '', "The model path.")
23 add_arg('nms_threshold', float, 0.45, "NMS threshold.")
24 add_arg('ap_version', str, '11point', "integral, 11point.")
25 add_arg('resize_h', int, 300, "The resized image height.")
26 add_arg('resize_w', int, 300, "The resized image height.")
27 add_arg('mean_value_B', float, 127.5, "Mean value for B channel which will be subtracted.") #123.68
28 add_arg('mean_value_G', float, 127.5, "Mean value for G channel which will be subtracted.") #116.78
29 add_arg('mean_value_R', float, 127.5, "Mean value for R channel which will be subtracted.") #103.94
30 # yapf: enable
31
32
33 def build_program(main_prog, startup_prog, args, data_args):
34 image_shape = [3, data_args.resize_h, data_args.resize_w]
35 if 'coco' in data_args.dataset:
36 num_classes = 91
37 elif 'pascalvoc' in data_args.dataset:
38 num_classes = 21
39
40 with fluid.program_guard(main_prog, startup_prog):
41 py_reader = fluid.layers.py_reader(
42 capacity=64,
43 shapes=[[-1] + image_shape, [-1, 4], [-1, 1], [-1, 1]],
44 lod_levels=[0, 1, 1, 1],
45 dtypes=["float32", "float32", "int32", "int32"],
46 use_double_buffer=True)
47 with fluid.unique_name.guard():
48 image, gt_box, gt_label, difficult = fluid.layers.read_file(
49 py_reader)
50 locs, confs, box, box_var = mobile_net(num_classes, image,
51 image_shape)
52 nmsed_out = fluid.layers.detection_output(
53 locs, confs, box, box_var, nms_threshold=args.nms_threshold)
54 with fluid.program_guard(main_prog):
55 map = fluid.metrics.DetectionMAP(
56 nmsed_out,
57 gt_label,
58 gt_box,
59 difficult,
60 num_classes,
61 overlap_threshold=0.5,
62 evaluate_difficult=False,
63 ap_version=args.ap_version)
64 return py_reader, map
65
66
67 def eval(args, data_args, test_list, batch_size, model_dir=None):
68 startup_prog = fluid.Program()
69 test_prog = fluid.Program()
70
71 test_py_reader, map_eval = build_program(
72 main_prog=test_prog,
73 startup_prog=startup_prog,
74 args=args,
75 data_args=data_args)
76 test_prog = test_prog.clone(for_test=True)
77 place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()
78 exe = fluid.Executor(place)
79 exe.run(startup_prog)
80
81 def if_exist(var):
82 return os.path.exists(os.path.join(model_dir, var.name))
83
84 fluid.io.load_vars(
85 exe, model_dir, main_program=test_prog, predicate=if_exist)
86
87 test_reader = reader.test(data_args, test_list, batch_size=batch_size)
88 test_py_reader.decorate_paddle_reader(test_reader)
89
90 _, accum_map = map_eval.get_map_var()
91 map_eval.reset(exe)
92 test_py_reader.start()
93 try:
94 batch_id = 0
95 while True:
96 test_map, = exe.run(test_prog, fetch_list=[accum_map])
97 if batch_id % 10 == 0:
98 print("Batch {0}, map {1}".format(batch_id, test_map))
99 batch_id += 1
100 except (fluid.core.EOFException, StopIteration):
101 test_py_reader.reset()
102 print("Test model {0}, map {1}".format(model_dir, test_map))
103
104
105 if __name__ == '__main__':
106 args = parser.parse_args()
107 print_arguments(args)
108
109 data_dir = 'data/pascalvoc'
110 test_list = 'test.txt'
111 label_file = 'label_list'
112
113 if not os.path.exists(args.model_dir):
114 raise ValueError("The model path [%s] does not exist." %
115 (args.model_dir))
116 if 'coco' in args.dataset:
117 data_dir = 'data/coco'
118 if '2014' in args.dataset:
119 test_list = 'annotations/instances_val2014.json'
120 elif '2017' in args.dataset:
121 test_list = 'annotations/instances_val2017.json'
122
123 data_args = reader.Settings(
124 dataset=args.dataset,
125 data_dir=args.data_dir if len(args.data_dir) > 0 else data_dir,
126 label_file=label_file,
127 resize_h=args.resize_h,
128 resize_w=args.resize_w,
129 mean_value=[args.mean_value_B, args.mean_value_G, args.mean_value_R],
130 apply_distort=False,
131 apply_expand=False,
132 ap_version=args.ap_version)
133 eval(
134 args,
135 data_args=data_args,
136 test_list=args.test_list if len(args.test_list) > 0 else test_list,
137 batch_size=args.batch_size,
138 model_dir=args.model_dir)
139
[end of PaddleCV/object_detection/eval.py]
[start of PaddleCV/object_detection/infer.py]
1 import os
2 import time
3 import numpy as np
4 import argparse
5 import functools
6 from PIL import Image
7 from PIL import ImageDraw
8 from PIL import ImageFont
9
10 import paddle
11 import paddle.fluid as fluid
12 import reader
13 from mobilenet_ssd import mobile_net
14 from utility import add_arguments, print_arguments
15
16 parser = argparse.ArgumentParser(description=__doc__)
17 add_arg = functools.partial(add_arguments, argparser=parser)
18 # yapf: disable
19 add_arg('dataset', str, 'pascalvoc', "coco and pascalvoc.")
20 add_arg('use_gpu', bool, True, "Whether use GPU.")
21 add_arg('image_path', str, '', "The image used to inference and visualize.")
22 add_arg('model_dir', str, '', "The model path.")
23 add_arg('nms_threshold', float, 0.45, "NMS threshold.")
24 add_arg('confs_threshold', float, 0.5, "Confidence threshold to draw bbox.")
25 add_arg('resize_h', int, 300, "The resized image height.")
26 add_arg('resize_w', int, 300, "The resized image height.")
27 add_arg('mean_value_B', float, 127.5, "Mean value for B channel which will be subtracted.") #123.68
28 add_arg('mean_value_G', float, 127.5, "Mean value for G channel which will be subtracted.") #116.78
29 add_arg('mean_value_R', float, 127.5, "Mean value for R channel which will be subtracted.") #103.94
30 # yapf: enable
31
32
33 def infer(args, data_args, image_path, model_dir):
34 image_shape = [3, data_args.resize_h, data_args.resize_w]
35 if 'coco' in data_args.dataset:
36 num_classes = 91
37 # cocoapi
38 from pycocotools.coco import COCO
39 from pycocotools.cocoeval import COCOeval
40 label_fpath = os.path.join(data_dir, label_file)
41 coco = COCO(label_fpath)
42 category_ids = coco.getCatIds()
43 label_list = {
44 item['id']: item['name']
45 for item in coco.loadCats(category_ids)
46 }
47 label_list[0] = ['background']
48 elif 'pascalvoc' in data_args.dataset:
49 num_classes = 21
50 label_list = data_args.label_list
51
52 image = fluid.layers.data(name='image', shape=image_shape, dtype='float32')
53 locs, confs, box, box_var = mobile_net(num_classes, image, image_shape)
54 nmsed_out = fluid.layers.detection_output(
55 locs, confs, box, box_var, nms_threshold=args.nms_threshold)
56
57 place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()
58 exe = fluid.Executor(place)
59 # yapf: disable
60 if model_dir:
61 def if_exist(var):
62 return os.path.exists(os.path.join(model_dir, var.name))
63 fluid.io.load_vars(exe, model_dir, predicate=if_exist)
64 # yapf: enable
65 infer_reader = reader.infer(data_args, image_path)
66 feeder = fluid.DataFeeder(place=place, feed_list=[image])
67
68 data = infer_reader()
69
70 # switch network to test mode (i.e. batch norm test mode)
71 test_program = fluid.default_main_program().clone(for_test=True)
72 nmsed_out_v, = exe.run(test_program,
73 feed=feeder.feed([[data]]),
74 fetch_list=[nmsed_out],
75 return_numpy=False)
76 nmsed_out_v = np.array(nmsed_out_v)
77 draw_bounding_box_on_image(image_path, nmsed_out_v, args.confs_threshold,
78 label_list)
79
80
81 def draw_bounding_box_on_image(image_path, nms_out, confs_threshold,
82 label_list):
83 image = Image.open(image_path)
84 draw = ImageDraw.Draw(image)
85 im_width, im_height = image.size
86
87 for dt in nms_out:
88 if dt[1] < confs_threshold:
89 continue
90 category_id = dt[0]
91 bbox = dt[2:]
92 xmin, ymin, xmax, ymax = clip_bbox(dt[2:])
93 (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
94 ymin * im_height, ymax * im_height)
95 draw.line(
96 [(left, top), (left, bottom), (right, bottom), (right, top),
97 (left, top)],
98 width=4,
99 fill='red')
100 if image.mode == 'RGB':
101 draw.text((left, top), label_list[int(category_id)], (255, 255, 0))
102 image_name = image_path.split('/')[-1]
103 print("image with bbox drawed saved as {}".format(image_name))
104 image.save(image_name)
105
106
107 def clip_bbox(bbox):
108 xmin = max(min(bbox[0], 1.), 0.)
109 ymin = max(min(bbox[1], 1.), 0.)
110 xmax = max(min(bbox[2], 1.), 0.)
111 ymax = max(min(bbox[3], 1.), 0.)
112 return xmin, ymin, xmax, ymax
113
114
115 if __name__ == '__main__':
116 args = parser.parse_args()
117 print_arguments(args)
118
119 data_dir = 'data/pascalvoc'
120 label_file = 'label_list'
121
122 if not os.path.exists(args.model_dir):
123 raise ValueError("The model path [%s] does not exist." %
124 (args.model_dir))
125 if 'coco' in args.dataset:
126 data_dir = 'data/coco'
127 label_file = 'annotations/instances_val2014.json'
128
129 data_args = reader.Settings(
130 dataset=args.dataset,
131 data_dir=data_dir,
132 label_file=label_file,
133 resize_h=args.resize_h,
134 resize_w=args.resize_w,
135 mean_value=[args.mean_value_B, args.mean_value_G, args.mean_value_R],
136 apply_distort=False,
137 apply_expand=False,
138 ap_version='')
139 infer(
140 args,
141 data_args=data_args,
142 image_path=args.image_path,
143 model_dir=args.model_dir)
144
[end of PaddleCV/object_detection/infer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PaddleCV/object_detection/eval.py b/PaddleCV/object_detection/eval.py
--- a/PaddleCV/object_detection/eval.py
+++ b/PaddleCV/object_detection/eval.py
@@ -8,7 +8,7 @@
import paddle
import paddle.fluid as fluid
import reader
-from mobilenet_ssd import mobile_net
+from mobilenet_ssd import build_mobilenet_ssd
from utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__)
@@ -47,8 +47,8 @@
with fluid.unique_name.guard():
image, gt_box, gt_label, difficult = fluid.layers.read_file(
py_reader)
- locs, confs, box, box_var = mobile_net(num_classes, image,
- image_shape)
+ locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes,
+ image_shape)
nmsed_out = fluid.layers.detection_output(
locs, confs, box, box_var, nms_threshold=args.nms_threshold)
with fluid.program_guard(main_prog):
@@ -67,7 +67,6 @@
def eval(args, data_args, test_list, batch_size, model_dir=None):
startup_prog = fluid.Program()
test_prog = fluid.Program()
-
test_py_reader, map_eval = build_program(
main_prog=test_prog,
startup_prog=startup_prog,
diff --git a/PaddleCV/object_detection/infer.py b/PaddleCV/object_detection/infer.py
--- a/PaddleCV/object_detection/infer.py
+++ b/PaddleCV/object_detection/infer.py
@@ -10,7 +10,7 @@
import paddle
import paddle.fluid as fluid
import reader
-from mobilenet_ssd import mobile_net
+from mobilenet_ssd import build_mobilenet_ssd
from utility import add_arguments, print_arguments
parser = argparse.ArgumentParser(description=__doc__)
@@ -50,7 +50,8 @@
label_list = data_args.label_list
image = fluid.layers.data(name='image', shape=image_shape, dtype='float32')
- locs, confs, box, box_var = mobile_net(num_classes, image, image_shape)
+ locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes,
+ image_shape)
nmsed_out = fluid.layers.detection_output(
locs, confs, box, box_var, nms_threshold=args.nms_threshold)
| {"golden_diff": "diff --git a/PaddleCV/object_detection/eval.py b/PaddleCV/object_detection/eval.py\n--- a/PaddleCV/object_detection/eval.py\n+++ b/PaddleCV/object_detection/eval.py\n@@ -8,7 +8,7 @@\n import paddle\n import paddle.fluid as fluid\n import reader\n-from mobilenet_ssd import mobile_net\n+from mobilenet_ssd import build_mobilenet_ssd\n from utility import add_arguments, print_arguments\n \n parser = argparse.ArgumentParser(description=__doc__)\n@@ -47,8 +47,8 @@\n with fluid.unique_name.guard():\n image, gt_box, gt_label, difficult = fluid.layers.read_file(\n py_reader)\n- locs, confs, box, box_var = mobile_net(num_classes, image,\n- image_shape)\n+ locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes,\n+ image_shape)\n nmsed_out = fluid.layers.detection_output(\n locs, confs, box, box_var, nms_threshold=args.nms_threshold)\n with fluid.program_guard(main_prog):\n@@ -67,7 +67,6 @@\n def eval(args, data_args, test_list, batch_size, model_dir=None):\n startup_prog = fluid.Program()\n test_prog = fluid.Program()\n-\n test_py_reader, map_eval = build_program(\n main_prog=test_prog,\n startup_prog=startup_prog,\ndiff --git a/PaddleCV/object_detection/infer.py b/PaddleCV/object_detection/infer.py\n--- a/PaddleCV/object_detection/infer.py\n+++ b/PaddleCV/object_detection/infer.py\n@@ -10,7 +10,7 @@\n import paddle\n import paddle.fluid as fluid\n import reader\n-from mobilenet_ssd import mobile_net\n+from mobilenet_ssd import build_mobilenet_ssd\n from utility import add_arguments, print_arguments\n \n parser = argparse.ArgumentParser(description=__doc__)\n@@ -50,7 +50,8 @@\n label_list = data_args.label_list\n \n image = fluid.layers.data(name='image', shape=image_shape, dtype='float32')\n- locs, confs, box, box_var = mobile_net(num_classes, image, image_shape)\n+ locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes,\n+ image_shape)\n nmsed_out = fluid.layers.detection_output(\n locs, confs, box, box_var, nms_threshold=args.nms_threshold)\n", "issue": "object detection eval&infer\u62a5\u9519\n\u5bf9object detection\u8bad\u7ec3\u51fa\u7684\u6a21\u578b\u8fdb\u884ceval&infer\u62a5\u9519\u4fe1\u606f\u5982\u4e0b\uff1a\r\n\r\nmobilenet_ssd.py\u6587\u4ef6\u4e2d\u5e76\u6ca1\u6709eval\u548cinfer\u4e2d\u9700\u8981\u7684mobile_net\u51fd\u6570\uff0c\u53ef\u80fd\u9700\u5c06eval.py&infer.py\u4e2d\u7684mobile_net\u4fee\u6539\u4e3abuild_mobilenet_ssd\n", "before_files": [{"content": "import os\nimport time\nimport numpy as np\nimport argparse\nimport functools\nimport math\n\nimport paddle\nimport paddle.fluid as fluid\nimport reader\nfrom mobilenet_ssd import mobile_net\nfrom utility import add_arguments, print_arguments\n\nparser = argparse.ArgumentParser(description=__doc__)\nadd_arg = functools.partial(add_arguments, argparser=parser)\n# yapf: disable\nadd_arg('dataset', str, 'pascalvoc', \"coco2014, coco2017, and pascalvoc.\")\nadd_arg('batch_size', int, 32, \"Minibatch size.\")\nadd_arg('use_gpu', bool, True, \"Whether use GPU.\")\nadd_arg('data_dir', str, '', \"The data root path.\")\nadd_arg('test_list', str, '', \"The testing data lists.\")\nadd_arg('model_dir', str, '', \"The model path.\")\nadd_arg('nms_threshold', float, 0.45, \"NMS threshold.\")\nadd_arg('ap_version', str, '11point', \"integral, 11point.\")\nadd_arg('resize_h', int, 300, \"The resized image height.\")\nadd_arg('resize_w', int, 300, \"The resized image height.\")\nadd_arg('mean_value_B', float, 127.5, \"Mean value for B channel which will be subtracted.\") #123.68\nadd_arg('mean_value_G', 
float, 127.5, \"Mean value for G channel which will be subtracted.\") #116.78\nadd_arg('mean_value_R', float, 127.5, \"Mean value for R channel which will be subtracted.\") #103.94\n# yapf: enable\n\n\ndef build_program(main_prog, startup_prog, args, data_args):\n image_shape = [3, data_args.resize_h, data_args.resize_w]\n if 'coco' in data_args.dataset:\n num_classes = 91\n elif 'pascalvoc' in data_args.dataset:\n num_classes = 21\n\n with fluid.program_guard(main_prog, startup_prog):\n py_reader = fluid.layers.py_reader(\n capacity=64,\n shapes=[[-1] + image_shape, [-1, 4], [-1, 1], [-1, 1]],\n lod_levels=[0, 1, 1, 1],\n dtypes=[\"float32\", \"float32\", \"int32\", \"int32\"],\n use_double_buffer=True)\n with fluid.unique_name.guard():\n image, gt_box, gt_label, difficult = fluid.layers.read_file(\n py_reader)\n locs, confs, box, box_var = mobile_net(num_classes, image,\n image_shape)\n nmsed_out = fluid.layers.detection_output(\n locs, confs, box, box_var, nms_threshold=args.nms_threshold)\n with fluid.program_guard(main_prog):\n map = fluid.metrics.DetectionMAP(\n nmsed_out,\n gt_label,\n gt_box,\n difficult,\n num_classes,\n overlap_threshold=0.5,\n evaluate_difficult=False,\n ap_version=args.ap_version)\n return py_reader, map\n\n\ndef eval(args, data_args, test_list, batch_size, model_dir=None):\n startup_prog = fluid.Program()\n test_prog = fluid.Program()\n\n test_py_reader, map_eval = build_program(\n main_prog=test_prog,\n startup_prog=startup_prog,\n args=args,\n data_args=data_args)\n test_prog = test_prog.clone(for_test=True)\n place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n exe.run(startup_prog)\n\n def if_exist(var):\n return os.path.exists(os.path.join(model_dir, var.name))\n\n fluid.io.load_vars(\n exe, model_dir, main_program=test_prog, predicate=if_exist)\n\n test_reader = reader.test(data_args, test_list, batch_size=batch_size)\n test_py_reader.decorate_paddle_reader(test_reader)\n\n _, accum_map = map_eval.get_map_var()\n map_eval.reset(exe)\n test_py_reader.start()\n try:\n batch_id = 0\n while True:\n test_map, = exe.run(test_prog, fetch_list=[accum_map])\n if batch_id % 10 == 0:\n print(\"Batch {0}, map {1}\".format(batch_id, test_map))\n batch_id += 1\n except (fluid.core.EOFException, StopIteration):\n test_py_reader.reset()\n print(\"Test model {0}, map {1}\".format(model_dir, test_map))\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n print_arguments(args)\n\n data_dir = 'data/pascalvoc'\n test_list = 'test.txt'\n label_file = 'label_list'\n\n if not os.path.exists(args.model_dir):\n raise ValueError(\"The model path [%s] does not exist.\" %\n (args.model_dir))\n if 'coco' in args.dataset:\n data_dir = 'data/coco'\n if '2014' in args.dataset:\n test_list = 'annotations/instances_val2014.json'\n elif '2017' in args.dataset:\n test_list = 'annotations/instances_val2017.json'\n\n data_args = reader.Settings(\n dataset=args.dataset,\n data_dir=args.data_dir if len(args.data_dir) > 0 else data_dir,\n label_file=label_file,\n resize_h=args.resize_h,\n resize_w=args.resize_w,\n mean_value=[args.mean_value_B, args.mean_value_G, args.mean_value_R],\n apply_distort=False,\n apply_expand=False,\n ap_version=args.ap_version)\n eval(\n args,\n data_args=data_args,\n test_list=args.test_list if len(args.test_list) > 0 else test_list,\n batch_size=args.batch_size,\n model_dir=args.model_dir)\n", "path": "PaddleCV/object_detection/eval.py"}, {"content": "import os\nimport time\nimport numpy as np\nimport 
argparse\nimport functools\nfrom PIL import Image\nfrom PIL import ImageDraw\nfrom PIL import ImageFont\n\nimport paddle\nimport paddle.fluid as fluid\nimport reader\nfrom mobilenet_ssd import mobile_net\nfrom utility import add_arguments, print_arguments\n\nparser = argparse.ArgumentParser(description=__doc__)\nadd_arg = functools.partial(add_arguments, argparser=parser)\n# yapf: disable\nadd_arg('dataset', str, 'pascalvoc', \"coco and pascalvoc.\")\nadd_arg('use_gpu', bool, True, \"Whether use GPU.\")\nadd_arg('image_path', str, '', \"The image used to inference and visualize.\")\nadd_arg('model_dir', str, '', \"The model path.\")\nadd_arg('nms_threshold', float, 0.45, \"NMS threshold.\")\nadd_arg('confs_threshold', float, 0.5, \"Confidence threshold to draw bbox.\")\nadd_arg('resize_h', int, 300, \"The resized image height.\")\nadd_arg('resize_w', int, 300, \"The resized image height.\")\nadd_arg('mean_value_B', float, 127.5, \"Mean value for B channel which will be subtracted.\") #123.68\nadd_arg('mean_value_G', float, 127.5, \"Mean value for G channel which will be subtracted.\") #116.78\nadd_arg('mean_value_R', float, 127.5, \"Mean value for R channel which will be subtracted.\") #103.94\n# yapf: enable\n\n\ndef infer(args, data_args, image_path, model_dir):\n image_shape = [3, data_args.resize_h, data_args.resize_w]\n if 'coco' in data_args.dataset:\n num_classes = 91\n # cocoapi\n from pycocotools.coco import COCO\n from pycocotools.cocoeval import COCOeval\n label_fpath = os.path.join(data_dir, label_file)\n coco = COCO(label_fpath)\n category_ids = coco.getCatIds()\n label_list = {\n item['id']: item['name']\n for item in coco.loadCats(category_ids)\n }\n label_list[0] = ['background']\n elif 'pascalvoc' in data_args.dataset:\n num_classes = 21\n label_list = data_args.label_list\n\n image = fluid.layers.data(name='image', shape=image_shape, dtype='float32')\n locs, confs, box, box_var = mobile_net(num_classes, image, image_shape)\n nmsed_out = fluid.layers.detection_output(\n locs, confs, box, box_var, nms_threshold=args.nms_threshold)\n\n place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n # yapf: disable\n if model_dir:\n def if_exist(var):\n return os.path.exists(os.path.join(model_dir, var.name))\n fluid.io.load_vars(exe, model_dir, predicate=if_exist)\n # yapf: enable\n infer_reader = reader.infer(data_args, image_path)\n feeder = fluid.DataFeeder(place=place, feed_list=[image])\n\n data = infer_reader()\n\n # switch network to test mode (i.e. 
batch norm test mode)\n test_program = fluid.default_main_program().clone(for_test=True)\n nmsed_out_v, = exe.run(test_program,\n feed=feeder.feed([[data]]),\n fetch_list=[nmsed_out],\n return_numpy=False)\n nmsed_out_v = np.array(nmsed_out_v)\n draw_bounding_box_on_image(image_path, nmsed_out_v, args.confs_threshold,\n label_list)\n\n\ndef draw_bounding_box_on_image(image_path, nms_out, confs_threshold,\n label_list):\n image = Image.open(image_path)\n draw = ImageDraw.Draw(image)\n im_width, im_height = image.size\n\n for dt in nms_out:\n if dt[1] < confs_threshold:\n continue\n category_id = dt[0]\n bbox = dt[2:]\n xmin, ymin, xmax, ymax = clip_bbox(dt[2:])\n (left, right, top, bottom) = (xmin * im_width, xmax * im_width,\n ymin * im_height, ymax * im_height)\n draw.line(\n [(left, top), (left, bottom), (right, bottom), (right, top),\n (left, top)],\n width=4,\n fill='red')\n if image.mode == 'RGB':\n draw.text((left, top), label_list[int(category_id)], (255, 255, 0))\n image_name = image_path.split('/')[-1]\n print(\"image with bbox drawed saved as {}\".format(image_name))\n image.save(image_name)\n\n\ndef clip_bbox(bbox):\n xmin = max(min(bbox[0], 1.), 0.)\n ymin = max(min(bbox[1], 1.), 0.)\n xmax = max(min(bbox[2], 1.), 0.)\n ymax = max(min(bbox[3], 1.), 0.)\n return xmin, ymin, xmax, ymax\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n print_arguments(args)\n\n data_dir = 'data/pascalvoc'\n label_file = 'label_list'\n\n if not os.path.exists(args.model_dir):\n raise ValueError(\"The model path [%s] does not exist.\" %\n (args.model_dir))\n if 'coco' in args.dataset:\n data_dir = 'data/coco'\n label_file = 'annotations/instances_val2014.json'\n\n data_args = reader.Settings(\n dataset=args.dataset,\n data_dir=data_dir,\n label_file=label_file,\n resize_h=args.resize_h,\n resize_w=args.resize_w,\n mean_value=[args.mean_value_B, args.mean_value_G, args.mean_value_R],\n apply_distort=False,\n apply_expand=False,\n ap_version='')\n infer(\n args,\n data_args=data_args,\n image_path=args.image_path,\n model_dir=args.model_dir)\n", "path": "PaddleCV/object_detection/infer.py"}]} | 4,070 | 546 |
gh_patches_debug_218 | rasdani/github-patches | git_diff | TheAlgorithms__Python-7054 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add typing to maths/segmented_sieve.py
### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [ ] This pull request is all my own work -- I have not plagiarized.
* [ ] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
</issue>
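A minimal sketch of the annotation this checklist asks for; only the signature changes, the body of `sieve` stays exactly as in the file below, and `list[int]` assumes the repository targets Python 3.9+ built-in generics:

```python
def sieve(n: int) -> list[int]:
    """Segmented Sieve."""
    ...  # existing implementation unchanged
```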
<code>
[start of maths/segmented_sieve.py]
1 """Segmented Sieve."""
2
3 import math
4
5
6 def sieve(n):
7 """Segmented Sieve."""
8 in_prime = []
9 start = 2
10 end = int(math.sqrt(n)) # Size of every segment
11 temp = [True] * (end + 1)
12 prime = []
13
14 while start <= end:
15 if temp[start] is True:
16 in_prime.append(start)
17 for i in range(start * start, end + 1, start):
18 temp[i] = False
19 start += 1
20 prime += in_prime
21
22 low = end + 1
23 high = min(2 * end, n)
24
25 while low <= n:
26 temp = [True] * (high - low + 1)
27 for each in in_prime:
28
29 t = math.floor(low / each) * each
30 if t < low:
31 t += each
32
33 for j in range(t, high + 1, each):
34 temp[j - low] = False
35
36 for j in range(len(temp)):
37 if temp[j] is True:
38 prime.append(j + low)
39
40 low = high + 1
41 high = min(high + end, n)
42
43 return prime
44
45
46 print(sieve(10**6))
47
[end of maths/segmented_sieve.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/maths/segmented_sieve.py b/maths/segmented_sieve.py
--- a/maths/segmented_sieve.py
+++ b/maths/segmented_sieve.py
@@ -3,7 +3,7 @@
import math
-def sieve(n):
+def sieve(n: int) -> list[int]:
"""Segmented Sieve."""
in_prime = []
start = 2
| {"golden_diff": "diff --git a/maths/segmented_sieve.py b/maths/segmented_sieve.py\n--- a/maths/segmented_sieve.py\n+++ b/maths/segmented_sieve.py\n@@ -3,7 +3,7 @@\n import math\r\n \r\n \r\n-def sieve(n):\r\n+def sieve(n: int) -> list[int]:\r\n \"\"\"Segmented Sieve.\"\"\"\r\n in_prime = []\r\n start = 2\n", "issue": "Add typing to maths/segmented_sieve.py\n### Describe your change:\r\n\r\n\r\n\r\n* [ ] Add an algorithm?\r\n* [ ] Fix a bug or typo in an existing algorithm?\r\n* [ ] Documentation change?\r\n\r\n### Checklist:\r\n* [ ] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).\r\n* [ ] This pull request is all my own work -- I have not plagiarized.\r\n* [ ] I know that pull requests will not be merged if they fail the automated tests.\r\n* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.\r\n* [ ] All new Python files are placed inside an existing directory.\r\n* [ ] All filenames are in all lowercase characters with no spaces or dashes.\r\n* [ ] All functions and variable names follow Python naming conventions.\r\n* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).\r\n* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.\r\n* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.\r\n* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.\r\n\n", "before_files": [{"content": "\"\"\"Segmented Sieve.\"\"\"\r\n\r\nimport math\r\n\r\n\r\ndef sieve(n):\r\n \"\"\"Segmented Sieve.\"\"\"\r\n in_prime = []\r\n start = 2\r\n end = int(math.sqrt(n)) # Size of every segment\r\n temp = [True] * (end + 1)\r\n prime = []\r\n\r\n while start <= end:\r\n if temp[start] is True:\r\n in_prime.append(start)\r\n for i in range(start * start, end + 1, start):\r\n temp[i] = False\r\n start += 1\r\n prime += in_prime\r\n\r\n low = end + 1\r\n high = min(2 * end, n)\r\n\r\n while low <= n:\r\n temp = [True] * (high - low + 1)\r\n for each in in_prime:\r\n\r\n t = math.floor(low / each) * each\r\n if t < low:\r\n t += each\r\n\r\n for j in range(t, high + 1, each):\r\n temp[j - low] = False\r\n\r\n for j in range(len(temp)):\r\n if temp[j] is True:\r\n prime.append(j + low)\r\n\r\n low = high + 1\r\n high = min(high + end, n)\r\n\r\n return prime\r\n\r\n\r\nprint(sieve(10**6))\r\n", "path": "maths/segmented_sieve.py"}]} | 1,197 | 92 |
gh_patches_debug_11701 | rasdani/github-patches | git_diff | buildbot__buildbot-4048 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Maximum recursion depth exceeded while calling a Python object
Noticed this exception during buildbot start. I am using Buildbot 1.1.1.
2018-04-11 21:06:54-0700 [-] Unhandled error in Deferred:
2018-04-11 21:06:54-0700 [-] Unhandled Error
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1442, in gotResult
_inlineCallbacks(r, g, deferred)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1429, in _inlineCallbacks
deferred.callback(e.value)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 459, in callback
self._startRunCallbacks(result)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 567, in _startRunCallbacks
self._runCallbacks()
--- <exception caught here> ---
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/lib64/python2.7/site-packages/twisted/spread/pb.py", line 1117, in _sendFailureOrError
self._sendFailure(fail, requestID)
File "/usr/lib64/python2.7/site-packages/twisted/spread/pb.py", line 1129, in _sendFailure
log.msg("Peer will receive following PB traceback:")
File "/usr/lib64/python2.7/site-packages/twisted/python/threadable.py", line 53, in sync
return function(self, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/twisted/python/log.py", line 286, in msg
_publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
File "/usr/lib64/python2.7/site-packages/twisted/logger/_legacy.py", line 154, in publishToNewObserver
observer(eventDict)
File "/usr/lib64/python2.7/site-packages/twisted/logger/_observer.py", line 133, in __call__
brokenObservers.append((observer, Failure()))
exceptions.RuntimeError: maximum recursion depth exceeded while calling a Python object
</issue>
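The diff reproduced after the code listing addresses this by deferring the lock release to a later reactor turn instead of releasing it inside the same callback chain that is being logged. A hedged sketch of that change, as a fragment of `Dispatcher.requestAvatarId` with everything else kept as in the listing below:

```python
from buildbot.util.eventual import eventually  # new module-level import

class Dispatcher(service.AsyncService):
    @defer.inlineCallbacks
    def requestAvatarId(self, creds):
        try:
            yield self.master.initLock.acquire()
            ...  # credential checks exactly as in the current implementation
        finally:
            # Schedule the release for a later reactor turn rather than running
            # it synchronously inside the failing callback chain.
            eventually(self.master.initLock.release)
```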
<code>
[start of master/buildbot/pbmanager.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import print_function
18
19 from twisted.application import strports
20 from twisted.cred import checkers
21 from twisted.cred import credentials
22 from twisted.cred import error
23 from twisted.cred import portal
24 from twisted.internet import defer
25 from twisted.python import log
26 from twisted.spread import pb
27 from zope.interface import implementer
28
29 from buildbot.process.properties import Properties
30 from buildbot.util import bytes2unicode
31 from buildbot.util import service
32 from buildbot.util import unicode2bytes
33
34 debug = False
35
36
37 class PBManager(service.AsyncMultiService):
38
39 """
40 A centralized manager for PB ports and authentication on them.
41
42 Allows various pieces of code to request a (port, username) combo, along
43 with a password and a perspective factory.
44 """
45
46 def __init__(self):
47 service.AsyncMultiService.__init__(self)
48 self.setName('pbmanager')
49 self.dispatchers = {}
50
51 def register(self, portstr, username, password, pfactory):
52 """
53 Register a perspective factory PFACTORY to be executed when a PB
54 connection arrives on PORTSTR with USERNAME/PASSWORD. Returns a
55 Registration object which can be used to unregister later.
56 """
57 # do some basic normalization of portstrs
58 if isinstance(portstr, type(0)) or ':' not in portstr:
59 portstr = "tcp:%s" % portstr
60
61 reg = Registration(self, portstr, username)
62
63 if portstr not in self.dispatchers:
64 disp = self.dispatchers[portstr] = Dispatcher(portstr)
65 disp.setServiceParent(self)
66 else:
67 disp = self.dispatchers[portstr]
68
69 disp.register(username, password, pfactory)
70
71 return reg
72
73 def _unregister(self, registration):
74 disp = self.dispatchers[registration.portstr]
75 disp.unregister(registration.username)
76 registration.username = None
77 if not disp.users:
78 disp = self.dispatchers[registration.portstr]
79 del self.dispatchers[registration.portstr]
80 return defer.maybeDeferred(disp.disownServiceParent)
81 return defer.succeed(None)
82
83
84 class Registration(object):
85
86 def __init__(self, pbmanager, portstr, username):
87 self.portstr = portstr
88 "portstr this registration is active on"
89 self.username = username
90 "username of this registration"
91
92 self.pbmanager = pbmanager
93
94 def __repr__(self):
95 return "<pbmanager.Registration for %s on %s>" % \
96 (self.username, self.portstr)
97
98 def unregister(self):
99 """
100 Unregister this registration, removing the username from the port, and
101 closing the port if there are no more users left. Returns a Deferred.
102 """
103 return self.pbmanager._unregister(self)
104
105 def getPort(self):
106 """
107 Helper method for testing; returns the TCP port used for this
108 registration, even if it was specified as 0 and thus allocated by the
109 OS.
110 """
111 disp = self.pbmanager.dispatchers[self.portstr]
112 return disp.port.getHost().port
113
114
115 @implementer(portal.IRealm, checkers.ICredentialsChecker)
116 class Dispatcher(service.AsyncService):
117
118 credentialInterfaces = [credentials.IUsernamePassword,
119 credentials.IUsernameHashedPassword]
120
121 def __init__(self, portstr):
122 self.portstr = portstr
123 self.users = {}
124
125 # there's lots of stuff to set up for a PB connection!
126 self.portal = portal.Portal(self)
127 self.portal.registerChecker(self)
128 self.serverFactory = pb.PBServerFactory(self.portal)
129 self.serverFactory.unsafeTracebacks = True
130 self.port = None
131
132 def __repr__(self):
133 return "<pbmanager.Dispatcher for %s on %s>" % \
134 (", ".join(list(self.users)), self.portstr)
135
136 def startService(self):
137 assert not self.port
138 self.port = strports.listen(self.portstr, self.serverFactory)
139 return service.AsyncService.startService(self)
140
141 def stopService(self):
142 # stop listening on the port when shut down
143 assert self.port
144 port, self.port = self.port, None
145 d = defer.maybeDeferred(port.stopListening)
146 d.addCallback(lambda _: service.AsyncService.stopService(self))
147 return d
148
149 def register(self, username, password, pfactory):
150 if debug:
151 log.msg("registering username '%s' on pb port %s: %s"
152 % (username, self.portstr, pfactory))
153 if username in self.users:
154 raise KeyError("username '%s' is already registered on PB port %s"
155 % (username, self.portstr))
156 self.users[username] = (password, pfactory)
157
158 def unregister(self, username):
159 if debug:
160 log.msg("unregistering username '%s' on pb port %s"
161 % (username, self.portstr))
162 del self.users[username]
163
164 # IRealm
165
166 def requestAvatar(self, username, mind, interface):
167 assert interface == pb.IPerspective
168 username = bytes2unicode(username)
169 if username not in self.users:
170 d = defer.succeed(None) # no perspective
171 else:
172 _, afactory = self.users.get(username)
173 d = defer.maybeDeferred(afactory, mind, username)
174
175 # check that we got a perspective
176 @d.addCallback
177 def check(persp):
178 if not persp:
179 raise ValueError("no perspective for '%s'" % username)
180 return persp
181
182 # call the perspective's attached(mind)
183 @d.addCallback
184 def call_attached(persp):
185 d = defer.maybeDeferred(persp.attached, mind)
186 d.addCallback(lambda _: persp) # keep returning the perspective
187 return d
188
189 # return the tuple requestAvatar is expected to return
190 @d.addCallback
191 def done(persp):
192 return (pb.IPerspective, persp, lambda: persp.detached(mind))
193
194 return d
195
196 # ICredentialsChecker
197
198 @defer.inlineCallbacks
199 def requestAvatarId(self, creds):
200 p = Properties()
201 p.master = self.master
202 username = bytes2unicode(creds.username)
203 try:
204 yield self.master.initLock.acquire()
205 if username in self.users:
206 password, _ = self.users[username]
207 password = yield p.render(password)
208 matched = yield defer.maybeDeferred(
209 creds.checkPassword, unicode2bytes(password))
210 if not matched:
211 log.msg("invalid login from user '{}'".format(username))
212 raise error.UnauthorizedLogin()
213 defer.returnValue(creds.username)
214 log.msg("invalid login from unknown user '{}'".format(username))
215 raise error.UnauthorizedLogin()
216 finally:
217 yield self.master.initLock.release()
218
[end of master/buildbot/pbmanager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/pbmanager.py b/master/buildbot/pbmanager.py
--- a/master/buildbot/pbmanager.py
+++ b/master/buildbot/pbmanager.py
@@ -30,6 +30,7 @@
from buildbot.util import bytes2unicode
from buildbot.util import service
from buildbot.util import unicode2bytes
+from buildbot.util.eventual import eventually
debug = False
@@ -214,4 +215,4 @@
log.msg("invalid login from unknown user '%s'" % creds.username)
raise error.UnauthorizedLogin()
finally:
- yield self.master.initLock.release()
+ eventually(self.master.initLock.release)
| {"golden_diff": "diff --git a/master/buildbot/pbmanager.py b/master/buildbot/pbmanager.py\n--- a/master/buildbot/pbmanager.py\n+++ b/master/buildbot/pbmanager.py\n@@ -30,6 +30,7 @@\n from buildbot.util import bytes2unicode\n from buildbot.util import service\n from buildbot.util import unicode2bytes\n+from buildbot.util.eventual import eventually\n \n debug = False\n \n@@ -214,4 +215,4 @@\n log.msg(\"invalid login from unknown user '%s'\" % creds.username)\n raise error.UnauthorizedLogin()\n finally:\n- yield self.master.initLock.release()\n+ eventually(self.master.initLock.release)\n", "issue": "Maximum recursion depth exceeded while calling a Python object\nNoticed this exception during buildbot start. I am using Buildbot 1.1.1\r\n\r\n2018-04-11 21:06:54-0700 [-] Unhandled error in Deferred:\r\n2018-04-11 21:06:54-0700 [-] Unhandled Error\r\n Traceback (most recent call last):\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/defer.py\", line 1442, in gotResult\r\n _inlineCallbacks(r, g, deferred)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/defer.py\", line 1429, in _inlineCallbacks\r\n deferred.callback(e.value)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/defer.py\", line 459, in callback\r\n self._startRunCallbacks(result)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/defer.py\", line 567, in _startRunCallbacks\r\n self._runCallbacks()\r\n --- <exception caught here> ---\r\n File \"/usr/lib64/python2.7/site-packages/twisted/internet/defer.py\", line 653, in _runCallbacks\r\n current.result = callback(current.result, *args, **kw)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/spread/pb.py\", line 1117, in _sendFailureOrError\r\n self._sendFailure(fail, requestID)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/spread/pb.py\", line 1129, in _sendFailure\r\n log.msg(\"Peer will receive following PB traceback:\")\r\n File \"/usr/lib64/python2.7/site-packages/twisted/python/threadable.py\", line 53, in sync \r\n return function(self, *args, **kwargs)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/python/log.py\", line 286, in msg\r\n _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/logger/_legacy.py\", line 154, in publishToNewObserver\r\n observer(eventDict)\r\n File \"/usr/lib64/python2.7/site-packages/twisted/logger/_observer.py\", line 133, in __call__\r\n brokenObservers.append((observer, Failure()))\r\n exceptions.RuntimeError: maximum recursion depth exceeded while calling a Python object\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nfrom twisted.application import strports\nfrom twisted.cred import checkers\nfrom twisted.cred import credentials\nfrom twisted.cred import error\nfrom twisted.cred import portal\nfrom twisted.internet import defer\nfrom twisted.python import log\nfrom twisted.spread import pb\nfrom zope.interface import implementer\n\nfrom buildbot.process.properties import Properties\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import service\nfrom buildbot.util import unicode2bytes\n\ndebug = False\n\n\nclass PBManager(service.AsyncMultiService):\n\n \"\"\"\n A centralized manager for PB ports and authentication on them.\n\n Allows various pieces of code to request a (port, username) combo, along\n with a password and a perspective factory.\n \"\"\"\n\n def __init__(self):\n service.AsyncMultiService.__init__(self)\n self.setName('pbmanager')\n self.dispatchers = {}\n\n def register(self, portstr, username, password, pfactory):\n \"\"\"\n Register a perspective factory PFACTORY to be executed when a PB\n connection arrives on PORTSTR with USERNAME/PASSWORD. Returns a\n Registration object which can be used to unregister later.\n \"\"\"\n # do some basic normalization of portstrs\n if isinstance(portstr, type(0)) or ':' not in portstr:\n portstr = \"tcp:%s\" % portstr\n\n reg = Registration(self, portstr, username)\n\n if portstr not in self.dispatchers:\n disp = self.dispatchers[portstr] = Dispatcher(portstr)\n disp.setServiceParent(self)\n else:\n disp = self.dispatchers[portstr]\n\n disp.register(username, password, pfactory)\n\n return reg\n\n def _unregister(self, registration):\n disp = self.dispatchers[registration.portstr]\n disp.unregister(registration.username)\n registration.username = None\n if not disp.users:\n disp = self.dispatchers[registration.portstr]\n del self.dispatchers[registration.portstr]\n return defer.maybeDeferred(disp.disownServiceParent)\n return defer.succeed(None)\n\n\nclass Registration(object):\n\n def __init__(self, pbmanager, portstr, username):\n self.portstr = portstr\n \"portstr this registration is active on\"\n self.username = username\n \"username of this registration\"\n\n self.pbmanager = pbmanager\n\n def __repr__(self):\n return \"<pbmanager.Registration for %s on %s>\" % \\\n (self.username, self.portstr)\n\n def unregister(self):\n \"\"\"\n Unregister this registration, removing the username from the port, and\n closing the port if there are no more users left. 
Returns a Deferred.\n \"\"\"\n return self.pbmanager._unregister(self)\n\n def getPort(self):\n \"\"\"\n Helper method for testing; returns the TCP port used for this\n registration, even if it was specified as 0 and thus allocated by the\n OS.\n \"\"\"\n disp = self.pbmanager.dispatchers[self.portstr]\n return disp.port.getHost().port\n\n\n@implementer(portal.IRealm, checkers.ICredentialsChecker)\nclass Dispatcher(service.AsyncService):\n\n credentialInterfaces = [credentials.IUsernamePassword,\n credentials.IUsernameHashedPassword]\n\n def __init__(self, portstr):\n self.portstr = portstr\n self.users = {}\n\n # there's lots of stuff to set up for a PB connection!\n self.portal = portal.Portal(self)\n self.portal.registerChecker(self)\n self.serverFactory = pb.PBServerFactory(self.portal)\n self.serverFactory.unsafeTracebacks = True\n self.port = None\n\n def __repr__(self):\n return \"<pbmanager.Dispatcher for %s on %s>\" % \\\n (\", \".join(list(self.users)), self.portstr)\n\n def startService(self):\n assert not self.port\n self.port = strports.listen(self.portstr, self.serverFactory)\n return service.AsyncService.startService(self)\n\n def stopService(self):\n # stop listening on the port when shut down\n assert self.port\n port, self.port = self.port, None\n d = defer.maybeDeferred(port.stopListening)\n d.addCallback(lambda _: service.AsyncService.stopService(self))\n return d\n\n def register(self, username, password, pfactory):\n if debug:\n log.msg(\"registering username '%s' on pb port %s: %s\"\n % (username, self.portstr, pfactory))\n if username in self.users:\n raise KeyError(\"username '%s' is already registered on PB port %s\"\n % (username, self.portstr))\n self.users[username] = (password, pfactory)\n\n def unregister(self, username):\n if debug:\n log.msg(\"unregistering username '%s' on pb port %s\"\n % (username, self.portstr))\n del self.users[username]\n\n # IRealm\n\n def requestAvatar(self, username, mind, interface):\n assert interface == pb.IPerspective\n username = bytes2unicode(username)\n if username not in self.users:\n d = defer.succeed(None) # no perspective\n else:\n _, afactory = self.users.get(username)\n d = defer.maybeDeferred(afactory, mind, username)\n\n # check that we got a perspective\n @d.addCallback\n def check(persp):\n if not persp:\n raise ValueError(\"no perspective for '%s'\" % username)\n return persp\n\n # call the perspective's attached(mind)\n @d.addCallback\n def call_attached(persp):\n d = defer.maybeDeferred(persp.attached, mind)\n d.addCallback(lambda _: persp) # keep returning the perspective\n return d\n\n # return the tuple requestAvatar is expected to return\n @d.addCallback\n def done(persp):\n return (pb.IPerspective, persp, lambda: persp.detached(mind))\n\n return d\n\n # ICredentialsChecker\n\n @defer.inlineCallbacks\n def requestAvatarId(self, creds):\n p = Properties()\n p.master = self.master\n username = bytes2unicode(creds.username)\n try:\n yield self.master.initLock.acquire()\n if username in self.users:\n password, _ = self.users[username]\n password = yield p.render(password)\n matched = yield defer.maybeDeferred(\n creds.checkPassword, unicode2bytes(password))\n if not matched:\n log.msg(\"invalid login from user '{}'\".format(username))\n raise error.UnauthorizedLogin()\n defer.returnValue(creds.username)\n log.msg(\"invalid login from unknown user '{}'\".format(username))\n raise error.UnauthorizedLogin()\n finally:\n yield self.master.initLock.release()\n", "path": "master/buildbot/pbmanager.py"}]} | 3,325 | 149 
|
gh_patches_debug_16406 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1583 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sitecustomize is being run multiple times
When `opentelemetry-instrument` is run, the path of `opentelemetry-python`'s `sitecustomize.py` file is added to `PYTHONPATH`. This works fine unless the command executed by `opentelemetry-instrument` itself invokes the `python` executable, which causes this `sitecustomize` to be executed more than once. That is a problem because it means instrumentation may be applied multiple times.
I'll be modifying `sitecustomize` to remove its path from `PYTHONPATH` after it has been executed.
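
A minimal sketch of that idea (the placement inside the module and the exact pattern handling are implementation details, not a final patch):

```python
# Sketch: drop this module's directory from PYTHONPATH so that any child
# `python` process started by the instrumented command does not import
# this sitecustomize a second time.
from os import environ
from os.path import abspath, dirname, pathsep
from re import sub

environ["PYTHONPATH"] = sub(
    r"{}{}?".format(dirname(abspath(__file__)), pathsep),
    "",
    environ.get("PYTHONPATH", ""),
)
```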
</issue>
<code>
[start of opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import sys
16 from logging import getLogger
17 from os import environ, path
18
19 from pkg_resources import iter_entry_points
20
21 from opentelemetry.environment_variables import (
22 OTEL_PYTHON_DISABLED_INSTRUMENTATIONS,
23 )
24
25 logger = getLogger(__file__)
26
27
28 def _load_distros():
29 for entry_point in iter_entry_points("opentelemetry_distro"):
30 try:
31 entry_point.load()().configure() # type: ignore
32 logger.debug("Distribution %s configured", entry_point.name)
33 except Exception as exc: # pylint: disable=broad-except
34 logger.exception(
35 "Distribution %s configuration failed", entry_point.name
36 )
37 raise exc
38
39
40 def _load_instrumentors():
41 package_to_exclude = environ.get(OTEL_PYTHON_DISABLED_INSTRUMENTATIONS, [])
42 if isinstance(package_to_exclude, str):
43 package_to_exclude = package_to_exclude.split(",")
44 # to handle users entering "requests , flask" or "requests, flask" with spaces
45 package_to_exclude = [x.strip() for x in package_to_exclude]
46
47 for entry_point in iter_entry_points("opentelemetry_instrumentor"):
48 try:
49 if entry_point.name in package_to_exclude:
50 logger.debug(
51 "Instrumentation skipped for library %s", entry_point.name
52 )
53 continue
54 entry_point.load()().instrument() # type: ignore
55 logger.debug("Instrumented %s", entry_point.name)
56 except Exception as exc: # pylint: disable=broad-except
57 logger.exception("Instrumenting of %s failed", entry_point.name)
58 raise exc
59
60
61 def _load_configurators():
62 configured = None
63 for entry_point in iter_entry_points("opentelemetry_configurator"):
64 if configured is not None:
65 logger.warning(
66 "Configuration of %s not loaded, %s already loaded",
67 entry_point.name,
68 configured,
69 )
70 continue
71 try:
72 entry_point.load()().configure() # type: ignore
73 configured = entry_point.name
74 except Exception as exc: # pylint: disable=broad-except
75 logger.exception("Configuration of %s failed", entry_point.name)
76 raise exc
77
78
79 def initialize():
80 try:
81 _load_distros()
82 _load_configurators()
83 _load_instrumentors()
84 except Exception: # pylint: disable=broad-except
85 logger.exception("Failed to auto initialize opentelemetry")
86
87
88 if (
89 hasattr(sys, "argv")
90 and sys.argv[0].split(path.sep)[-1] == "celery"
91 and "worker" in sys.argv[1:]
92 ):
93 from celery.signals import worker_process_init # pylint:disable=E0401
94
95 @worker_process_init.connect(weak=False)
96 def init_celery(*args, **kwargs):
97 initialize()
98
99
100 else:
101 initialize()
102
[end of opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py
--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py
+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py
@@ -15,6 +15,8 @@
import sys
from logging import getLogger
from os import environ, path
+from os.path import abspath, dirname, pathsep
+from re import sub
from pkg_resources import iter_entry_points
@@ -83,6 +85,12 @@
_load_instrumentors()
except Exception: # pylint: disable=broad-except
logger.exception("Failed to auto initialize opentelemetry")
+ finally:
+ environ["PYTHONPATH"] = sub(
+ r"{}{}?".format(dirname(abspath(__file__)), pathsep),
+ "",
+ environ["PYTHONPATH"],
+ )
if (
| {"golden_diff": "diff --git a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py\n--- a/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py\n+++ b/opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py\n@@ -15,6 +15,8 @@\n import sys\n from logging import getLogger\n from os import environ, path\n+from os.path import abspath, dirname, pathsep\n+from re import sub\n \n from pkg_resources import iter_entry_points\n \n@@ -83,6 +85,12 @@\n _load_instrumentors()\n except Exception: # pylint: disable=broad-except\n logger.exception(\"Failed to auto initialize opentelemetry\")\n+ finally:\n+ environ[\"PYTHONPATH\"] = sub(\n+ r\"{}{}?\".format(dirname(abspath(__file__)), pathsep),\n+ \"\",\n+ environ[\"PYTHONPATH\"],\n+ )\n \n \n if (\n", "issue": "sitecustomize is being run multiple times\nWhen `opentelemetry-instrument` is run, the path of `opentelemetry-python`'s `sitecustomize.py` file is added to `PYTHONPATH`. This works fine unless the command executed by `opentelemetry-instrument` is also calling the `python` executable, which would make this `sitecustomize` be executed more than once. This is bad because this means multiple instrumentations may happen.\r\n\r\nI'll be modifying `sitecustomize` to remove its path from `PYTHONPATH` after it has been executed.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport sys\nfrom logging import getLogger\nfrom os import environ, path\n\nfrom pkg_resources import iter_entry_points\n\nfrom opentelemetry.environment_variables import (\n OTEL_PYTHON_DISABLED_INSTRUMENTATIONS,\n)\n\nlogger = getLogger(__file__)\n\n\ndef _load_distros():\n for entry_point in iter_entry_points(\"opentelemetry_distro\"):\n try:\n entry_point.load()().configure() # type: ignore\n logger.debug(\"Distribution %s configured\", entry_point.name)\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\n \"Distribution %s configuration failed\", entry_point.name\n )\n raise exc\n\n\ndef _load_instrumentors():\n package_to_exclude = environ.get(OTEL_PYTHON_DISABLED_INSTRUMENTATIONS, [])\n if isinstance(package_to_exclude, str):\n package_to_exclude = package_to_exclude.split(\",\")\n # to handle users entering \"requests , flask\" or \"requests, flask\" with spaces\n package_to_exclude = [x.strip() for x in package_to_exclude]\n\n for entry_point in iter_entry_points(\"opentelemetry_instrumentor\"):\n try:\n if entry_point.name in package_to_exclude:\n logger.debug(\n \"Instrumentation skipped for library %s\", entry_point.name\n )\n continue\n entry_point.load()().instrument() # type: ignore\n logger.debug(\"Instrumented %s\", entry_point.name)\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\"Instrumenting 
of %s failed\", entry_point.name)\n raise exc\n\n\ndef _load_configurators():\n configured = None\n for entry_point in iter_entry_points(\"opentelemetry_configurator\"):\n if configured is not None:\n logger.warning(\n \"Configuration of %s not loaded, %s already loaded\",\n entry_point.name,\n configured,\n )\n continue\n try:\n entry_point.load()().configure() # type: ignore\n configured = entry_point.name\n except Exception as exc: # pylint: disable=broad-except\n logger.exception(\"Configuration of %s failed\", entry_point.name)\n raise exc\n\n\ndef initialize():\n try:\n _load_distros()\n _load_configurators()\n _load_instrumentors()\n except Exception: # pylint: disable=broad-except\n logger.exception(\"Failed to auto initialize opentelemetry\")\n\n\nif (\n hasattr(sys, \"argv\")\n and sys.argv[0].split(path.sep)[-1] == \"celery\"\n and \"worker\" in sys.argv[1:]\n):\n from celery.signals import worker_process_init # pylint:disable=E0401\n\n @worker_process_init.connect(weak=False)\n def init_celery(*args, **kwargs):\n initialize()\n\n\nelse:\n initialize()\n", "path": "opentelemetry-instrumentation/src/opentelemetry/instrumentation/auto_instrumentation/sitecustomize.py"}]} | 1,620 | 241 |
gh_patches_debug_10092 | rasdani/github-patches | git_diff | beeware__toga-645 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImageView example not working on Linux
## Expected Behavior
<!--- If you're describing a bug, tell us what you expect to happen. -->
ImageView demos display both a local image file and one from a web URL
<!--- If you're requesting a new feature, tell us why you'd like this feature. -->
## Current Behavior
<!--- If you're describing a bug, what currently happens? -->
Displays the wrong path variable when the image is not found
Concatenates the local application path and the URL when attempting to display a web URL
The rehint() function fails because it references a missing get_height attribute
I tried to address the first two issues with #532, still need to work on the 3rd.
## Steps to reproduce
<!--- Provide a set of steps describing how to reproduce this bug. If you have a live example, provide the link below -->
1. run the application in examples/imageview
2.
3.
## Your Environment
<!--- Provide details on your current environment you found the bug in -->
* Python Version (list the specific version number)
* Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well)
- [ ] macOS - version:
 - [x] Linux - distro: - version: Ubuntu 18.04
- [ ] Windows - version:
- [ ] Other - name: - version:
* Toga Target (the type of app you are trying to generate)
- [ ] android
- [ ] cocoa
- [ ] django
 - [x] gtk
- [ ] iOS
- [ ] tvOS
- [ ] watchOS
- [ ] winforms
- [ ] win32
- [ ] Other (please specify)
</issue>
<code>
[start of examples/imageview/imageview/app.py]
1 import os
2 import toga
3 from toga.style.pack import *
4
5 class ImageViewApp(toga.App):
6 def startup(self):
7 self.main_window = toga.MainWindow(title=self.name)
8
9 box = toga.Box()
10 box.style.padding = 40
11 box.style.update(alignment=CENTER)
12 box.style.update(direction=COLUMN)
13
14 # image from local path
15 # load brutus.png from the package
16 # We set the style width/height parameters for this one
17 image_from_path = toga.Image('resources/brutus.png')
18 imageview_from_path = toga.ImageView(image_from_path)
19 imageview_from_path.style.update(height=72)
20 imageview_from_path.style.update(width=72)
21 box.add(imageview_from_path)
22
23 # image from remote URL
24 # no style parameters - we let Pack determine how to allocate
25 # the space
26 image_from_url = toga.Image('https://pybee.org/project/projects/libraries/toga/toga.png')
27 imageview_from_url = toga.ImageView(image_from_url)
28 box.add(imageview_from_url)
29
30 self.main_window.content = box
31 self.main_window.show()
32
33 def main():
34 return ImageViewApp('ImageView', 'org.pybee.widgets.imageview')
35
36
37 if __name__ == '__main__':
38 app = main()
39 app.main_loop()
40
[end of examples/imageview/imageview/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/imageview/imageview/app.py b/examples/imageview/imageview/app.py
--- a/examples/imageview/imageview/app.py
+++ b/examples/imageview/imageview/app.py
@@ -14,7 +14,7 @@
# image from local path
# load brutus.png from the package
# We set the style width/height parameters for this one
- image_from_path = toga.Image('resources/brutus.png')
+ image_from_path = toga.Image('../resources/brutus.png')
imageview_from_path = toga.ImageView(image_from_path)
imageview_from_path.style.update(height=72)
imageview_from_path.style.update(width=72)
| {"golden_diff": "diff --git a/examples/imageview/imageview/app.py b/examples/imageview/imageview/app.py\n--- a/examples/imageview/imageview/app.py\n+++ b/examples/imageview/imageview/app.py\n@@ -14,7 +14,7 @@\n # image from local path\n # load brutus.png from the package\n # We set the style width/height parameters for this one\n- image_from_path = toga.Image('resources/brutus.png')\n+ image_from_path = toga.Image('../resources/brutus.png')\n imageview_from_path = toga.ImageView(image_from_path)\n imageview_from_path.style.update(height=72)\n imageview_from_path.style.update(width=72)\n", "issue": "ImageView example not working on Linux\n## Expected Behavior\r\n<!--- If you're describing a bug, tell us what you expect to happen. -->\r\nImageView demos display both a local image file and one from a web url\r\n<!--- If you're requesting a new feature, tell us why you'd like this feature. -->\r\n\r\n\r\n## Current Behavior\r\n<!--- If you're describing a bug, what currently happens? -->\r\nDisplays the wrong path variable when image not found\r\nConcatenates local application path and url when attempting to display web url\r\nproblem in rehint() function, missing attribute get_height\r\n\r\nI tried to address the first two issues with #532, still need to work on the 3rd.\r\n\r\n## Steps to reproduce\r\n<!--- Provide a set of steps describing how to reproduce this bug. If you have a live example, provide the link below -->\r\n1. run the application in examples/imageview\r\n\r\n2.\r\n\r\n3.\r\n\r\n\r\n## Your Environment\r\n<!--- Provide details on your current environment you found the bug in -->\r\n\r\n* Python Version (list the specific version number)\r\n\r\n* Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well)\r\n\r\n - [ ] macOS - version: \r\n - [ x] Linux - distro: - version: Ubuntu 18.04\r\n - [ ] Windows - version:\r\n - [ ] Other - name: - version:\r\n\r\n* Toga Target (the type of app you are trying to generate)\r\n \r\n - [ ] android\r\n - [ ] cocoa\r\n - [ ] django \r\n - [x ] gtk\r\n - [ ] iOS\r\n - [ ] tvOS\r\n - [ ] watchOS\r\n - [ ] winforms \r\n - [ ] win32\r\n - [ ] Other (please specify)\r\n\n", "before_files": [{"content": "import os\nimport toga\nfrom toga.style.pack import *\n\nclass ImageViewApp(toga.App):\n def startup(self):\n self.main_window = toga.MainWindow(title=self.name)\n \n box = toga.Box()\n box.style.padding = 40\n box.style.update(alignment=CENTER)\n box.style.update(direction=COLUMN)\n \n # image from local path\n # load brutus.png from the package\n # We set the style width/height parameters for this one\n image_from_path = toga.Image('resources/brutus.png')\n imageview_from_path = toga.ImageView(image_from_path)\n imageview_from_path.style.update(height=72)\n imageview_from_path.style.update(width=72)\n box.add(imageview_from_path)\n\n # image from remote URL\n # no style parameters - we let Pack determine how to allocate\n # the space\n image_from_url = toga.Image('https://pybee.org/project/projects/libraries/toga/toga.png')\n imageview_from_url = toga.ImageView(image_from_url)\n box.add(imageview_from_url)\n \n self.main_window.content = box\n self.main_window.show()\n\ndef main():\n return ImageViewApp('ImageView', 'org.pybee.widgets.imageview')\n\n\nif __name__ == '__main__':\n app = main()\n app.main_loop()\n", "path": "examples/imageview/imageview/app.py"}]} | 1,289 | 154 |
gh_patches_debug_39176 | rasdani/github-patches | git_diff | pulp__pulpcore-2768 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
As a plugin writer, I want to have a function for touching content units
Author: @lubosmj (lmjachky)
Redmine Issue: 9419, https://pulp.plan.io/issues/9419
---
In the PR https://github.com/pulp/pulpcore/pull/1624, we introduced a method that uses `bulk_touch` for updating timestamps of content units. We should expose this method to all plugin writers (e.g., pulp_container currently implements the same method - this creates unnecessary duplicates).
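
A rough sketch of what a shared helper could look like (the function name and its final location in the plugin API are assumptions; the queryset `touch()` call is the existing bulk-touch entry point used in pulpcore itself):

```python
from pulpcore.app.models import Content


def touch_content_units(content_pks):
    """Bulk-update the last-touched timestamp of the given content units."""
    # Wraps the existing queryset touch() behind a plugin-facing helper so
    # plugins such as pulp_container no longer need to duplicate it.
    Content.objects.filter(pk__in=content_pks).touch()
```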
</issue>
<code>
[start of pulpcore/plugin/actions.py]
1 from gettext import gettext as _
2 from drf_spectacular.utils import extend_schema
3 from rest_framework.decorators import action
4 from rest_framework.serializers import ValidationError
5
6 from pulpcore.app import tasks
7 from pulpcore.app.models import Content, RepositoryVersion
8 from pulpcore.app.response import OperationPostponedResponse
9 from pulpcore.app.serializers import (
10 AsyncOperationResponseSerializer,
11 RepositoryAddRemoveContentSerializer,
12 )
13 from pulpcore.app.viewsets import NamedModelViewSet
14 from pulpcore.tasking.tasks import dispatch
15
16
17 __all__ = ["ModifyRepositoryActionMixin"]
18
19
20 class ModifyRepositoryActionMixin:
21 @extend_schema(
22 description="Trigger an asynchronous task to create a new repository version.",
23 summary="Modify Repository Content",
24 responses={202: AsyncOperationResponseSerializer},
25 )
26 @action(detail=True, methods=["post"], serializer_class=RepositoryAddRemoveContentSerializer)
27 def modify(self, request, pk):
28 """
29 Queues a task that creates a new RepositoryVersion by adding and removing content units
30 """
31 add_content_units = {}
32 remove_content_units = {}
33
34 repository = self.get_object()
35 serializer = self.get_serializer(data=request.data)
36 serializer.is_valid(raise_exception=True)
37
38 if "base_version" in request.data:
39 base_version_pk = self.get_resource(request.data["base_version"], RepositoryVersion).pk
40 else:
41 base_version_pk = None
42
43 if "add_content_units" in request.data:
44 for url in request.data["add_content_units"]:
45 add_content_units[NamedModelViewSet.extract_pk(url)] = url
46
47 content_units_pks = set(add_content_units.keys())
48 existing_content_units = Content.objects.filter(pk__in=content_units_pks)
49 existing_content_units.touch()
50
51 self.verify_content_units(existing_content_units, add_content_units)
52
53 add_content_units = list(add_content_units.keys())
54
55 if "remove_content_units" in request.data:
56 if "*" in request.data["remove_content_units"]:
57 remove_content_units = ["*"]
58 else:
59 for url in request.data["remove_content_units"]:
60 remove_content_units[NamedModelViewSet.extract_pk(url)] = url
61 content_units_pks = set(remove_content_units.keys())
62 existing_content_units = Content.objects.filter(pk__in=content_units_pks)
63 self.verify_content_units(existing_content_units, remove_content_units)
64 remove_content_units = list(remove_content_units.keys())
65
66 task = dispatch(
67 tasks.repository.add_and_remove,
68 exclusive_resources=[repository],
69 kwargs={
70 "repository_pk": pk,
71 "base_version_pk": base_version_pk,
72 "add_content_units": add_content_units,
73 "remove_content_units": remove_content_units,
74 },
75 )
76 return OperationPostponedResponse(task, request)
77
78 def verify_content_units(self, content_units, all_content_units):
79 """Verify referenced content units."""
80 existing_content_units_pks = content_units.values_list("pk", flat=True)
81 existing_content_units_pks = {str(pk) for pk in existing_content_units_pks}
82
83 missing_pks = set(all_content_units.keys()) - existing_content_units_pks
84 if missing_pks:
85 missing_hrefs = [all_content_units[pk] for pk in missing_pks]
86 raise ValidationError(
87 _("Could not find the following content units: {}").format(missing_hrefs)
88 )
89
[end of pulpcore/plugin/actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/plugin/actions.py b/pulpcore/plugin/actions.py
--- a/pulpcore/plugin/actions.py
+++ b/pulpcore/plugin/actions.py
@@ -1,4 +1,5 @@
from gettext import gettext as _
+
from drf_spectacular.utils import extend_schema
from rest_framework.decorators import action
from rest_framework.serializers import ValidationError
@@ -48,7 +49,7 @@
existing_content_units = Content.objects.filter(pk__in=content_units_pks)
existing_content_units.touch()
- self.verify_content_units(existing_content_units, add_content_units)
+ raise_for_unknown_content_units(existing_content_units, add_content_units)
add_content_units = list(add_content_units.keys())
@@ -60,7 +61,7 @@
remove_content_units[NamedModelViewSet.extract_pk(url)] = url
content_units_pks = set(remove_content_units.keys())
existing_content_units = Content.objects.filter(pk__in=content_units_pks)
- self.verify_content_units(existing_content_units, remove_content_units)
+ raise_for_unknown_content_units(existing_content_units, remove_content_units)
remove_content_units = list(remove_content_units.keys())
task = dispatch(
@@ -75,14 +76,24 @@
)
return OperationPostponedResponse(task, request)
- def verify_content_units(self, content_units, all_content_units):
- """Verify referenced content units."""
- existing_content_units_pks = content_units.values_list("pk", flat=True)
- existing_content_units_pks = {str(pk) for pk in existing_content_units_pks}
-
- missing_pks = set(all_content_units.keys()) - existing_content_units_pks
- if missing_pks:
- missing_hrefs = [all_content_units[pk] for pk in missing_pks]
- raise ValidationError(
- _("Could not find the following content units: {}").format(missing_hrefs)
- )
+
+def raise_for_unknown_content_units(existing_content_units, content_units_pks_hrefs):
+ """Verify if all the specified content units were found in the database.
+
+ Args:
+ existing_content_units (pulpcore.plugin.models.Content): Content filtered by
+ specified_content_units.
+ content_units_pks_hrefs (dict): An original dictionary of pk-href pairs that
+ are used for the verification.
+ Raises:
+ ValidationError: If some of the referenced content units are not present in the database
+ """
+ existing_content_units_pks = existing_content_units.values_list("pk", flat=True)
+ existing_content_units_pks = set(map(str, existing_content_units_pks))
+
+ missing_pks = set(content_units_pks_hrefs.keys()) - existing_content_units_pks
+ if missing_pks:
+ missing_hrefs = [content_units_pks_hrefs[pk] for pk in missing_pks]
+ raise ValidationError(
+ _("Could not find the following content units: {}").format(missing_hrefs)
+ )
| {"golden_diff": "diff --git a/pulpcore/plugin/actions.py b/pulpcore/plugin/actions.py\n--- a/pulpcore/plugin/actions.py\n+++ b/pulpcore/plugin/actions.py\n@@ -1,4 +1,5 @@\n from gettext import gettext as _\n+\n from drf_spectacular.utils import extend_schema\n from rest_framework.decorators import action\n from rest_framework.serializers import ValidationError\n@@ -48,7 +49,7 @@\n existing_content_units = Content.objects.filter(pk__in=content_units_pks)\n existing_content_units.touch()\n \n- self.verify_content_units(existing_content_units, add_content_units)\n+ raise_for_unknown_content_units(existing_content_units, add_content_units)\n \n add_content_units = list(add_content_units.keys())\n \n@@ -60,7 +61,7 @@\n remove_content_units[NamedModelViewSet.extract_pk(url)] = url\n content_units_pks = set(remove_content_units.keys())\n existing_content_units = Content.objects.filter(pk__in=content_units_pks)\n- self.verify_content_units(existing_content_units, remove_content_units)\n+ raise_for_unknown_content_units(existing_content_units, remove_content_units)\n remove_content_units = list(remove_content_units.keys())\n \n task = dispatch(\n@@ -75,14 +76,24 @@\n )\n return OperationPostponedResponse(task, request)\n \n- def verify_content_units(self, content_units, all_content_units):\n- \"\"\"Verify referenced content units.\"\"\"\n- existing_content_units_pks = content_units.values_list(\"pk\", flat=True)\n- existing_content_units_pks = {str(pk) for pk in existing_content_units_pks}\n-\n- missing_pks = set(all_content_units.keys()) - existing_content_units_pks\n- if missing_pks:\n- missing_hrefs = [all_content_units[pk] for pk in missing_pks]\n- raise ValidationError(\n- _(\"Could not find the following content units: {}\").format(missing_hrefs)\n- )\n+\n+def raise_for_unknown_content_units(existing_content_units, content_units_pks_hrefs):\n+ \"\"\"Verify if all the specified content units were found in the database.\n+\n+ Args:\n+ existing_content_units (pulpcore.plugin.models.Content): Content filtered by\n+ specified_content_units.\n+ content_units_pks_hrefs (dict): An original dictionary of pk-href pairs that\n+ are used for the verification.\n+ Raises:\n+ ValidationError: If some of the referenced content units are not present in the database\n+ \"\"\"\n+ existing_content_units_pks = existing_content_units.values_list(\"pk\", flat=True)\n+ existing_content_units_pks = set(map(str, existing_content_units_pks))\n+\n+ missing_pks = set(content_units_pks_hrefs.keys()) - existing_content_units_pks\n+ if missing_pks:\n+ missing_hrefs = [content_units_pks_hrefs[pk] for pk in missing_pks]\n+ raise ValidationError(\n+ _(\"Could not find the following content units: {}\").format(missing_hrefs)\n+ )\n", "issue": "As a plugin writer, I want to have a function for touching content units\nAuthor: @lubosmj (lmjachky)\n\n\nRedmine Issue: 9419, https://pulp.plan.io/issues/9419\n\n---\n\nIn the PR https://github.com/pulp/pulpcore/pull/1624, we introduced a method that uses `bulk_touch` for updating timestamps of content units. 
We should expose this method to all plugin writers (e.g., pulp_container currently implements the same method - this creates unnecessary duplicates).\n\n\n\n", "before_files": [{"content": "from gettext import gettext as _\nfrom drf_spectacular.utils import extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.serializers import ValidationError\n\nfrom pulpcore.app import tasks\nfrom pulpcore.app.models import Content, RepositoryVersion\nfrom pulpcore.app.response import OperationPostponedResponse\nfrom pulpcore.app.serializers import (\n AsyncOperationResponseSerializer,\n RepositoryAddRemoveContentSerializer,\n)\nfrom pulpcore.app.viewsets import NamedModelViewSet\nfrom pulpcore.tasking.tasks import dispatch\n\n\n__all__ = [\"ModifyRepositoryActionMixin\"]\n\n\nclass ModifyRepositoryActionMixin:\n @extend_schema(\n description=\"Trigger an asynchronous task to create a new repository version.\",\n summary=\"Modify Repository Content\",\n responses={202: AsyncOperationResponseSerializer},\n )\n @action(detail=True, methods=[\"post\"], serializer_class=RepositoryAddRemoveContentSerializer)\n def modify(self, request, pk):\n \"\"\"\n Queues a task that creates a new RepositoryVersion by adding and removing content units\n \"\"\"\n add_content_units = {}\n remove_content_units = {}\n\n repository = self.get_object()\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n if \"base_version\" in request.data:\n base_version_pk = self.get_resource(request.data[\"base_version\"], RepositoryVersion).pk\n else:\n base_version_pk = None\n\n if \"add_content_units\" in request.data:\n for url in request.data[\"add_content_units\"]:\n add_content_units[NamedModelViewSet.extract_pk(url)] = url\n\n content_units_pks = set(add_content_units.keys())\n existing_content_units = Content.objects.filter(pk__in=content_units_pks)\n existing_content_units.touch()\n\n self.verify_content_units(existing_content_units, add_content_units)\n\n add_content_units = list(add_content_units.keys())\n\n if \"remove_content_units\" in request.data:\n if \"*\" in request.data[\"remove_content_units\"]:\n remove_content_units = [\"*\"]\n else:\n for url in request.data[\"remove_content_units\"]:\n remove_content_units[NamedModelViewSet.extract_pk(url)] = url\n content_units_pks = set(remove_content_units.keys())\n existing_content_units = Content.objects.filter(pk__in=content_units_pks)\n self.verify_content_units(existing_content_units, remove_content_units)\n remove_content_units = list(remove_content_units.keys())\n\n task = dispatch(\n tasks.repository.add_and_remove,\n exclusive_resources=[repository],\n kwargs={\n \"repository_pk\": pk,\n \"base_version_pk\": base_version_pk,\n \"add_content_units\": add_content_units,\n \"remove_content_units\": remove_content_units,\n },\n )\n return OperationPostponedResponse(task, request)\n\n def verify_content_units(self, content_units, all_content_units):\n \"\"\"Verify referenced content units.\"\"\"\n existing_content_units_pks = content_units.values_list(\"pk\", flat=True)\n existing_content_units_pks = {str(pk) for pk in existing_content_units_pks}\n\n missing_pks = set(all_content_units.keys()) - existing_content_units_pks\n if missing_pks:\n missing_hrefs = [all_content_units[pk] for pk in missing_pks]\n raise ValidationError(\n _(\"Could not find the following content units: {}\").format(missing_hrefs)\n )\n", "path": "pulpcore/plugin/actions.py"}]} | 1,529 | 657 |
gh_patches_debug_20885 | rasdani/github-patches | git_diff | nilearn__nilearn-394 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ugly side plots in doc
They mask part of the code they are supposed to illustrate. See an example [here](http://nilearn.github.io/building_blocks/data_preparation.html#computing-the-mask).

</issue>
<code>
[start of examples/connectivity/plot_adhd_covariance.py]
1 """
2 Computation of covariance matrix between brain regions
3 ======================================================
4
5 This example shows how to extract signals from regions defined by an atlas,
6 and to estimate a covariance matrix based on these signals.
7 """
8
9 plotted_subject = 0 # subject to plot
10
11
12 import matplotlib.pyplot as plt
13 import matplotlib
14 # Copied from matplotlib 1.2.0 for matplotlib 0.99 compatibility.
15 _bwr_data = ((0.0, 0.0, 1.0), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0))
16 plt.cm.register_cmap(cmap=matplotlib.colors.LinearSegmentedColormap.from_list(
17 "bwr", _bwr_data))
18
19
20 def plot_matrices(cov, prec, title):
21 """Plot covariance and precision matrices, for a given processing. """
22
23 prec = prec.copy() # avoid side effects
24
25 # Display sparsity pattern
26 sparsity = prec == 0
27 plt.figure()
28 plt.imshow(sparsity, interpolation="nearest")
29 plt.title("%s / sparsity" % title)
30
31 # Put zeros on the diagonal, for graph clarity.
32 size = prec.shape[0]
33 prec[range(size), range(size)] = 0
34 span = max(abs(prec.min()), abs(prec.max()))
35
36 # Display covariance matrix
37 plt.figure()
38 plt.imshow(cov, interpolation="nearest",
39 vmin=-1, vmax=1, cmap=plt.cm.get_cmap("bwr"))
40 plt.colorbar()
41 plt.title("%s / covariance" % title)
42
43 # Display precision matrix
44 plt.figure()
45 plt.imshow(prec, interpolation="nearest",
46 vmin=-span, vmax=span,
47 cmap=plt.cm.get_cmap("bwr"))
48 plt.colorbar()
49 plt.title("%s / precision" % title)
50
51
52 # Fetching datasets ###########################################################
53 print("-- Fetching datasets ...")
54 from nilearn import datasets
55 msdl_atlas_dataset = datasets.fetch_msdl_atlas()
56 adhd_dataset = datasets.fetch_adhd()
57
58 # Extracting region signals ###################################################
59 import nilearn.image
60 import nilearn.input_data
61
62 from sklearn.externals.joblib import Memory
63 mem = Memory(".")
64
65 # Number of subjects to consider for group-sparse covariance
66 n_subjects = 10
67 subjects = []
68
69 func_filenames = adhd_dataset.func
70 confound_filenames = adhd_dataset.confounds
71 for func_filename, confound_filename in zip(func_filenames,
72 confound_filenames):
73 print("Processing file %s" % func_filename)
74
75 print("-- Computing confounds ...")
76 hv_confounds = mem.cache(nilearn.image.high_variance_confounds)(
77 func_filename)
78
79 print("-- Computing region signals ...")
80 masker = nilearn.input_data.NiftiMapsMasker(
81 msdl_atlas_dataset.maps, resampling_target="maps", detrend=True,
82 low_pass=None, high_pass=0.01, t_r=2.5, standardize=True,
83 memory=mem, memory_level=1, verbose=1)
84 region_ts = masker.fit_transform(func_filename,
85 confounds=[hv_confounds,
86 confound_filename])
87 subjects.append(region_ts)
88
89 # Computing group-sparse precision matrices ###################################
90 print("-- Computing group-sparse precision matrices ...")
91 from nilearn.group_sparse_covariance import GroupSparseCovarianceCV
92 gsc = GroupSparseCovarianceCV(verbose=2, n_jobs=3)
93 gsc.fit(subjects)
94
95 print("-- Computing graph-lasso precision matrices ...")
96 from sklearn import covariance
97 gl = covariance.GraphLassoCV(n_jobs=3)
98 gl.fit(subjects[plotted_subject])
99
100 # Displaying results ##########################################################
101 print("-- Displaying results")
102 title = "{0:d} GroupSparseCovariance $\\alpha={1:.2e}$".format(plotted_subject,
103 gsc.alpha_)
104 plot_matrices(gsc.covariances_[..., plotted_subject],
105 gsc.precisions_[..., plotted_subject], title)
106
107 title = "{0:d} GraphLasso $\\alpha={1:.2e}$".format(plotted_subject,
108 gl.alpha_)
109 plot_matrices(gl.covariance_, gl.precision_, title)
110
111 plt.show()
112
[end of examples/connectivity/plot_adhd_covariance.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/connectivity/plot_adhd_covariance.py b/examples/connectivity/plot_adhd_covariance.py
--- a/examples/connectivity/plot_adhd_covariance.py
+++ b/examples/connectivity/plot_adhd_covariance.py
@@ -20,13 +20,11 @@
def plot_matrices(cov, prec, title):
"""Plot covariance and precision matrices, for a given processing. """
+ # Compute sparsity pattern
+ sparsity = (prec == 0)
+
prec = prec.copy() # avoid side effects
- # Display sparsity pattern
- sparsity = prec == 0
- plt.figure()
- plt.imshow(sparsity, interpolation="nearest")
- plt.title("%s / sparsity" % title)
# Put zeros on the diagonal, for graph clarity.
size = prec.shape[0]
@@ -39,6 +37,11 @@
vmin=-1, vmax=1, cmap=plt.cm.get_cmap("bwr"))
plt.colorbar()
plt.title("%s / covariance" % title)
+
+ # Display sparsity pattern
+ plt.figure()
+ plt.imshow(sparsity, interpolation="nearest")
+ plt.title("%s / sparsity" % title)
# Display precision matrix
plt.figure()
| {"golden_diff": "diff --git a/examples/connectivity/plot_adhd_covariance.py b/examples/connectivity/plot_adhd_covariance.py\n--- a/examples/connectivity/plot_adhd_covariance.py\n+++ b/examples/connectivity/plot_adhd_covariance.py\n@@ -20,13 +20,11 @@\n def plot_matrices(cov, prec, title):\n \"\"\"Plot covariance and precision matrices, for a given processing. \"\"\"\n \n+ # Compute sparsity pattern\n+ sparsity = (prec == 0)\n+ \n prec = prec.copy() # avoid side effects\n \n- # Display sparsity pattern\n- sparsity = prec == 0\n- plt.figure()\n- plt.imshow(sparsity, interpolation=\"nearest\")\n- plt.title(\"%s / sparsity\" % title)\n \n # Put zeros on the diagonal, for graph clarity.\n size = prec.shape[0]\n@@ -39,6 +37,11 @@\n vmin=-1, vmax=1, cmap=plt.cm.get_cmap(\"bwr\"))\n plt.colorbar()\n plt.title(\"%s / covariance\" % title)\n+ \n+ # Display sparsity pattern\n+ plt.figure()\n+ plt.imshow(sparsity, interpolation=\"nearest\")\n+ plt.title(\"%s / sparsity\" % title)\n \n # Display precision matrix\n plt.figure()\n", "issue": "Ugly side plots in doc\nThey mask part of the code they are supposed to illustrate. See an example [here](http://nilearn.github.io/building_blocks/data_preparation.html#computing-the-mask).\n\n\n\n", "before_files": [{"content": "\"\"\"\nComputation of covariance matrix between brain regions\n======================================================\n\nThis example shows how to extract signals from regions defined by an atlas,\nand to estimate a covariance matrix based on these signals.\n\"\"\"\n \nplotted_subject = 0 # subject to plot\n\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n# Copied from matplotlib 1.2.0 for matplotlib 0.99 compatibility.\n_bwr_data = ((0.0, 0.0, 1.0), (1.0, 1.0, 1.0), (1.0, 0.0, 0.0))\nplt.cm.register_cmap(cmap=matplotlib.colors.LinearSegmentedColormap.from_list(\n \"bwr\", _bwr_data))\n\n\ndef plot_matrices(cov, prec, title):\n \"\"\"Plot covariance and precision matrices, for a given processing. 
\"\"\"\n\n prec = prec.copy() # avoid side effects\n\n # Display sparsity pattern\n sparsity = prec == 0\n plt.figure()\n plt.imshow(sparsity, interpolation=\"nearest\")\n plt.title(\"%s / sparsity\" % title)\n\n # Put zeros on the diagonal, for graph clarity.\n size = prec.shape[0]\n prec[range(size), range(size)] = 0\n span = max(abs(prec.min()), abs(prec.max()))\n\n # Display covariance matrix\n plt.figure()\n plt.imshow(cov, interpolation=\"nearest\",\n vmin=-1, vmax=1, cmap=plt.cm.get_cmap(\"bwr\"))\n plt.colorbar()\n plt.title(\"%s / covariance\" % title)\n\n # Display precision matrix\n plt.figure()\n plt.imshow(prec, interpolation=\"nearest\",\n vmin=-span, vmax=span,\n cmap=plt.cm.get_cmap(\"bwr\"))\n plt.colorbar()\n plt.title(\"%s / precision\" % title)\n\n\n# Fetching datasets ###########################################################\nprint(\"-- Fetching datasets ...\")\nfrom nilearn import datasets\nmsdl_atlas_dataset = datasets.fetch_msdl_atlas()\nadhd_dataset = datasets.fetch_adhd()\n\n# Extracting region signals ###################################################\nimport nilearn.image\nimport nilearn.input_data\n\nfrom sklearn.externals.joblib import Memory\nmem = Memory(\".\")\n\n# Number of subjects to consider for group-sparse covariance\nn_subjects = 10\nsubjects = []\n\nfunc_filenames = adhd_dataset.func\nconfound_filenames = adhd_dataset.confounds\nfor func_filename, confound_filename in zip(func_filenames,\n confound_filenames):\n print(\"Processing file %s\" % func_filename)\n\n print(\"-- Computing confounds ...\")\n hv_confounds = mem.cache(nilearn.image.high_variance_confounds)(\n func_filename)\n\n print(\"-- Computing region signals ...\")\n masker = nilearn.input_data.NiftiMapsMasker(\n msdl_atlas_dataset.maps, resampling_target=\"maps\", detrend=True,\n low_pass=None, high_pass=0.01, t_r=2.5, standardize=True,\n memory=mem, memory_level=1, verbose=1)\n region_ts = masker.fit_transform(func_filename,\n confounds=[hv_confounds,\n confound_filename])\n subjects.append(region_ts)\n\n# Computing group-sparse precision matrices ###################################\nprint(\"-- Computing group-sparse precision matrices ...\")\nfrom nilearn.group_sparse_covariance import GroupSparseCovarianceCV\ngsc = GroupSparseCovarianceCV(verbose=2, n_jobs=3)\ngsc.fit(subjects)\n\nprint(\"-- Computing graph-lasso precision matrices ...\")\nfrom sklearn import covariance\ngl = covariance.GraphLassoCV(n_jobs=3)\ngl.fit(subjects[plotted_subject])\n\n# Displaying results ##########################################################\nprint(\"-- Displaying results\")\ntitle = \"{0:d} GroupSparseCovariance $\\\\alpha={1:.2e}$\".format(plotted_subject,\n gsc.alpha_)\nplot_matrices(gsc.covariances_[..., plotted_subject],\n gsc.precisions_[..., plotted_subject], title)\n\ntitle = \"{0:d} GraphLasso $\\\\alpha={1:.2e}$\".format(plotted_subject,\n gl.alpha_)\nplot_matrices(gl.covariance_, gl.precision_, title)\n\nplt.show()\n", "path": "examples/connectivity/plot_adhd_covariance.py"}]} | 1,787 | 289 |
gh_patches_debug_31635 | rasdani/github-patches | git_diff | quantumlib__Cirq-973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cirq.PauliInteractionGate's __repr__ output is not valid python
It returns things like "+X", which don't parse when you've imported cirq.
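
For reference, a repr along these lines would round-trip after `import cirq` (a sketch only; it assumes the Pauli operands print as X/Y/Z and are reachable from the top-level `cirq` namespace):

```python
def __repr__(self):
    # Qualify the operands so that eval(repr(gate)) is valid Python.
    return 'cirq.PauliInteractionGate(cirq.{!s}, {!s}, cirq.{!s}, {!s})'.format(
        self.pauli0, self.invert0, self.pauli1, self.invert1)
```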
</issue>
<code>
[start of cirq/ops/pauli_interaction_gate.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Hashable, List, Optional, Sequence, Tuple, Union
16
17 import numpy as np
18
19 from cirq import value
20 from cirq.ops import raw_types, gate_features, common_gates, eigen_gate, op_tree
21 from cirq.ops.pauli import Pauli
22 from cirq.ops.clifford_gate import SingleQubitCliffordGate
23
24
25 pauli_eigen_map = {
26 Pauli.X: (np.array([[0.5, 0.5 ], [0.5, 0.5]]),
27 np.array([[0.5, -0.5 ], [-0.5, 0.5]])),
28 Pauli.Y: (np.array([[0.5, -0.5j], [0.5j, 0.5]]),
29 np.array([[0.5, 0.5j], [-0.5j, 0.5]])),
30 Pauli.Z: (np.diag([1, 0]),
31 np.diag([0, 1])),
32 }
33
34
35 class PauliInteractionGate(eigen_gate.EigenGate,
36 gate_features.CompositeGate,
37 gate_features.InterchangeableQubitsGate,
38 gate_features.TextDiagrammable):
39 CZ = None # type: PauliInteractionGate
40 CNOT = None # type: PauliInteractionGate
41
42 def __init__(self,
43 pauli0: Pauli, invert0: bool,
44 pauli1: Pauli, invert1: bool,
45 *,
46 half_turns: Optional[Union[value.Symbol, float]] = None,
47 rads: Optional[float] = None,
48 degs: Optional[float] = None) -> None:
49 """At most one angle argument may be specified. If more are specified,
50 the result is considered ambiguous and an error is thrown. If no angle
51 argument is given, the default value of one half turn is used.
52
53 Args:
54 half_turns: Relative phasing of the interaction's eigenstates, in
55 half_turns.
56 rads: Relative phasing of the interaction's eigenstates, in radians.
57 degs: Relative phasing of the interaction's eigenstates, in degrees.
58 """
59 super().__init__(exponent=value.chosen_angle_to_half_turns(
60 half_turns=half_turns,
61 rads=rads,
62 degs=degs))
63 self.pauli0 = pauli0
64 self.invert0 = invert0
65 self.pauli1 = pauli1
66 self.invert1 = invert1
67
68 def _eq_tuple(self) -> Tuple[Hashable, ...]:
69 return (PauliInteractionGate,
70 self.pauli0, self.invert0,
71 self.pauli1, self.invert1,
72 self._exponent)
73
74 def __eq__(self, other):
75 if not isinstance(other, type(self)):
76 return NotImplemented
77 return self._eq_tuple() == other._eq_tuple()
78
79 def __ne__(self, other):
80 return not self == other
81
82 def __hash__(self):
83 return hash(self._eq_tuple())
84
85 def qubit_index_to_equivalence_group_key(self, index: int) -> int:
86 if self.pauli0 == self.pauli1 and self.invert0 == self.invert1:
87 return 0
88 else:
89 return index
90
91 def _canonical_exponent_period(self) -> Optional[float]:
92 return 2
93
94 def _with_exponent(self, exponent: Union[value.Symbol, float]
95 ) -> 'PauliInteractionGate':
96 return PauliInteractionGate(self.pauli0, self.invert0,
97 self.pauli1, self.invert1,
98 half_turns=exponent)
99
100 def _eigen_components(self) -> List[Tuple[float, np.ndarray]]:
101 comp1 = np.kron(pauli_eigen_map[self.pauli0][not self.invert0],
102 pauli_eigen_map[self.pauli1][not self.invert1])
103 comp0 = np.eye(4) - comp1
104 return [(0, comp0), (1, comp1)]
105
106 def default_decompose(self, qubits: Sequence[raw_types.QubitId]
107 ) -> op_tree.OP_TREE:
108 q0, q1 = qubits
109 right_gate0 = SingleQubitCliffordGate.from_single_map(
110 z_to=(self.pauli0, self.invert0))
111 right_gate1 = SingleQubitCliffordGate.from_single_map(
112 z_to=(self.pauli1, self.invert1))
113 left_gate0 = right_gate0.inverse()
114 left_gate1 = right_gate1.inverse()
115 yield left_gate0(q0)
116 yield left_gate1(q1)
117 yield common_gates.Rot11Gate(half_turns=self._exponent)(q0, q1)
118 yield right_gate0(q0)
119 yield right_gate1(q1)
120
121 def text_diagram_info(self, args: gate_features.TextDiagramInfoArgs
122 ) -> gate_features.TextDiagramInfo:
123 labels = {Pauli.X: 'X', Pauli.Y: 'Y', Pauli.Z: '@'}
124 l0 = labels[self.pauli0]
125 l1 = labels[self.pauli1]
126 # Add brackets around letter if inverted
127 l0, l1 = ('(-{})'.format(l) if inv else l
128 for l, inv in ((l0, self.invert0), (l1, self.invert1)))
129 return gate_features.TextDiagramInfo(
130 wire_symbols=(l0, l1),
131 exponent=self._exponent)
132
133 def __repr__(self):
134 return 'cirq.PauliInteractionGate({}{!s}, {}{!s})'.format(
135 '+-'[self.invert0], self.pauli0, '+-'[self.invert1], self.pauli1)
136
137
138 PauliInteractionGate.CZ = PauliInteractionGate(Pauli.Z, False, Pauli.Z, False)
139 PauliInteractionGate.CNOT = PauliInteractionGate(Pauli.Z, False, Pauli.X, False)
140
[end of cirq/ops/pauli_interaction_gate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/ops/pauli_interaction_gate.py b/cirq/ops/pauli_interaction_gate.py
--- a/cirq/ops/pauli_interaction_gate.py
+++ b/cirq/ops/pauli_interaction_gate.py
@@ -23,8 +23,8 @@
pauli_eigen_map = {
- Pauli.X: (np.array([[0.5, 0.5 ], [0.5, 0.5]]),
- np.array([[0.5, -0.5 ], [-0.5, 0.5]])),
+ Pauli.X: (np.array([[0.5, 0.5], [0.5, 0.5]]),
+ np.array([[0.5, -0.5], [-0.5, 0.5]])),
Pauli.Y: (np.array([[0.5, -0.5j], [0.5j, 0.5]]),
np.array([[0.5, 0.5j], [-0.5j, 0.5]])),
Pauli.Z: (np.diag([1, 0]),
@@ -107,9 +107,9 @@
) -> op_tree.OP_TREE:
q0, q1 = qubits
right_gate0 = SingleQubitCliffordGate.from_single_map(
- z_to=(self.pauli0, self.invert0))
+ z_to=(self.pauli0, self.invert0))
right_gate1 = SingleQubitCliffordGate.from_single_map(
- z_to=(self.pauli1, self.invert1))
+ z_to=(self.pauli1, self.invert1))
left_gate0 = right_gate0.inverse()
left_gate1 = right_gate1.inverse()
yield left_gate0(q0)
@@ -131,9 +131,10 @@
exponent=self._exponent)
def __repr__(self):
- return 'cirq.PauliInteractionGate({}{!s}, {}{!s})'.format(
- '+-'[self.invert0], self.pauli0, '+-'[self.invert1], self.pauli1)
+ return 'cirq.PauliInteractionGate(cirq.{}, {!s}, cirq.{}, {!s})'.format(
+ self.pauli0, self.invert0, self.pauli1, self.invert1)
PauliInteractionGate.CZ = PauliInteractionGate(Pauli.Z, False, Pauli.Z, False)
-PauliInteractionGate.CNOT = PauliInteractionGate(Pauli.Z, False, Pauli.X, False)
+PauliInteractionGate.CNOT = PauliInteractionGate(
+ Pauli.Z, False, Pauli.X, False)
| {"golden_diff": "diff --git a/cirq/ops/pauli_interaction_gate.py b/cirq/ops/pauli_interaction_gate.py\n--- a/cirq/ops/pauli_interaction_gate.py\n+++ b/cirq/ops/pauli_interaction_gate.py\n@@ -23,8 +23,8 @@\n \n \n pauli_eigen_map = {\n- Pauli.X: (np.array([[0.5, 0.5 ], [0.5, 0.5]]),\n- np.array([[0.5, -0.5 ], [-0.5, 0.5]])),\n+ Pauli.X: (np.array([[0.5, 0.5], [0.5, 0.5]]),\n+ np.array([[0.5, -0.5], [-0.5, 0.5]])),\n Pauli.Y: (np.array([[0.5, -0.5j], [0.5j, 0.5]]),\n np.array([[0.5, 0.5j], [-0.5j, 0.5]])),\n Pauli.Z: (np.diag([1, 0]),\n@@ -107,9 +107,9 @@\n ) -> op_tree.OP_TREE:\n q0, q1 = qubits\n right_gate0 = SingleQubitCliffordGate.from_single_map(\n- z_to=(self.pauli0, self.invert0))\n+ z_to=(self.pauli0, self.invert0))\n right_gate1 = SingleQubitCliffordGate.from_single_map(\n- z_to=(self.pauli1, self.invert1))\n+ z_to=(self.pauli1, self.invert1))\n left_gate0 = right_gate0.inverse()\n left_gate1 = right_gate1.inverse()\n yield left_gate0(q0)\n@@ -131,9 +131,10 @@\n exponent=self._exponent)\n \n def __repr__(self):\n- return 'cirq.PauliInteractionGate({}{!s}, {}{!s})'.format(\n- '+-'[self.invert0], self.pauli0, '+-'[self.invert1], self.pauli1)\n+ return 'cirq.PauliInteractionGate(cirq.{}, {!s}, cirq.{}, {!s})'.format(\n+ self.pauli0, self.invert0, self.pauli1, self.invert1)\n \n \n PauliInteractionGate.CZ = PauliInteractionGate(Pauli.Z, False, Pauli.Z, False)\n-PauliInteractionGate.CNOT = PauliInteractionGate(Pauli.Z, False, Pauli.X, False)\n+PauliInteractionGate.CNOT = PauliInteractionGate(\n+ Pauli.Z, False, Pauli.X, False)\n", "issue": "cirq.PauliInteractionGate's __repr__ output is not valid python\nIt returns things like \"+X\", which don't parse when you've imported cirq.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Hashable, List, Optional, Sequence, Tuple, Union\n\nimport numpy as np\n\nfrom cirq import value\nfrom cirq.ops import raw_types, gate_features, common_gates, eigen_gate, op_tree\nfrom cirq.ops.pauli import Pauli\nfrom cirq.ops.clifford_gate import SingleQubitCliffordGate\n\n\npauli_eigen_map = {\n Pauli.X: (np.array([[0.5, 0.5 ], [0.5, 0.5]]),\n np.array([[0.5, -0.5 ], [-0.5, 0.5]])),\n Pauli.Y: (np.array([[0.5, -0.5j], [0.5j, 0.5]]),\n np.array([[0.5, 0.5j], [-0.5j, 0.5]])),\n Pauli.Z: (np.diag([1, 0]),\n np.diag([0, 1])),\n}\n\n\nclass PauliInteractionGate(eigen_gate.EigenGate,\n gate_features.CompositeGate,\n gate_features.InterchangeableQubitsGate,\n gate_features.TextDiagrammable):\n CZ = None # type: PauliInteractionGate\n CNOT = None # type: PauliInteractionGate\n\n def __init__(self,\n pauli0: Pauli, invert0: bool,\n pauli1: Pauli, invert1: bool,\n *,\n half_turns: Optional[Union[value.Symbol, float]] = None,\n rads: Optional[float] = None,\n degs: Optional[float] = None) -> None:\n \"\"\"At most one angle argument may be specified. If more are specified,\n the result is considered ambiguous and an error is thrown. 
If no angle\n argument is given, the default value of one half turn is used.\n\n Args:\n half_turns: Relative phasing of the interaction's eigenstates, in\n half_turns.\n rads: Relative phasing of the interaction's eigenstates, in radians.\n degs: Relative phasing of the interaction's eigenstates, in degrees.\n \"\"\"\n super().__init__(exponent=value.chosen_angle_to_half_turns(\n half_turns=half_turns,\n rads=rads,\n degs=degs))\n self.pauli0 = pauli0\n self.invert0 = invert0\n self.pauli1 = pauli1\n self.invert1 = invert1\n\n def _eq_tuple(self) -> Tuple[Hashable, ...]:\n return (PauliInteractionGate,\n self.pauli0, self.invert0,\n self.pauli1, self.invert1,\n self._exponent)\n\n def __eq__(self, other):\n if not isinstance(other, type(self)):\n return NotImplemented\n return self._eq_tuple() == other._eq_tuple()\n\n def __ne__(self, other):\n return not self == other\n\n def __hash__(self):\n return hash(self._eq_tuple())\n\n def qubit_index_to_equivalence_group_key(self, index: int) -> int:\n if self.pauli0 == self.pauli1 and self.invert0 == self.invert1:\n return 0\n else:\n return index\n\n def _canonical_exponent_period(self) -> Optional[float]:\n return 2\n\n def _with_exponent(self, exponent: Union[value.Symbol, float]\n ) -> 'PauliInteractionGate':\n return PauliInteractionGate(self.pauli0, self.invert0,\n self.pauli1, self.invert1,\n half_turns=exponent)\n\n def _eigen_components(self) -> List[Tuple[float, np.ndarray]]:\n comp1 = np.kron(pauli_eigen_map[self.pauli0][not self.invert0],\n pauli_eigen_map[self.pauli1][not self.invert1])\n comp0 = np.eye(4) - comp1\n return [(0, comp0), (1, comp1)]\n\n def default_decompose(self, qubits: Sequence[raw_types.QubitId]\n ) -> op_tree.OP_TREE:\n q0, q1 = qubits\n right_gate0 = SingleQubitCliffordGate.from_single_map(\n z_to=(self.pauli0, self.invert0))\n right_gate1 = SingleQubitCliffordGate.from_single_map(\n z_to=(self.pauli1, self.invert1))\n left_gate0 = right_gate0.inverse()\n left_gate1 = right_gate1.inverse()\n yield left_gate0(q0)\n yield left_gate1(q1)\n yield common_gates.Rot11Gate(half_turns=self._exponent)(q0, q1)\n yield right_gate0(q0)\n yield right_gate1(q1)\n\n def text_diagram_info(self, args: gate_features.TextDiagramInfoArgs\n ) -> gate_features.TextDiagramInfo:\n labels = {Pauli.X: 'X', Pauli.Y: 'Y', Pauli.Z: '@'}\n l0 = labels[self.pauli0]\n l1 = labels[self.pauli1]\n # Add brackets around letter if inverted\n l0, l1 = ('(-{})'.format(l) if inv else l\n for l, inv in ((l0, self.invert0), (l1, self.invert1)))\n return gate_features.TextDiagramInfo(\n wire_symbols=(l0, l1),\n exponent=self._exponent)\n\n def __repr__(self):\n return 'cirq.PauliInteractionGate({}{!s}, {}{!s})'.format(\n '+-'[self.invert0], self.pauli0, '+-'[self.invert1], self.pauli1)\n\n\nPauliInteractionGate.CZ = PauliInteractionGate(Pauli.Z, False, Pauli.Z, False)\nPauliInteractionGate.CNOT = PauliInteractionGate(Pauli.Z, False, Pauli.X, False)\n", "path": "cirq/ops/pauli_interaction_gate.py"}]} | 2,364 | 630 |
gh_patches_debug_33939 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1978 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E2504 incorrectly rejects "Iops" property for io2/gp3 volumes
*cfn-lint version: (`cfn-lint --version`)*
cfn-lint 0.44.6
*Description of issue.*
cfn-lint produces an error "E2504: Iops shouldn't be defined for type io2 for Resource ... LaunchConfiguration/Properties/BlockDeviceMappings/0/Ebs/Iops" when setting Iops on an io2 EBS volume.
The Iops property is required for io2 and optional for gp3. [1]
Cfn-lint currently treats the Iops property as required for io1 and forbidden for all other volume types, which does not match the documentation.
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-template.html#cfn-ec2-blockdev-template-iops
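For illustration, a minimal sketch (not cfn-lint's actual rule code) of a check that follows the documented contract, where io1/io2 require Iops, gp3 may optionally set it, and the remaining volume types may not:
```
# Hypothetical helper, not part of cfn-lint: validates Iops against VolumeType.
def check_iops(volume_type, iops):
    requires_iops = {'io1', 'io2'}
    allows_iops = requires_iops | {'gp3'}
    if volume_type in requires_iops and iops is None:
        return 'VolumeType {} requires Iops'.format(volume_type)
    if volume_type not in allows_iops and iops is not None:
        return "Iops shouldn't be defined for type {}".format(volume_type)
    return None  # no finding

assert check_iops('io2', 3000) is None   # currently rejected by E2504
assert check_iops('gp3', 3000) is None   # Iops is optional for gp3
assert check_iops('gp2', 3000) is not None
```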
</issue>
<code>
[start of src/cfnlint/rules/resources/ectwo/Ebs.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import re
6 import six
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9
10
11 class Ebs(CloudFormationLintRule):
12 """Check if Ec2 Ebs Resource Properties"""
13 id = 'E2504'
14 shortdesc = 'Check Ec2 Ebs Properties'
15 description = 'See if Ec2 Eb2 Properties are valid'
16 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-template.html'
17 tags = ['properties', 'ec2', 'ebs']
18
19 def _checkEbs(self, cfn, ebs, path):
20 matches = []
21
22 if isinstance(ebs, dict):
23 volume_types_obj = cfn.get_values(ebs, 'VolumeType')
24 iops_obj = cfn.get_values(ebs, 'Iops')
25 if volume_types_obj is not None:
26 for volume_type_obj in volume_types_obj:
27 volume_type = volume_type_obj.get('Value')
28 if isinstance(volume_type, six.string_types):
29 if volume_type == 'io1':
30 if iops_obj is None:
31 pathmessage = path[:] + ['VolumeType']
32 message = 'VolumeType io1 requires Iops to be specified for {0}'
33 matches.append(
34 RuleMatch(pathmessage, message.format('/'.join(map(str, pathmessage)))))
35 elif volume_type:
36 if iops_obj is not None:
37 pathmessage = path[:] + ['Iops']
38 message = 'Iops shouldn\'t be defined for type {0} for {1}'
39 matches.append(
40 RuleMatch(
41 pathmessage,
42 message.format(volume_type, '/'.join(map(str, pathmessage)))))
43
44 return matches
45
46 def match(self, cfn):
47 """Check Ec2 Ebs Resource Parameters"""
48
49 matches = []
50
51 results = cfn.get_resource_properties(['AWS::EC2::Instance', 'BlockDeviceMappings'])
52 results.extend(cfn.get_resource_properties(
53 ['AWS::AutoScaling::LaunchConfiguration', 'BlockDeviceMappings']))
54 for result in results:
55 path = result['Path']
56 if isinstance(result['Value'], list):
57 for index, properties in enumerate(result['Value']):
58 virtual_name = properties.get('VirtualName')
59 ebs = properties.get('Ebs')
60 if virtual_name:
61 # switch to regex
62 if not re.match(r'^ephemeral([0-9]|[1][0-9]|[2][0-3])$', virtual_name):
63 pathmessage = path[:] + [index, 'VirtualName']
64 message = 'Property VirtualName should be of type ephemeral(n) for {0}'
65 matches.append(
66 RuleMatch(pathmessage, message.format('/'.join(map(str, pathmessage)))))
67 elif ebs:
68 matches.extend(self._checkEbs(cfn, ebs, path[:] + [index, 'Ebs']))
69 return matches
70
[end of src/cfnlint/rules/resources/ectwo/Ebs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/ectwo/Ebs.py b/src/cfnlint/rules/resources/ectwo/Ebs.py
--- a/src/cfnlint/rules/resources/ectwo/Ebs.py
+++ b/src/cfnlint/rules/resources/ectwo/Ebs.py
@@ -9,10 +9,10 @@
class Ebs(CloudFormationLintRule):
- """Check if Ec2 Ebs Resource Properties"""
+ """Check Ec2 Ebs Resource Properties"""
id = 'E2504'
shortdesc = 'Check Ec2 Ebs Properties'
- description = 'See if Ec2 Eb2 Properties are valid'
+ description = 'See if Ec2 Ebs Properties are valid'
source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-template.html'
tags = ['properties', 'ec2', 'ebs']
@@ -26,13 +26,15 @@
for volume_type_obj in volume_types_obj:
volume_type = volume_type_obj.get('Value')
if isinstance(volume_type, six.string_types):
- if volume_type == 'io1':
+ if volume_type in ('io1', 'io2'):
if iops_obj is None:
pathmessage = path[:] + ['VolumeType']
- message = 'VolumeType io1 requires Iops to be specified for {0}'
+ message = 'VolumeType {0} requires Iops to be specified for {1}'
matches.append(
- RuleMatch(pathmessage, message.format('/'.join(map(str, pathmessage)))))
- elif volume_type:
+ RuleMatch(
+ pathmessage,
+ message.format(volume_type, '/'.join(map(str, pathmessage)))))
+ elif volume_type in ('gp2', 'st1', 'sc1', 'standard'):
if iops_obj is not None:
pathmessage = path[:] + ['Iops']
message = 'Iops shouldn\'t be defined for type {0} for {1}'
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/ectwo/Ebs.py b/src/cfnlint/rules/resources/ectwo/Ebs.py\n--- a/src/cfnlint/rules/resources/ectwo/Ebs.py\n+++ b/src/cfnlint/rules/resources/ectwo/Ebs.py\n@@ -9,10 +9,10 @@\n \n \n class Ebs(CloudFormationLintRule):\n- \"\"\"Check if Ec2 Ebs Resource Properties\"\"\"\n+ \"\"\"Check Ec2 Ebs Resource Properties\"\"\"\n id = 'E2504'\n shortdesc = 'Check Ec2 Ebs Properties'\n- description = 'See if Ec2 Eb2 Properties are valid'\n+ description = 'See if Ec2 Ebs Properties are valid'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-template.html'\n tags = ['properties', 'ec2', 'ebs']\n \n@@ -26,13 +26,15 @@\n for volume_type_obj in volume_types_obj:\n volume_type = volume_type_obj.get('Value')\n if isinstance(volume_type, six.string_types):\n- if volume_type == 'io1':\n+ if volume_type in ('io1', 'io2'):\n if iops_obj is None:\n pathmessage = path[:] + ['VolumeType']\n- message = 'VolumeType io1 requires Iops to be specified for {0}'\n+ message = 'VolumeType {0} requires Iops to be specified for {1}'\n matches.append(\n- RuleMatch(pathmessage, message.format('/'.join(map(str, pathmessage)))))\n- elif volume_type:\n+ RuleMatch(\n+ pathmessage,\n+ message.format(volume_type, '/'.join(map(str, pathmessage)))))\n+ elif volume_type in ('gp2', 'st1', 'sc1', 'standard'):\n if iops_obj is not None:\n pathmessage = path[:] + ['Iops']\n message = 'Iops shouldn\\'t be defined for type {0} for {1}'\n", "issue": "E2504 incorrectly rejects \"Iops\" property for io2/gp3 volumes\n*cfn-lint version: (`cfn-lint --version`)*\r\n\r\ncfn-lint 0.44.6\r\n\r\n*Description of issue.*\r\n\r\ncfn-lint produces an error \"E2504: Iops shouldn't be defined for type io2 for Resource ... LaunchConfiguration/Properties/BlockDeviceMappings/0/Ebs/Iops\" when setting Iops on a io2 EBS volume.\r\n\r\nThe Iops property is required for io2 and optional for gp3. [1]\r\n\r\nCfn-lint treats the Iops property as required for io1 and forbidden for all other volume types, which is very much not correct \r\n\r\n[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-template.html#cfn-ec2-blockdev-template-iops\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass Ebs(CloudFormationLintRule):\n \"\"\"Check if Ec2 Ebs Resource Properties\"\"\"\n id = 'E2504'\n shortdesc = 'Check Ec2 Ebs Properties'\n description = 'See if Ec2 Eb2 Properties are valid'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-blockdev-template.html'\n tags = ['properties', 'ec2', 'ebs']\n\n def _checkEbs(self, cfn, ebs, path):\n matches = []\n\n if isinstance(ebs, dict):\n volume_types_obj = cfn.get_values(ebs, 'VolumeType')\n iops_obj = cfn.get_values(ebs, 'Iops')\n if volume_types_obj is not None:\n for volume_type_obj in volume_types_obj:\n volume_type = volume_type_obj.get('Value')\n if isinstance(volume_type, six.string_types):\n if volume_type == 'io1':\n if iops_obj is None:\n pathmessage = path[:] + ['VolumeType']\n message = 'VolumeType io1 requires Iops to be specified for {0}'\n matches.append(\n RuleMatch(pathmessage, message.format('/'.join(map(str, pathmessage)))))\n elif volume_type:\n if iops_obj is not None:\n pathmessage = path[:] + ['Iops']\n message = 'Iops shouldn\\'t be defined for type {0} for {1}'\n matches.append(\n RuleMatch(\n pathmessage,\n message.format(volume_type, '/'.join(map(str, pathmessage)))))\n\n return matches\n\n def match(self, cfn):\n \"\"\"Check Ec2 Ebs Resource Parameters\"\"\"\n\n matches = []\n\n results = cfn.get_resource_properties(['AWS::EC2::Instance', 'BlockDeviceMappings'])\n results.extend(cfn.get_resource_properties(\n ['AWS::AutoScaling::LaunchConfiguration', 'BlockDeviceMappings']))\n for result in results:\n path = result['Path']\n if isinstance(result['Value'], list):\n for index, properties in enumerate(result['Value']):\n virtual_name = properties.get('VirtualName')\n ebs = properties.get('Ebs')\n if virtual_name:\n # switch to regex\n if not re.match(r'^ephemeral([0-9]|[1][0-9]|[2][0-3])$', virtual_name):\n pathmessage = path[:] + [index, 'VirtualName']\n message = 'Property VirtualName should be of type ephemeral(n) for {0}'\n matches.append(\n RuleMatch(pathmessage, message.format('/'.join(map(str, pathmessage)))))\n elif ebs:\n matches.extend(self._checkEbs(cfn, ebs, path[:] + [index, 'Ebs']))\n return matches\n", "path": "src/cfnlint/rules/resources/ectwo/Ebs.py"}]} | 1,521 | 442 |
gh_patches_debug_19686 | rasdani/github-patches | git_diff | mirumee__ariadne-145 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: type object 'Callable' has no attribute '_abc_registry'
This problem occurs on Python 3.7 in AWS Lambda (but not locally).
```
[ERROR] AttributeError: type object 'Callable' has no attribute '_abc_registry'
Traceback (most recent call last):
File "/var/lang/lib/python3.7/imp.py", line 244, in load_module
return load_package(name, filename)
File "/var/lang/lib/python3.7/imp.py", line 216, in load_package
return _load(spec)
File "<frozen importlib._bootstrap>", line 696, in _load
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/var/task/itpieces/api/graphql_/__init__.py", line 3, in <module>
from ariadne import graphql_sync
File "/var/task/ariadne/__init__.py", line 1, in <module>
from .enums import EnumType
File "/var/task/ariadne/enums.py", line 3, in <module>
from typing import Any, Dict, Union
File "/var/task/typing.py", line 1356, in <module>
class Callable(extra=collections_abc.Callable, metaclass=CallableMeta):
File "/var/task/typing.py", line 1004, in __new__
self._abc_registry = extra._abc_registry
```
I noticed that Ariadne has `typing` as a dependency. Perhaps this is clashing with `typing` from the Python 3.7 standard library?
#### Suggested solution
Only install `typing` when `python_version<"3.5"` (Reference: [Python typing module](https://github.com/python/typing/issues/573))
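A sketch of how that could look in `setup.py` via a PEP 508 environment marker (illustrative only; the accompanying diff instead drops the dependency entirely):
```
# Illustrative only: restrict the backported "typing" package to older Pythons,
# so it is never installed on 3.5+ where it shadows the standard library module.
install_requires = [
    "graphql-core-next>=1.0.1",
    "starlette>=0.11.4",
    'typing>=3.6.0; python_version < "3.5"',
    "typing_extensions>=3.6.0",
]
```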
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 import os
3 from setuptools import setup
4
5 CLASSIFIERS = [
6 "Development Status :: 4 - Beta",
7 "Intended Audience :: Developers",
8 "License :: OSI Approved :: BSD License",
9 "Operating System :: OS Independent",
10 "Programming Language :: Python",
11 "Programming Language :: Python :: 3.6",
12 "Programming Language :: Python :: 3.7",
13 "Topic :: Software Development :: Libraries :: Python Modules",
14 ]
15
16 README_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
17 with open(README_PATH, "r") as f:
18 README = f.read()
19
20 setup(
21 name="ariadne",
22 author="Mirumee Software",
23 author_email="[email protected]",
24 description="Ariadne is a Python library for implementing GraphQL servers.",
25 long_description=README,
26 long_description_content_type="text/markdown",
27 license="BSD",
28 version="0.3.0",
29 url="https://github.com/mirumee/ariadne",
30 packages=["ariadne"],
31 install_requires=[
32 "graphql-core-next>=1.0.1",
33 "starlette>=0.11.4",
34 "typing>=3.6.0",
35 "typing_extensions>=3.6.0",
36 ],
37 classifiers=CLASSIFIERS,
38 platforms=["any"],
39 )
40
[end of setup.py]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/master/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 # import os
16 # import sys
17 # sys.path.insert(0, os.path.abspath('.'))
18
19 from datetime import date
20
21 year = date.today().year
22
23
24 # -- Project information -----------------------------------------------------
25
26 project = "Ariadne"
27 copyright = "%s, Mirumee Software" % year
28 author = "Mirumee Software"
29
30 # The short X.Y version
31 version = "3"
32 # The full version, including alpha/beta/rc tags
33 release = "0.3"
34
35
36 # -- General configuration ---------------------------------------------------
37
38 # If your documentation needs a minimal Sphinx version, state it here.
39 #
40 # needs_sphinx = '1.0'
41
42 # Add any Sphinx extension module names here, as strings. They can be
43 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
44 # ones.
45 extensions = []
46
47 # Add any paths that contain templates here, relative to this directory.
48 templates_path = ["_templates"]
49
50 # The suffix(es) of source filenames.
51 # You can specify multiple suffix as a list of string:
52 #
53 # source_suffix = ['.rst', '.md']
54 source_suffix = [".rst", ".md"]
55
56 # The master toctree document.
57 master_doc = "index"
58
59 # The language for content autogenerated by Sphinx. Refer to documentation
60 # for a list of supported languages.
61 #
62 # This is also used if you do content translation via gettext catalogs.
63 # Usually you set "language" from the command line for these cases.
64 language = None
65
66 # List of patterns, relative to source directory, that match files and
67 # directories to ignore when looking for source files.
68 # This pattern also affects html_static_path and html_extra_path.
69 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
70
71 # The name of the Pygments (syntax highlighting) style to use.
72 pygments_style = None
73
74
75 # -- Options for HTML output -------------------------------------------------
76
77 # The theme to use for HTML and HTML Help pages. See the documentation for
78 # a list of builtin themes.
79 #
80 html_theme = "alabaster"
81
82 # Theme options are theme-specific and customize the look and feel of a theme
83 # further. For a list of options available for each theme, see the
84 # documentation.
85 #
86 html_theme_options = {
87 "logo": "logo-vertical.png",
88 "github_user": "mirumee",
89 "github_repo": "ariadne",
90 }
91
92 # Add any paths that contain custom static files (such as style sheets) here,
93 # relative to this directory. They are copied after the builtin static files,
94 # so a file named "default.css" will overwrite the builtin "default.css".
95 html_static_path = ["_static"]
96
97 # Custom sidebar templates, must be a dictionary that maps document names
98 # to template names.
99 #
100 # The default sidebars (for documents that don't match any pattern) are
101 # defined by theme itself. Builtin themes are using these templates by
102 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
103 # 'searchbox.html']``.
104 #
105 # html_sidebars = {}
106
107
108 # -- Options for HTMLHelp output ---------------------------------------------
109
110 # Output file base name for HTML help builder.
111 htmlhelp_basename = "Ariadnedoc"
112
113
114 # -- Options for LaTeX output ------------------------------------------------
115
116 latex_elements = {
117 # The paper size ('letterpaper' or 'a4paper').
118 #
119 # 'papersize': 'letterpaper',
120 # The font size ('10pt', '11pt' or '12pt').
121 #
122 # 'pointsize': '10pt',
123 # Additional stuff for the LaTeX preamble.
124 #
125 # 'preamble': '',
126 # Latex figure (float) alignment
127 #
128 # 'figure_align': 'htbp',
129 }
130
131 # Grouping the document tree into LaTeX files. List of tuples
132 # (source start file, target name, title,
133 # author, documentclass [howto, manual, or own class]).
134 latex_documents = [
135 (master_doc, "Ariadne.tex", "Ariadne Documentation", "Mirumee Software", "manual")
136 ]
137
138
139 # -- Options for manual page output ------------------------------------------
140
141 # One entry per manual page. List of tuples
142 # (source start file, name, description, authors, manual section).
143 man_pages = [(master_doc, "ariadne", "Ariadne Documentation", [author], 1)]
144
145
146 # -- Options for Texinfo output ----------------------------------------------
147
148 # Grouping the document tree into Texinfo files. List of tuples
149 # (source start file, target name, title, author,
150 # dir menu entry, description, category)
151 texinfo_documents = [
152 (
153 master_doc,
154 "Ariadne",
155 "Ariadne",
156 author,
157 "Ariadne",
158 "Ariadne is a Python library for implementing GraphQL servers, inspired by Apollo Server and built with GraphQL-core-next.",
159 "Miscellaneous",
160 )
161 ]
162
163
164 # -- Options for Epub output -------------------------------------------------
165
166 # Bibliographic Dublin Core info.
167 epub_title = project
168
169 # The unique identifier of the text. This can be a ISBN number
170 # or the project homepage.
171 #
172 # epub_identifier = ''
173
174 # A unique identification for the text.
175 #
176 # epub_uid = ''
177
178 # A list of files that should not be packed into the epub file.
179 epub_exclude_files = ["search.html"]
180
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,9 +28,9 @@
author = "Mirumee Software"
# The short X.Y version
-version = "3"
+version = "4"
# The full version, including alpha/beta/rc tags
-release = "0.3"
+release = "0.4"
# -- General configuration ---------------------------------------------------
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,13 +25,12 @@
long_description=README,
long_description_content_type="text/markdown",
license="BSD",
- version="0.3.0",
+ version="0.4.0",
url="https://github.com/mirumee/ariadne",
packages=["ariadne"],
install_requires=[
"graphql-core-next>=1.0.1",
"starlette>=0.11.4",
- "typing>=3.6.0",
"typing_extensions>=3.6.0",
],
classifiers=CLASSIFIERS,
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -28,9 +28,9 @@\n author = \"Mirumee Software\"\n \n # The short X.Y version\n-version = \"3\"\n+version = \"4\"\n # The full version, including alpha/beta/rc tags\n-release = \"0.3\"\n+release = \"0.4\"\n \n \n # -- General configuration ---------------------------------------------------\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,13 +25,12 @@\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n- version=\"0.3.0\",\n+ version=\"0.4.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n install_requires=[\n \"graphql-core-next>=1.0.1\",\n \"starlette>=0.11.4\",\n- \"typing>=3.6.0\",\n \"typing_extensions>=3.6.0\",\n ],\n classifiers=CLASSIFIERS,\n", "issue": "AttributeError: type object 'Callable' has no attribute '_abc_registry'\nThis problem occurs on Python 3.7 in AWS Lambda (but not locally).\r\n\r\n```\r\n[ERROR] AttributeError: type object 'Callable' has no attribute '_abc_registry'\r\nTraceback (most recent call last):\r\n File \"/var/lang/lib/python3.7/imp.py\", line 244, in load_module\r\n return load_package(name, filename)\r\n File \"/var/lang/lib/python3.7/imp.py\", line 216, in load_package\r\n return _load(spec)\r\n File \"<frozen importlib._bootstrap>\", line 696, in _load\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/var/task/itpieces/api/graphql_/__init__.py\", line 3, in <module>\r\n from ariadne import graphql_sync\r\n File \"/var/task/ariadne/__init__.py\", line 1, in <module>\r\n from .enums import EnumType\r\n File \"/var/task/ariadne/enums.py\", line 3, in <module>\r\n from typing import Any, Dict, Union\r\n File \"/var/task/typing.py\", line 1356, in <module>\r\n class Callable(extra=collections_abc.Callable, metaclass=CallableMeta):\r\n File \"/var/task/typing.py\", line 1004, in __new__\r\n self._abc_registry = extra._abc_registry\r\n```\r\n\r\nI noticed that Ariadne has `typing` as a dependency. Perhaps this is clashing with `typing` from the Python 3.7 standard library?\r\n\r\n#### Suggested solution\r\nOnly install `typing` when `python_version<\"3.5\"` (Reference: [Python typing module](https://github.com/python/typing/issues/573))\n", "before_files": [{"content": "#! 
/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nCLASSIFIERS = [\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nREADME_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"README.md\")\nwith open(README_PATH, \"r\") as f:\n README = f.read()\n\nsetup(\n name=\"ariadne\",\n author=\"Mirumee Software\",\n author_email=\"[email protected]\",\n description=\"Ariadne is a Python library for implementing GraphQL servers.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n license=\"BSD\",\n version=\"0.3.0\",\n url=\"https://github.com/mirumee/ariadne\",\n packages=[\"ariadne\"],\n install_requires=[\n \"graphql-core-next>=1.0.1\",\n \"starlette>=0.11.4\",\n \"typing>=3.6.0\",\n \"typing_extensions>=3.6.0\",\n ],\n classifiers=CLASSIFIERS,\n platforms=[\"any\"],\n)\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nfrom datetime import date\n\nyear = date.today().year\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Ariadne\"\ncopyright = \"%s, Mirumee Software\" % year\nauthor = \"Mirumee Software\"\n\n# The short X.Y version\nversion = \"3\"\n# The full version, including alpha/beta/rc tags\nrelease = \"0.3\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = []\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"logo\": \"logo-vertical.png\",\n \"github_user\": \"mirumee\",\n \"github_repo\": \"ariadne\",\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Ariadnedoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"Ariadne.tex\", \"Ariadne Documentation\", \"Mirumee Software\", \"manual\")\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"ariadne\", \"Ariadne Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Ariadne\",\n \"Ariadne\",\n author,\n \"Ariadne\",\n \"Ariadne is a Python library for implementing GraphQL servers, inspired by Apollo Server and built with GraphQL-core-next.\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n", "path": "docs/conf.py"}]} | 3,054 | 254 |
gh_patches_debug_13880 | rasdani/github-patches | git_diff | kubeflow__pipelines-1154 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Backend Docker build fails with python error in resnet-train-pipeline.py
Ran on the master branch (`086d4763d9f163acdeb423bc2bc49ab470442a92`) with the `--no-cache` arg and got the following error:
```
Step 20/31 : RUN find . -maxdepth 2 -name '*.py' -type f | while read pipeline; do dsl-compile --py "$pipeline" --output "$pipeline.tar.gz"; done
---> Running in 3205bef0c5cb
Traceback (most recent call last):
File "/usr/local/bin/dsl-compile", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python3.5/site-packages/kfp/compiler/main.py", line 103, in main
compile_pyfile(args.py, args.function, args.output, not args.disable_type_check)
File "/usr/local/lib/python3.5/site-packages/kfp/compiler/main.py", line 92, in compile_pyfile
_compile_pipeline_function(function_name, output_path, type_check)
File "/usr/local/lib/python3.5/site-packages/kfp/compiler/main.py", line 72, in _compile_pipeline_function
kfp.compiler.Compiler().compile(pipeline_func, output_path, type_check)
File "/usr/local/lib/python3.5/site-packages/kfp/compiler/compiler.py", line 644, in compile
workflow = self._compile(pipeline_func)
File "/usr/local/lib/python3.5/site-packages/kfp/compiler/compiler.py", line 596, in _compile
pipeline_func(*args_list)
File "./resnet-cmle/resnet-train-pipeline.py", line 127, in resnet_train
export_output = os.path.join(str(train.outputs['job-dir']), 'export')
KeyError: 'job-dir'
The command '/bin/sh -c find . -maxdepth 2 -name '*.py' -type f | while read pipeline; do dsl-compile --py "$pipeline" --output "$pipeline.tar.gz"; done' returned a non-zero code: 1
```
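For context, the accompanying diff changes the lookup key from `job-dir` to `job_dir`, which suggests the task's outputs are exposed under sanitized (underscore) names. A hypothetical illustration of why the original key raises `KeyError`:
```
# Hypothetical illustration, not KFP source: output names containing dashes
# end up keyed by their sanitized form, so 'job-dir' must be read as 'job_dir'.
def sanitize_output_name(name):
    return name.replace('-', '_')

outputs = {sanitize_output_name('job-dir'): 'gs://bucket/model'}
assert 'job_dir' in outputs
assert 'job-dir' not in outputs   # this lookup is what fails in the pipeline
```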
</issue>
<code>
[start of samples/resnet-cmle/resnet-train-pipeline.py]
1 #!/usr/bin/env python3
2 # Copyright 2019 Google LLC
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 import kfp.dsl as dsl
18 import kfp.gcp as gcp
19 import kfp.components as comp
20 import datetime
21 import json
22 import os
23
24 dataflow_python_op = comp.load_component_from_url(
25 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataflow/launch_python/component.yaml')
26 cloudml_train_op = comp.load_component_from_url(
27 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/ml_engine/train/component.yaml')
28 cloudml_deploy_op = comp.load_component_from_url(
29 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/ml_engine/deploy/component.yaml')
30
31
32 def resnet_preprocess_op(project_id: 'GcpProject', output: 'GcsUri', staging_dir: 'GcsUri', train_csv: 'GcsUri[text/csv]',
33 validation_csv: 'GcsUri[text/csv]', labels, train_size: 'Integer', validation_size: 'Integer',
34 step_name='preprocess'):
35 return dataflow_python_op(
36 python_file_path='gs://ml-pipeline-playground/samples/ml_engine/resnet-cmle/preprocess/preprocess.py',
37 project_id=project_id,
38 requirements_file_path='gs://ml-pipeline-playground/samples/ml_engine/resnet-cmle/preprocess/requirements.txt',
39 staging_dir=staging_dir,
40 args=json.dumps([
41 '--train_csv', str(train_csv),
42 '--validation_csv', str(validation_csv),
43 '--labels', str(labels),
44 '--output_dir', str(output),
45 '--train_size', str(train_size),
46 '--validation_size', str(validation_size)
47 ])
48 )
49
50
51 def resnet_train_op(project_id, data_dir, output: 'GcsUri', region: 'GcpRegion', depth: int, train_batch_size: int,
52 eval_batch_size: int, steps_per_eval: int, train_steps: int, num_train_images: int,
53 num_eval_images: int, num_label_classes: int, tf_version, step_name='train'):
54 return cloudml_train_op(
55 project_id=project_id,
56 region='us-central1',
57 python_module='trainer.resnet_main',
58 package_uris=json.dumps(
59 ['gs://ml-pipeline-playground/samples/ml_engine/resnet-cmle/trainer/trainer-1.0.tar.gz']),
60 job_dir=output,
61 args=json.dumps([
62 '--data_dir', str(data_dir),
63 '--model_dir', str(output),
64 '--use_tpu', 'True',
65 '--resnet_depth', str(depth),
66 '--train_batch_size', str(train_batch_size),
67 '--eval_batch_size', str(eval_batch_size),
68 '--steps_per_eval', str(steps_per_eval),
69 '--train_steps', str(train_steps),
70 '--num_train_images', str(num_train_images),
71 '--num_eval_images', str(num_eval_images),
72 '--num_label_classes', str(num_label_classes),
73 '--export_dir', '{}/export'.format(str(output))
74 ]),
75 runtime_version=tf_version,
76 training_input=json.dumps({
77 'scaleTier': 'BASIC_TPU'
78 })
79 )
80
81
82 def resnet_deploy_op(model_dir, model, version, project_id: 'GcpProject', region: 'GcpRegion',
83 tf_version, step_name='deploy'):
84 # TODO(hongyes): add region to model payload.
85 return cloudml_deploy_op(
86 model_uri=model_dir,
87 project_id=project_id,
88 model_id=model,
89 version_id=version,
90 runtime_version=tf_version,
91 replace_existing_version='True'
92 )
93
94
95 @dsl.pipeline(
96 name='ResNet_Train_Pipeline',
97 description='Demonstrate the ResNet50 predict.'
98 )
99 def resnet_train(
100 project_id,
101 output,
102 region='us-central1',
103 model='bolts',
104 version='beta1',
105 tf_version='1.12',
106 train_csv='gs://bolts_image_dataset/bolt_images_train.csv',
107 validation_csv='gs://bolts_image_dataset/bolt_images_validate.csv',
108 labels='gs://bolts_image_dataset/labels.txt',
109 depth=50,
110 train_batch_size=1024,
111 eval_batch_size=1024,
112 steps_per_eval=250,
113 train_steps=10000,
114 num_train_images=218593,
115 num_eval_images=54648,
116 num_label_classes=10):
117 output_dir = os.path.join(str(output), '{{workflow.name}}')
118 preprocess_staging = os.path.join(output_dir, 'staging')
119 preprocess_output = os.path.join(output_dir, 'preprocessed_output')
120 train_output = os.path.join(output_dir, 'model')
121 preprocess = resnet_preprocess_op(project_id, preprocess_output, preprocess_staging, train_csv,
122 validation_csv, labels, train_batch_size, eval_batch_size).apply(gcp.use_gcp_secret())
123 train = resnet_train_op(project_id, preprocess_output, train_output, region, depth, train_batch_size,
124 eval_batch_size, steps_per_eval, train_steps, num_train_images, num_eval_images,
125 num_label_classes, tf_version).apply(gcp.use_gcp_secret())
126 train.after(preprocess)
127 export_output = os.path.join(str(train.outputs['job-dir']), 'export')
128 deploy = resnet_deploy_op(export_output, model, version, project_id, region,
129 tf_version).apply(gcp.use_gcp_secret())
130
131
132 if __name__ == '__main__':
133 import kfp.compiler as compiler
134 compiler.Compiler().compile(resnet_train, __file__ + '.zip')
135
[end of samples/resnet-cmle/resnet-train-pipeline.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/samples/resnet-cmle/resnet-train-pipeline.py b/samples/resnet-cmle/resnet-train-pipeline.py
--- a/samples/resnet-cmle/resnet-train-pipeline.py
+++ b/samples/resnet-cmle/resnet-train-pipeline.py
@@ -124,7 +124,7 @@
eval_batch_size, steps_per_eval, train_steps, num_train_images, num_eval_images,
num_label_classes, tf_version).apply(gcp.use_gcp_secret())
train.after(preprocess)
- export_output = os.path.join(str(train.outputs['job-dir']), 'export')
+ export_output = os.path.join(str(train.outputs['job_dir']), 'export')
deploy = resnet_deploy_op(export_output, model, version, project_id, region,
tf_version).apply(gcp.use_gcp_secret())
| {"golden_diff": "diff --git a/samples/resnet-cmle/resnet-train-pipeline.py b/samples/resnet-cmle/resnet-train-pipeline.py\n--- a/samples/resnet-cmle/resnet-train-pipeline.py\n+++ b/samples/resnet-cmle/resnet-train-pipeline.py\n@@ -124,7 +124,7 @@\n eval_batch_size, steps_per_eval, train_steps, num_train_images, num_eval_images,\n num_label_classes, tf_version).apply(gcp.use_gcp_secret())\n train.after(preprocess)\n- export_output = os.path.join(str(train.outputs['job-dir']), 'export')\n+ export_output = os.path.join(str(train.outputs['job_dir']), 'export')\n deploy = resnet_deploy_op(export_output, model, version, project_id, region,\n tf_version).apply(gcp.use_gcp_secret())\n", "issue": "Backend Docker build fails with python error in resnet-train-pipeline.py\nRan on master branch (`086d4763d9f163acdeb423bc2bc49ab470442a92`) with `--no-cache` arg and got the following the error:\r\n```\r\nStep 20/31 : RUN find . -maxdepth 2 -name '*.py' -type f | while read pipeline; do dsl-compile --py \"$pipeline\" --output \"$pipeline.tar.gz\"; done\r\n ---> Running in 3205bef0c5cb\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/dsl-compile\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.5/site-packages/kfp/compiler/main.py\", line 103, in main\r\n compile_pyfile(args.py, args.function, args.output, not args.disable_type_check)\r\n File \"/usr/local/lib/python3.5/site-packages/kfp/compiler/main.py\", line 92, in compile_pyfile\r\n _compile_pipeline_function(function_name, output_path, type_check)\r\n File \"/usr/local/lib/python3.5/site-packages/kfp/compiler/main.py\", line 72, in _compile_pipeline_function\r\n kfp.compiler.Compiler().compile(pipeline_func, output_path, type_check)\r\n File \"/usr/local/lib/python3.5/site-packages/kfp/compiler/compiler.py\", line 644, in compile\r\n workflow = self._compile(pipeline_func)\r\n File \"/usr/local/lib/python3.5/site-packages/kfp/compiler/compiler.py\", line 596, in _compile\r\n pipeline_func(*args_list)\r\n File \"./resnet-cmle/resnet-train-pipeline.py\", line 127, in resnet_train\r\n export_output = os.path.join(str(train.outputs['job-dir']), 'export')\r\nKeyError: 'job-dir'\r\nThe command '/bin/sh -c find . 
-maxdepth 2 -name '*.py' -type f | while read pipeline; do dsl-compile --py \"$pipeline\" --output \"$pipeline.tar.gz\"; done' returned a non-zero code: 1\n", "before_files": [{"content": "#!/usr/bin/env python3\n# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport kfp.components as comp\nimport datetime\nimport json\nimport os\n\ndataflow_python_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/dataflow/launch_python/component.yaml')\ncloudml_train_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/ml_engine/train/component.yaml')\ncloudml_deploy_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/ml_engine/deploy/component.yaml')\n\n\ndef resnet_preprocess_op(project_id: 'GcpProject', output: 'GcsUri', staging_dir: 'GcsUri', train_csv: 'GcsUri[text/csv]',\n validation_csv: 'GcsUri[text/csv]', labels, train_size: 'Integer', validation_size: 'Integer',\n step_name='preprocess'):\n return dataflow_python_op(\n python_file_path='gs://ml-pipeline-playground/samples/ml_engine/resnet-cmle/preprocess/preprocess.py',\n project_id=project_id,\n requirements_file_path='gs://ml-pipeline-playground/samples/ml_engine/resnet-cmle/preprocess/requirements.txt',\n staging_dir=staging_dir,\n args=json.dumps([\n '--train_csv', str(train_csv),\n '--validation_csv', str(validation_csv),\n '--labels', str(labels),\n '--output_dir', str(output),\n '--train_size', str(train_size),\n '--validation_size', str(validation_size)\n ])\n )\n\n\ndef resnet_train_op(project_id, data_dir, output: 'GcsUri', region: 'GcpRegion', depth: int, train_batch_size: int,\n eval_batch_size: int, steps_per_eval: int, train_steps: int, num_train_images: int,\n num_eval_images: int, num_label_classes: int, tf_version, step_name='train'):\n return cloudml_train_op(\n project_id=project_id,\n region='us-central1',\n python_module='trainer.resnet_main',\n package_uris=json.dumps(\n ['gs://ml-pipeline-playground/samples/ml_engine/resnet-cmle/trainer/trainer-1.0.tar.gz']),\n job_dir=output,\n args=json.dumps([\n '--data_dir', str(data_dir),\n '--model_dir', str(output),\n '--use_tpu', 'True',\n '--resnet_depth', str(depth),\n '--train_batch_size', str(train_batch_size),\n '--eval_batch_size', str(eval_batch_size),\n '--steps_per_eval', str(steps_per_eval),\n '--train_steps', str(train_steps),\n '--num_train_images', str(num_train_images),\n '--num_eval_images', str(num_eval_images),\n '--num_label_classes', str(num_label_classes),\n '--export_dir', '{}/export'.format(str(output))\n ]),\n runtime_version=tf_version,\n training_input=json.dumps({\n 'scaleTier': 'BASIC_TPU'\n })\n )\n\n\ndef resnet_deploy_op(model_dir, model, version, project_id: 'GcpProject', region: 
'GcpRegion',\n tf_version, step_name='deploy'):\n # TODO(hongyes): add region to model payload.\n return cloudml_deploy_op(\n model_uri=model_dir,\n project_id=project_id,\n model_id=model,\n version_id=version,\n runtime_version=tf_version,\n replace_existing_version='True'\n )\n\n\[email protected](\n name='ResNet_Train_Pipeline',\n description='Demonstrate the ResNet50 predict.'\n)\ndef resnet_train(\n project_id,\n output,\n region='us-central1',\n model='bolts',\n version='beta1',\n tf_version='1.12',\n train_csv='gs://bolts_image_dataset/bolt_images_train.csv',\n validation_csv='gs://bolts_image_dataset/bolt_images_validate.csv',\n labels='gs://bolts_image_dataset/labels.txt',\n depth=50,\n train_batch_size=1024,\n eval_batch_size=1024,\n steps_per_eval=250,\n train_steps=10000,\n num_train_images=218593,\n num_eval_images=54648,\n num_label_classes=10):\n output_dir = os.path.join(str(output), '{{workflow.name}}')\n preprocess_staging = os.path.join(output_dir, 'staging')\n preprocess_output = os.path.join(output_dir, 'preprocessed_output')\n train_output = os.path.join(output_dir, 'model')\n preprocess = resnet_preprocess_op(project_id, preprocess_output, preprocess_staging, train_csv,\n validation_csv, labels, train_batch_size, eval_batch_size).apply(gcp.use_gcp_secret())\n train = resnet_train_op(project_id, preprocess_output, train_output, region, depth, train_batch_size,\n eval_batch_size, steps_per_eval, train_steps, num_train_images, num_eval_images,\n num_label_classes, tf_version).apply(gcp.use_gcp_secret())\n train.after(preprocess)\n export_output = os.path.join(str(train.outputs['job-dir']), 'export')\n deploy = resnet_deploy_op(export_output, model, version, project_id, region,\n tf_version).apply(gcp.use_gcp_secret())\n\n\nif __name__ == '__main__':\n import kfp.compiler as compiler\n compiler.Compiler().compile(resnet_train, __file__ + '.zip')\n", "path": "samples/resnet-cmle/resnet-train-pipeline.py"}]} | 2,766 | 192 |
gh_patches_debug_24419 | rasdani/github-patches | git_diff | scrapy__scrapy-1101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fetch errors on some https sites
$ scrapy fetch 'https://flixbus.de'
```
2014-12-12 10:01:50+0100 [scrapy] INFO: Scrapy 0.24.4 started (bot: scrapybot)
2014-12-12 10:01:50+0100 [scrapy] INFO: Optional features available: ssl, http11
2014-12-12 10:01:50+0100 [scrapy] INFO: Overridden settings: {}
2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled item pipelines:
2014-12-12 10:01:50+0100 [default] INFO: Spider opened
2014-12-12 10:01:50+0100 [default] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-12-12 10:01:50+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-12-12 10:01:50+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-12-12 10:01:50+0100 [default] DEBUG: Retrying <GET https://flixbus.de> (failed 1 times): [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]
2014-12-12 10:01:50+0100 [default] DEBUG: Retrying <GET https://flixbus.de> (failed 2 times): [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]
2014-12-12 10:01:50+0100 [default] DEBUG: Gave up retrying <GET https://flixbus.de> (failed 3 times): [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]
2014-12-12 10:01:50+0100 [default] ERROR: Error downloading <GET https://flixbus.de>: [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]
2014-12-12 10:01:50+0100 [default] INFO: Closing spider (finished)
2014-12-12 10:01:50+0100 [default] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 3,
'downloader/request_bytes': 627,
'downloader/request_count': 3,
'downloader/request_method_count/GET': 3,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2014, 12, 12, 9, 1, 50, 463722),
'log_count/DEBUG': 5,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2014, 12, 12, 9, 1, 50, 288873)}
2014-12-12 10:01:50+0100 [default] INFO: Spider closed (finished)
```
$ pip freeze
```
Scrapy==0.24.4
Twisted==14.0.2
argparse==1.2.1
cffi==0.8.6
characteristic==14.2.0
cryptography==0.6.1
cssselect==0.9.1
distribute==0.7.3
lxml==3.4.1
pyOpenSSL==0.14
pyasn1==0.1.7
pyasn1-modules==0.0.5
pycparser==2.10
queuelib==1.2.2
service-identity==14.0.0
six==1.8.0
w3lib==1.10.0
wsgiref==0.1.2
zope.interface==4.1.1
```
Same error on scrapinghub ;)
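For context, the fix further below wires TLS SNI into the client context factory; a minimal sketch of the same idea, assuming Twisted >= 14.0 where `ClientTLSOptions` is available:
```
# Sketch mirroring the fix below; assumes Twisted >= 14.0.
from OpenSSL import SSL
from twisted.internet.ssl import ClientContextFactory
from twisted.internet._sslverify import ClientTLSOptions

class SNIClientContextFactory(ClientContextFactory):
    def getContext(self, hostname=None, port=None):
        ctx = ClientContextFactory.getContext(self)
        ctx.set_options(SSL.OP_ALL)  # keep the permissive SSL-bug workarounds
        if hostname:
            ClientTLSOptions(hostname, ctx)  # attach SNI for the target host
        return ctx
```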
</issue>
<code>
[start of scrapy/core/downloader/contextfactory.py]
1 from OpenSSL import SSL
2 from twisted.internet.ssl import ClientContextFactory
3
4
5 class ScrapyClientContextFactory(ClientContextFactory):
6 "A SSL context factory which is more permissive against SSL bugs."
7 # see https://github.com/scrapy/scrapy/issues/82
8 # and https://github.com/scrapy/scrapy/issues/26
9
10 def __init__(self):
11 # see this issue on why we use TLSv1_METHOD by default
12 # https://github.com/scrapy/scrapy/issues/194
13 self.method = SSL.TLSv1_METHOD
14
15 def getContext(self, hostname=None, port=None):
16 ctx = ClientContextFactory.getContext(self)
17 # Enable all workarounds to SSL bugs as documented by
18 # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html
19 ctx.set_options(SSL.OP_ALL)
20 return ctx
21
[end of scrapy/core/downloader/contextfactory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py
--- a/scrapy/core/downloader/contextfactory.py
+++ b/scrapy/core/downloader/contextfactory.py
@@ -1,11 +1,17 @@
from OpenSSL import SSL
from twisted.internet.ssl import ClientContextFactory
+try:
+ # available since twisted 14.0
+ from twisted.internet._sslverify import ClientTLSOptions
+except ImportError:
+ ClientTLSOptions = None
class ScrapyClientContextFactory(ClientContextFactory):
"A SSL context factory which is more permissive against SSL bugs."
# see https://github.com/scrapy/scrapy/issues/82
# and https://github.com/scrapy/scrapy/issues/26
+ # and https://github.com/scrapy/scrapy/issues/981
def __init__(self):
# see this issue on why we use TLSv1_METHOD by default
@@ -17,4 +23,6 @@
# Enable all workarounds to SSL bugs as documented by
# http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html
ctx.set_options(SSL.OP_ALL)
+ if hostname and ClientTLSOptions is not None: # workaround for TLS SNI
+ ClientTLSOptions(hostname, ctx)
return ctx
| {"golden_diff": "diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py\n--- a/scrapy/core/downloader/contextfactory.py\n+++ b/scrapy/core/downloader/contextfactory.py\n@@ -1,11 +1,17 @@\n from OpenSSL import SSL\n from twisted.internet.ssl import ClientContextFactory\n+try:\n+ # available since twisted 14.0\n+ from twisted.internet._sslverify import ClientTLSOptions\n+except ImportError:\n+ ClientTLSOptions = None\n \n \n class ScrapyClientContextFactory(ClientContextFactory):\n \"A SSL context factory which is more permissive against SSL bugs.\"\n # see https://github.com/scrapy/scrapy/issues/82\n # and https://github.com/scrapy/scrapy/issues/26\n+ # and https://github.com/scrapy/scrapy/issues/981\n \n def __init__(self):\n # see this issue on why we use TLSv1_METHOD by default\n@@ -17,4 +23,6 @@\n # Enable all workarounds to SSL bugs as documented by\n # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html\n ctx.set_options(SSL.OP_ALL)\n+ if hostname and ClientTLSOptions is not None: # workaround for TLS SNI\n+ ClientTLSOptions(hostname, ctx)\n return ctx\n", "issue": "fetch errors on some https sites\n$ scrapy fetch 'https://flixbus.de'\n\n```\n2014-12-12 10:01:50+0100 [scrapy] INFO: Scrapy 0.24.4 started (bot: scrapybot)\n2014-12-12 10:01:50+0100 [scrapy] INFO: Optional features available: ssl, http11\n2014-12-12 10:01:50+0100 [scrapy] INFO: Overridden settings: {}\n2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState\n2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats\n2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware\n2014-12-12 10:01:50+0100 [scrapy] INFO: Enabled item pipelines: \n2014-12-12 10:01:50+0100 [default] INFO: Spider opened\n2014-12-12 10:01:50+0100 [default] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)\n2014-12-12 10:01:50+0100 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023\n2014-12-12 10:01:50+0100 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080\n2014-12-12 10:01:50+0100 [default] DEBUG: Retrying <GET https://flixbus.de> (failed 1 times): [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]\n2014-12-12 10:01:50+0100 [default] DEBUG: Retrying <GET https://flixbus.de> (failed 2 times): [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]\n2014-12-12 10:01:50+0100 [default] DEBUG: Gave up retrying <GET https://flixbus.de> (failed 3 times): [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]\n2014-12-12 10:01:50+0100 [default] ERROR: Error downloading <GET https://flixbus.de>: [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]\n2014-12-12 10:01:50+0100 [default] INFO: Closing spider (finished)\n2014-12-12 10:01:50+0100 [default] INFO: Dumping Scrapy stats:\n {'downloader/exception_count': 3,\n 'downloader/exception_type_count/twisted.web._newclient.ResponseNeverReceived': 3,\n 'downloader/request_bytes': 627,\n 'downloader/request_count': 3,\n 'downloader/request_method_count/GET': 3,\n 'finish_reason': 'finished',\n 'finish_time': datetime.datetime(2014, 12, 12, 9, 1, 50, 463722),\n 
'log_count/DEBUG': 5,\n 'log_count/ERROR': 1,\n 'log_count/INFO': 7,\n 'scheduler/dequeued': 3,\n 'scheduler/dequeued/memory': 3,\n 'scheduler/enqueued': 3,\n 'scheduler/enqueued/memory': 3,\n 'start_time': datetime.datetime(2014, 12, 12, 9, 1, 50, 288873)}\n2014-12-12 10:01:50+0100 [default] INFO: Spider closed (finished)\n```\n\n$ pip freeze\n\n```\nScrapy==0.24.4\nTwisted==14.0.2\nargparse==1.2.1\ncffi==0.8.6\ncharacteristic==14.2.0\ncryptography==0.6.1\ncssselect==0.9.1\ndistribute==0.7.3\nlxml==3.4.1\npyOpenSSL==0.14\npyasn1==0.1.7\npyasn1-modules==0.0.5\npycparser==2.10\nqueuelib==1.2.2\nservice-identity==14.0.0\nsix==1.8.0\nw3lib==1.10.0\nwsgiref==0.1.2\nzope.interface==4.1.1\n```\n\nSame error on scrapinghub ;)\n\n", "before_files": [{"content": "from OpenSSL import SSL\nfrom twisted.internet.ssl import ClientContextFactory\n\n\nclass ScrapyClientContextFactory(ClientContextFactory):\n \"A SSL context factory which is more permissive against SSL bugs.\"\n # see https://github.com/scrapy/scrapy/issues/82\n # and https://github.com/scrapy/scrapy/issues/26\n\n def __init__(self):\n # see this issue on why we use TLSv1_METHOD by default\n # https://github.com/scrapy/scrapy/issues/194\n self.method = SSL.TLSv1_METHOD\n\n def getContext(self, hostname=None, port=None):\n ctx = ClientContextFactory.getContext(self)\n # Enable all workarounds to SSL bugs as documented by\n # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html\n ctx.set_options(SSL.OP_ALL)\n return ctx\n", "path": "scrapy/core/downloader/contextfactory.py"}]} | 2,086 | 294 |
gh_patches_debug_11278 | rasdani/github-patches | git_diff | mindsdb__mindsdb-1893 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tag Docker versions
Currently, GitHub Action will deploy a new version to dockerhub as a `latest`. We need to push the tagged version per MindsDB version e.g 2.62.5 => mindsdb/mindsdb:2.62.5
</issue>
<code>
[start of docker/build.py]
1 import os
2 import sys
3 import requests
4 import subprocess
5
6 installer_version_url = 'https://public.api.mindsdb.com/installer/@@beta_or_release/docker___success___None'
7
8 api_response = requests.get(
9 installer_version_url.replace('@@beta_or_release', sys.argv[1]))
10
11 if api_response.status_code != 200:
12 exit(1)
13
14 installer_version = api_response.text
15
16 os.system('mkdir -p dist')
17
18 if sys.argv[1] == 'release':
19 container_name = 'mindsdb'
20 dockerfile_template = 'dockerfile_release.template'
21
22 elif sys.argv[1] == 'beta':
23 container_name = 'mindsdb_beta'
24 dockerfile_template = 'dockerfile_beta.template'
25
26 with open(dockerfile_template, 'r') as fp:
27 content = fp.read()
28 content = content.replace('@@beta_or_release', sys.argv[1])
29 content = content.replace('@@installer_version', installer_version)
30
31 with open('dist/Dockerfile', 'w') as fp:
32 fp.write(content)
33
34 command = (f"""
35 cd dist &&
36 docker build -t {container_name} . &&
37 docker tag {container_name} mindsdb/{container_name}:latest &&
38 docker tag {container_name} mindsdb/{container_name}:{installer_version} &&
39 docker push mindsdb/{container_name};
40 cd ..
41 """)
42
43 subprocess.run(command, shell=True, check=True)
[end of docker/build.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docker/build.py b/docker/build.py
--- a/docker/build.py
+++ b/docker/build.py
@@ -31,13 +31,11 @@
with open('dist/Dockerfile', 'w') as fp:
fp.write(content)
+print(installer_version)
command = (f"""
cd dist &&
- docker build -t {container_name} . &&
- docker tag {container_name} mindsdb/{container_name}:latest &&
- docker tag {container_name} mindsdb/{container_name}:{installer_version} &&
- docker push mindsdb/{container_name};
- cd ..
+ docker build -t mindsdb/{container_name}:latest -t mindsdb/{container_name}:{installer_version} . &&
+ docker push mindsdb/{container_name} --all-tags
""")
subprocess.run(command, shell=True, check=True)
\ No newline at end of file
| {"golden_diff": "diff --git a/docker/build.py b/docker/build.py\n--- a/docker/build.py\n+++ b/docker/build.py\n@@ -31,13 +31,11 @@\n with open('dist/Dockerfile', 'w') as fp:\n fp.write(content)\n \n+print(installer_version)\n command = (f\"\"\"\n cd dist &&\n- docker build -t {container_name} . &&\n- docker tag {container_name} mindsdb/{container_name}:latest &&\n- docker tag {container_name} mindsdb/{container_name}:{installer_version} &&\n- docker push mindsdb/{container_name};\n- cd ..\n+ docker build -t mindsdb/{container_name}:latest -t mindsdb/{container_name}:{installer_version} . &&\n+ docker push mindsdb/{container_name} --all-tags\n \"\"\")\n \n subprocess.run(command, shell=True, check=True)\n\\ No newline at end of file\n", "issue": "Tag Docker versions\nCurrently, GitHub Action will deploy a new version to dockerhub as a `latest`. We need to push the tagged version per MindsDB version e.g 2.62.5 => mindsdb/mindsdb:2.62.5\n", "before_files": [{"content": "import os\nimport sys\nimport requests\nimport subprocess\n\ninstaller_version_url = 'https://public.api.mindsdb.com/installer/@@beta_or_release/docker___success___None'\n\napi_response = requests.get(\n installer_version_url.replace('@@beta_or_release', sys.argv[1]))\n\nif api_response.status_code != 200:\n exit(1)\n\ninstaller_version = api_response.text\n\nos.system('mkdir -p dist')\n\nif sys.argv[1] == 'release':\n container_name = 'mindsdb'\n dockerfile_template = 'dockerfile_release.template'\n\nelif sys.argv[1] == 'beta':\n container_name = 'mindsdb_beta'\n dockerfile_template = 'dockerfile_beta.template'\n\nwith open(dockerfile_template, 'r') as fp:\n content = fp.read()\n content = content.replace('@@beta_or_release', sys.argv[1])\n content = content.replace('@@installer_version', installer_version)\n\nwith open('dist/Dockerfile', 'w') as fp:\n fp.write(content)\n\ncommand = (f\"\"\"\n cd dist &&\n docker build -t {container_name} . &&\n docker tag {container_name} mindsdb/{container_name}:latest &&\n docker tag {container_name} mindsdb/{container_name}:{installer_version} &&\n docker push mindsdb/{container_name};\n cd ..\n \"\"\")\n\nsubprocess.run(command, shell=True, check=True)", "path": "docker/build.py"}]} | 969 | 195 |
gh_patches_debug_23379 | rasdani/github-patches | git_diff | pre-commit__pre-commit-81 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Occasional flakiness of staged file stasher
It appears `git diff-files` is returning incorrectly in some case that I haven't been able to pinpoint.
It results in something like this (you can see however that all the files are staged):
```
$ pre-commit
[WARNING] Unstaged files detected.
Stashing unstaged files to /home/anthony/workspace/pre-commit/.pre-commit-files/patch1397370090.
Trim Trailing Whitespace............................................Passed
Fix End of Files....................................................Passed
Check Yaml..........................................................Passed
Debug Statements (Python)...........................................Passed
Tests should end in _test.py........................................Passed
Pyflakes............................................................Passed
Validate Pre-Commit Config..........................................Passed
Validate Pre-Commit Manifest........................................Passed
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
Traceback (most recent call last):
File "/home/anthony/workspace/pre-commit/venv-pre_commit/bin/pre-commit", line 9, in <module>
load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()
File "/home/anthony/workspace/pre-commit/pre_commit/util.py", line 52, in wrapper
return func(argv)
File "/home/anthony/workspace/pre-commit/pre_commit/run.py", line 143, in run
return _run(runner, args)
File "/home/anthony/workspace/pre-commit/pre_commit/run.py", line 95, in _run
return run_hooks(runner, args)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/home/anthony/workspace/pre-commit/pre_commit/staged_files_only.py", line 51, in staged_files_only
cmd_runner.run(['git', 'apply', patch_filename])
File "/home/anthony/workspace/pre-commit/pre_commit/prefixed_command_runner.py", line 67, in run
returncode, replaced_cmd, retcode, output=(stdout, stderr),
pre_commit.prefixed_command_runner.CalledProcessError: Command: ['git', 'apply', '/home/anthony/workspace/pre-commit/.pre-commit-files/patch1397370090']
Return code: 128
Expected return code: 0
Output: ('', 'fatal: unrecognized input\n')
$ git status
# On branch rebuild_venv
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: .gitignore
# modified: Makefile
#
```
The "stashed diff" is an empty file. I think the "fix" is to check if the diff contains anything before printing the warning message and entering the branch that isn't a noop context manager.
</issue>
<code>
[start of pre_commit/staged_files_only.py]
1 import contextlib
2 import logging
3 import time
4
5 from pre_commit.prefixed_command_runner import CalledProcessError
6
7
8 logger = logging.getLogger('pre_commit')
9
10
11 @contextlib.contextmanager
12 def staged_files_only(cmd_runner):
13 """Clear any unstaged changes from the git working directory inside this
14 context.
15
16 Args:
17 cmd_runner - PrefixedCommandRunner
18 """
19 # Determine if there are unstaged files
20 retcode, _, _ = cmd_runner.run(
21 ['git', 'diff-files', '--quiet'],
22 retcode=None,
23 )
24 if retcode:
25 patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))
26 logger.warning('Unstaged files detected.')
27 logger.info(
28 'Stashing unstaged files to {0}.'.format(patch_filename),
29 )
30 # Save the current unstaged changes as a patch
31 with open(patch_filename, 'w') as patch_file:
32 cmd_runner.run(['git', 'diff', '--binary'], stdout=patch_file)
33
34 # Clear the working directory of unstaged changes
35 cmd_runner.run(['git', 'checkout', '--', '.'])
36 try:
37 yield
38 finally:
39 # Try to apply the patch we saved
40 try:
41 cmd_runner.run(['git', 'apply', patch_filename])
42 except CalledProcessError:
43 logger.warning(
44 'Stashed changes conflicted with hook auto-fixes... '
45 'Rolling back fixes...'
46 )
47 # We failed to apply the patch, presumably due to fixes made
48 # by hooks.
49 # Roll back the changes made by hooks.
50 cmd_runner.run(['git', 'checkout', '--', '.'])
51 cmd_runner.run(['git', 'apply', patch_filename])
52 logger.info('Restored changes from {0}.'.format(patch_filename))
53 else:
54 # There weren't any staged files so we don't need to do anything
55 # special
56 yield
57
[end of pre_commit/staged_files_only.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -17,11 +17,11 @@
cmd_runner - PrefixedCommandRunner
"""
# Determine if there are unstaged files
- retcode, _, _ = cmd_runner.run(
- ['git', 'diff-files', '--quiet'],
+ retcode, diff_stdout, _ = cmd_runner.run(
+ ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
retcode=None,
)
- if retcode:
+ if retcode and diff_stdout.strip():
patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))
logger.warning('Unstaged files detected.')
logger.info(
@@ -29,7 +29,7 @@
)
# Save the current unstaged changes as a patch
with open(patch_filename, 'w') as patch_file:
- cmd_runner.run(['git', 'diff', '--binary'], stdout=patch_file)
+ patch_file.write(diff_stdout)
# Clear the working directory of unstaged changes
cmd_runner.run(['git', 'checkout', '--', '.'])
| {"golden_diff": "diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py\n--- a/pre_commit/staged_files_only.py\n+++ b/pre_commit/staged_files_only.py\n@@ -17,11 +17,11 @@\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n- retcode, _, _ = cmd_runner.run(\n- ['git', 'diff-files', '--quiet'],\n+ retcode, diff_stdout, _ = cmd_runner.run(\n+ ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],\n retcode=None,\n )\n- if retcode:\n+ if retcode and diff_stdout.strip():\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n@@ -29,7 +29,7 @@\n )\n # Save the current unstaged changes as a patch\n with open(patch_filename, 'w') as patch_file:\n- cmd_runner.run(['git', 'diff', '--binary'], stdout=patch_file)\n+ patch_file.write(diff_stdout)\n \n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n", "issue": "Occasional flakiness of staged file stasher\nIt appears `git diff-files` is returning incorrectly in some case that I haven't been able to pinpoint.\n\nIt results in something like this (you can see however that all the files are staged):\n\n```\n$ pre-commit \n[WARNING] Unstaged files detected.\nStashing unstaged files to /home/anthony/workspace/pre-commit/.pre-commit-files/patch1397370090.\nTrim Trailing Whitespace............................................Passed\nFix End of Files....................................................Passed\nCheck Yaml..........................................................Passed\nDebug Statements (Python)...........................................Passed\nTests should end in _test.py........................................Passed\nPyflakes............................................................Passed\nValidate Pre-Commit Config..........................................Passed\nValidate Pre-Commit Manifest........................................Passed\n[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...\nTraceback (most recent call last):\n File \"/home/anthony/workspace/pre-commit/venv-pre_commit/bin/pre-commit\", line 9, in <module>\n load_entry_point('pre-commit==0.0.0', 'console_scripts', 'pre-commit')()\n File \"/home/anthony/workspace/pre-commit/pre_commit/util.py\", line 52, in wrapper\n return func(argv)\n File \"/home/anthony/workspace/pre-commit/pre_commit/run.py\", line 143, in run\n return _run(runner, args)\n File \"/home/anthony/workspace/pre-commit/pre_commit/run.py\", line 95, in _run\n return run_hooks(runner, args)\n File \"/usr/lib/python2.7/contextlib.py\", line 24, in __exit__\n self.gen.next()\n File \"/home/anthony/workspace/pre-commit/pre_commit/staged_files_only.py\", line 51, in staged_files_only\n cmd_runner.run(['git', 'apply', patch_filename])\n File \"/home/anthony/workspace/pre-commit/pre_commit/prefixed_command_runner.py\", line 67, in run\n returncode, replaced_cmd, retcode, output=(stdout, stderr),\npre_commit.prefixed_command_runner.CalledProcessError: Command: ['git', 'apply', '/home/anthony/workspace/pre-commit/.pre-commit-files/patch1397370090']\nReturn code: 128\nExpected return code: 0\nOutput: ('', 'fatal: unrecognized input\\n')\n$ git status\n# On branch rebuild_venv\n# Changes to be committed:\n# (use \"git reset HEAD <file>...\" to unstage)\n#\n# modified: .gitignore\n# modified: Makefile\n#\n```\n\nThe \"stashed diff\" is an empty file. 
I think the \"fix\" is to check if the diff contains anything before printing the warning message and entering the branch that isn't a noop context manager.\n\n", "before_files": [{"content": "import contextlib\nimport logging\nimport time\n\nfrom pre_commit.prefixed_command_runner import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n retcode, _, _ = cmd_runner.run(\n ['git', 'diff-files', '--quiet'],\n retcode=None,\n )\n if retcode:\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n with open(patch_filename, 'w') as patch_file:\n cmd_runner.run(['git', 'diff', '--binary'], stdout=patch_file)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(['git', 'apply', patch_filename])\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n 'Rolling back fixes...'\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n cmd_runner.run(['git', 'apply', patch_filename])\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}]} | 1,676 | 279 |
gh_patches_debug_17535 | rasdani/github-patches | git_diff | doccano__doccano-607 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Enhancement request] Avoid duplicate key value error on launching
Enhancement description
---------
I have these errors in log on each start:
```
postgres_1 | 2020-03-04 05:34:30.467 UTC [27] ERROR: duplicate key value violates unique constraint "api_role_name_key"
postgres_1 | 2020-03-04 05:34:30.467 UTC [27] DETAIL: Key (name)=(project_admin) already exists.
postgres_1 | 2020-03-04 05:34:30.467 UTC [27] STATEMENT: INSERT INTO "api_role" ("name", "description", "created_at", "updated_at") VALUES ('project_admin', '', '2020-03-04T05:34:30.460290+00:00'::timestamptz, '2020-03-04T05:34:30.460312+00:00'::timestamptz) RETURNING "api_role"."id"
backend_1 | Datbase Error: "duplicate key value violates unique constraint "api_role_name_key"
backend_1 | DETAIL: Key (name)=(project_admin) already exists.
backend_1 | "
postgres_1 | 2020-03-04 05:34:30.468 UTC [27] ERROR: duplicate key value violates unique constraint "api_role_name_key"
postgres_1 | 2020-03-04 05:34:30.468 UTC [27] DETAIL: Key (name)=(annotator) already exists.
postgres_1 | 2020-03-04 05:34:30.468 UTC [27] STATEMENT: INSERT INTO "api_role" ("name", "description", "created_at", "updated_at") VALUES ('annotator', '', '2020-03-04T05:34:30.467909+00:00'::timestamptz, '2020-03-04T05:34:30.467926+00:00'::timestamptz) RETURNING "api_role"."id"
backend_1 | Datbase Error: "duplicate key value violates unique constraint "api_role_name_key"
backend_1 | DETAIL: Key (name)=(annotator) already exists.
backend_1 | "
postgres_1 | 2020-03-04 05:34:30.468 UTC [27] ERROR: duplicate key value violates unique constraint "api_role_name_key"
postgres_1 | 2020-03-04 05:34:30.468 UTC [27] DETAIL: Key (name)=(annotation_approver) already exists.
postgres_1 | 2020-03-04 05:34:30.468 UTC [27] STATEMENT: INSERT INTO "api_role" ("name", "description", "created_at", "updated_at") VALUES ('annotation_approver', '', '2020-03-04T05:34:30.468689+00:00'::timestamptz, '2020-03-04T05:34:30.468706+00:00'::timestamptz) RETURNING "api_role"."id"
backend_1 | Datbase Error: "duplicate key value violates unique constraint "api_role_name_key"
backend_1 | DETAIL: Key (name)=(annotation_approver) already exists.
backend_1 | "
postgres_1 | 2020-03-04 05:34:32.026 UTC [28] ERROR: duplicate key value violates unique constraint "auth_user_username_key"
postgres_1 | 2020-03-04 05:34:32.026 UTC [28] DETAIL: Key (username)=(admin) already exists.
postgres_1 | 2020-03-04 05:34:32.026 UTC [28] STATEMENT: INSERT INTO "auth_user" ("password", "last_login", "is_superuser", "username", "first_name", "last_name", "email", "is_staff", "is_active", "date_joined") VALUES ('<...>', NULL, true, 'admin', '', '', '[email protected]', true, true, '2020-03-04T05:34:32.023520+00:00'::timestamptz) RETURNING "auth_user"."id"
backend_1 | User admin already exists.
backend_1 | CommandError: Error: That username is already taken.
```
Propose to check existence of specified table's rows before creation to avoid these errors.
</issue>
<code>
[start of app/server/management/commands/create_roles.py]
1 from api.models import Role
2 from django.core.management.base import BaseCommand
3 from django.db import DatabaseError
4 from django.conf import settings
5
6
7 class Command(BaseCommand):
8 help = 'Non-interactively create default roles'
9
10 def handle(self, *args, **options):
11 try:
12 role_names = [settings.ROLE_PROJECT_ADMIN, settings.ROLE_ANNOTATOR, settings.ROLE_ANNOTATION_APPROVER]
13 except KeyError as key_error:
14 self.stderr.write(self.style.ERROR(f'Missing Key: "{key_error}"'))
15 for role_name in role_names:
16 role = Role()
17 role.name = role_name
18 try:
19 role.save()
20 except DatabaseError as db_error:
21 self.stderr.write(self.style.ERROR(f'Datbase Error: "{db_error}"'))
22 else:
23 self.stdout.write(self.style.SUCCESS(f'Role created successfully "{role_name}"'))
24
[end of app/server/management/commands/create_roles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/server/management/commands/create_roles.py b/app/server/management/commands/create_roles.py
--- a/app/server/management/commands/create_roles.py
+++ b/app/server/management/commands/create_roles.py
@@ -13,11 +13,13 @@
except KeyError as key_error:
self.stderr.write(self.style.ERROR(f'Missing Key: "{key_error}"'))
for role_name in role_names:
+ if Role.objects.filter(name=role_name).exists():
+ continue
role = Role()
role.name = role_name
try:
role.save()
except DatabaseError as db_error:
- self.stderr.write(self.style.ERROR(f'Datbase Error: "{db_error}"'))
+ self.stderr.write(self.style.ERROR(f'Database Error: "{db_error}"'))
else:
self.stdout.write(self.style.SUCCESS(f'Role created successfully "{role_name}"'))
| {"golden_diff": "diff --git a/app/server/management/commands/create_roles.py b/app/server/management/commands/create_roles.py\n--- a/app/server/management/commands/create_roles.py\n+++ b/app/server/management/commands/create_roles.py\n@@ -13,11 +13,13 @@\n except KeyError as key_error:\n self.stderr.write(self.style.ERROR(f'Missing Key: \"{key_error}\"'))\n for role_name in role_names:\n+ if Role.objects.filter(name=role_name).exists():\n+ continue\n role = Role()\n role.name = role_name\n try:\n role.save()\n except DatabaseError as db_error:\n- self.stderr.write(self.style.ERROR(f'Datbase Error: \"{db_error}\"'))\n+ self.stderr.write(self.style.ERROR(f'Database Error: \"{db_error}\"'))\n else:\n self.stdout.write(self.style.SUCCESS(f'Role created successfully \"{role_name}\"'))\n", "issue": "[Enhancement request] Avoid duplicate key value error on launching\nEnhancement description\r\n---------\r\nI have these errors in log on each start:\r\n```\r\npostgres_1 | 2020-03-04 05:34:30.467 UTC [27] ERROR: duplicate key value violates unique constraint \"api_role_name_key\"\r\npostgres_1 | 2020-03-04 05:34:30.467 UTC [27] DETAIL: Key (name)=(project_admin) already exists.\r\npostgres_1 | 2020-03-04 05:34:30.467 UTC [27] STATEMENT: INSERT INTO \"api_role\" (\"name\", \"description\", \"created_at\", \"updated_at\") VALUES ('project_admin', '', '2020-03-04T05:34:30.460290+00:00'::timestamptz, '2020-03-04T05:34:30.460312+00:00'::timestamptz) RETURNING \"api_role\".\"id\"\r\nbackend_1 | Datbase Error: \"duplicate key value violates unique constraint \"api_role_name_key\"\r\nbackend_1 | DETAIL: Key (name)=(project_admin) already exists.\r\nbackend_1 | \"\r\npostgres_1 | 2020-03-04 05:34:30.468 UTC [27] ERROR: duplicate key value violates unique constraint \"api_role_name_key\"\r\npostgres_1 | 2020-03-04 05:34:30.468 UTC [27] DETAIL: Key (name)=(annotator) already exists.\r\npostgres_1 | 2020-03-04 05:34:30.468 UTC [27] STATEMENT: INSERT INTO \"api_role\" (\"name\", \"description\", \"created_at\", \"updated_at\") VALUES ('annotator', '', '2020-03-04T05:34:30.467909+00:00'::timestamptz, '2020-03-04T05:34:30.467926+00:00'::timestamptz) RETURNING \"api_role\".\"id\"\r\nbackend_1 | Datbase Error: \"duplicate key value violates unique constraint \"api_role_name_key\"\r\nbackend_1 | DETAIL: Key (name)=(annotator) already exists.\r\nbackend_1 | \"\r\npostgres_1 | 2020-03-04 05:34:30.468 UTC [27] ERROR: duplicate key value violates unique constraint \"api_role_name_key\"\r\npostgres_1 | 2020-03-04 05:34:30.468 UTC [27] DETAIL: Key (name)=(annotation_approver) already exists.\r\npostgres_1 | 2020-03-04 05:34:30.468 UTC [27] STATEMENT: INSERT INTO \"api_role\" (\"name\", \"description\", \"created_at\", \"updated_at\") VALUES ('annotation_approver', '', '2020-03-04T05:34:30.468689+00:00'::timestamptz, '2020-03-04T05:34:30.468706+00:00'::timestamptz) RETURNING \"api_role\".\"id\"\r\nbackend_1 | Datbase Error: \"duplicate key value violates unique constraint \"api_role_name_key\"\r\nbackend_1 | DETAIL: Key (name)=(annotation_approver) already exists.\r\nbackend_1 | \"\r\npostgres_1 | 2020-03-04 05:34:32.026 UTC [28] ERROR: duplicate key value violates unique constraint \"auth_user_username_key\"\r\npostgres_1 | 2020-03-04 05:34:32.026 UTC [28] DETAIL: Key (username)=(admin) already exists.\r\npostgres_1 | 2020-03-04 05:34:32.026 UTC [28] STATEMENT: INSERT INTO \"auth_user\" (\"password\", \"last_login\", \"is_superuser\", \"username\", \"first_name\", \"last_name\", \"email\", \"is_staff\", \"is_active\", 
\"date_joined\") VALUES ('<...>', NULL, true, 'admin', '', '', '[email protected]', true, true, '2020-03-04T05:34:32.023520+00:00'::timestamptz) RETURNING \"auth_user\".\"id\"\r\nbackend_1 | User admin already exists.\r\nbackend_1 | CommandError: Error: That username is already taken.\r\n```\r\n\r\nPropose to check existence of specified table's rows before creation to avoid these errors.\n", "before_files": [{"content": "from api.models import Role\nfrom django.core.management.base import BaseCommand\nfrom django.db import DatabaseError\nfrom django.conf import settings\n\n\nclass Command(BaseCommand):\n help = 'Non-interactively create default roles'\n\n def handle(self, *args, **options):\n try:\n role_names = [settings.ROLE_PROJECT_ADMIN, settings.ROLE_ANNOTATOR, settings.ROLE_ANNOTATION_APPROVER]\n except KeyError as key_error:\n self.stderr.write(self.style.ERROR(f'Missing Key: \"{key_error}\"'))\n for role_name in role_names:\n role = Role()\n role.name = role_name\n try:\n role.save()\n except DatabaseError as db_error:\n self.stderr.write(self.style.ERROR(f'Datbase Error: \"{db_error}\"'))\n else:\n self.stdout.write(self.style.SUCCESS(f'Role created successfully \"{role_name}\"'))\n", "path": "app/server/management/commands/create_roles.py"}]} | 1,990 | 195 |
gh_patches_debug_15660 | rasdani/github-patches | git_diff | freqtrade__freqtrade-2694 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow extra-parameters (ex : unfilledtimeout.buy) to be defined either in configuration file or strategy
**As** a trader,
**I want** to be able to customize strategy depends parameters (like `unfilledtimeout.buy`) straight in my strategy file
**So that** my configuration file in more generic, not embedded strategy design choices and allow me to reduce the number of config file in dryrun/live.
`unfilledtimeout.buy` is just an example but others parameters can be strategy dependents.
</issue>
<code>
[start of freqtrade/resolvers/strategy_resolver.py]
1 # pragma pylint: disable=attribute-defined-outside-init
2
3 """
4 This module load custom strategies
5 """
6 import logging
7 import tempfile
8 from base64 import urlsafe_b64decode
9 from collections import OrderedDict
10 from inspect import getfullargspec
11 from pathlib import Path
12 from typing import Dict, Optional
13
14 from freqtrade import constants, OperationalException
15 from freqtrade.resolvers import IResolver
16 from freqtrade.strategy.interface import IStrategy
17
18 logger = logging.getLogger(__name__)
19
20
21 class StrategyResolver(IResolver):
22 """
23 This class contains all the logic to load custom strategy class
24 """
25
26 __slots__ = ['strategy']
27
28 def __init__(self, config: Optional[Dict] = None) -> None:
29 """
30 Load the custom class from config parameter
31 :param config: configuration dictionary or None
32 """
33 config = config or {}
34
35 if not config.get('strategy'):
36 raise OperationalException("No strategy set. Please use `--strategy` to specify "
37 "the strategy class to use.")
38
39 strategy_name = config['strategy']
40 self.strategy: IStrategy = self._load_strategy(strategy_name,
41 config=config,
42 extra_dir=config.get('strategy_path'))
43
44 # make sure ask_strategy dict is available
45 if 'ask_strategy' not in config:
46 config['ask_strategy'] = {}
47
48 # Set attributes
49 # Check if we need to override configuration
50 # (Attribute name, default, ask_strategy)
51 attributes = [("minimal_roi", {"0": 10.0}, False),
52 ("ticker_interval", None, False),
53 ("stoploss", None, False),
54 ("trailing_stop", None, False),
55 ("trailing_stop_positive", None, False),
56 ("trailing_stop_positive_offset", 0.0, False),
57 ("trailing_only_offset_is_reached", None, False),
58 ("process_only_new_candles", None, False),
59 ("order_types", None, False),
60 ("order_time_in_force", None, False),
61 ("stake_currency", None, False),
62 ("stake_amount", None, False),
63 ("startup_candle_count", None, False),
64 ("use_sell_signal", True, True),
65 ("sell_profit_only", False, True),
66 ("ignore_roi_if_buy_signal", False, True),
67 ]
68 for attribute, default, ask_strategy in attributes:
69 if ask_strategy:
70 self._override_attribute_helper(config['ask_strategy'], attribute, default)
71 else:
72 self._override_attribute_helper(config, attribute, default)
73
74 # Loop this list again to have output combined
75 for attribute, _, exp in attributes:
76 if exp and attribute in config['ask_strategy']:
77 logger.info("Strategy using %s: %s", attribute, config['ask_strategy'][attribute])
78 elif attribute in config:
79 logger.info("Strategy using %s: %s", attribute, config[attribute])
80
81 # Sort and apply type conversions
82 self.strategy.minimal_roi = OrderedDict(sorted(
83 {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),
84 key=lambda t: t[0]))
85 self.strategy.stoploss = float(self.strategy.stoploss)
86
87 self._strategy_sanity_validations()
88
89 def _override_attribute_helper(self, config, attribute: str, default):
90 """
91 Override attributes in the strategy.
92 Prevalence:
93 - Configuration
94 - Strategy
95 - default (if not None)
96 """
97 if attribute in config:
98 setattr(self.strategy, attribute, config[attribute])
99 logger.info("Override strategy '%s' with value in config file: %s.",
100 attribute, config[attribute])
101 elif hasattr(self.strategy, attribute):
102 val = getattr(self.strategy, attribute)
103 # None's cannot exist in the config, so do not copy them
104 if val is not None:
105 config[attribute] = val
106 # Explicitly check for None here as other "falsy" values are possible
107 elif default is not None:
108 setattr(self.strategy, attribute, default)
109 config[attribute] = default
110
111 def _strategy_sanity_validations(self):
112 if not all(k in self.strategy.order_types for k in constants.REQUIRED_ORDERTYPES):
113 raise ImportError(f"Impossible to load Strategy '{self.strategy.__class__.__name__}'. "
114 f"Order-types mapping is incomplete.")
115
116 if not all(k in self.strategy.order_time_in_force for k in constants.REQUIRED_ORDERTIF):
117 raise ImportError(f"Impossible to load Strategy '{self.strategy.__class__.__name__}'. "
118 f"Order-time-in-force mapping is incomplete.")
119
120 def _load_strategy(
121 self, strategy_name: str, config: dict, extra_dir: Optional[str] = None) -> IStrategy:
122 """
123 Search and loads the specified strategy.
124 :param strategy_name: name of the module to import
125 :param config: configuration for the strategy
126 :param extra_dir: additional directory to search for the given strategy
127 :return: Strategy instance or None
128 """
129 current_path = Path(__file__).parent.parent.joinpath('strategy').resolve()
130
131 abs_paths = self.build_search_paths(config, current_path=current_path,
132 user_subdir=constants.USERPATH_STRATEGY,
133 extra_dir=extra_dir)
134
135 if ":" in strategy_name:
136 logger.info("loading base64 encoded strategy")
137 strat = strategy_name.split(":")
138
139 if len(strat) == 2:
140 temp = Path(tempfile.mkdtemp("freq", "strategy"))
141 name = strat[0] + ".py"
142
143 temp.joinpath(name).write_text(urlsafe_b64decode(strat[1]).decode('utf-8'))
144 temp.joinpath("__init__.py").touch()
145
146 strategy_name = strat[0]
147
148 # register temp path with the bot
149 abs_paths.insert(0, temp.resolve())
150
151 strategy = self._load_object(paths=abs_paths, object_type=IStrategy,
152 object_name=strategy_name, kwargs={'config': config})
153 if strategy:
154 strategy._populate_fun_len = len(getfullargspec(strategy.populate_indicators).args)
155 strategy._buy_fun_len = len(getfullargspec(strategy.populate_buy_trend).args)
156 strategy._sell_fun_len = len(getfullargspec(strategy.populate_sell_trend).args)
157 if any([x == 2 for x in [strategy._populate_fun_len,
158 strategy._buy_fun_len,
159 strategy._sell_fun_len]]):
160 strategy.INTERFACE_VERSION = 1
161
162 return strategy
163
164 raise OperationalException(
165 f"Impossible to load Strategy '{strategy_name}'. This class does not exist "
166 "or contains Python code errors."
167 )
168
[end of freqtrade/resolvers/strategy_resolver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/resolvers/strategy_resolver.py b/freqtrade/resolvers/strategy_resolver.py
--- a/freqtrade/resolvers/strategy_resolver.py
+++ b/freqtrade/resolvers/strategy_resolver.py
@@ -61,6 +61,7 @@
("stake_currency", None, False),
("stake_amount", None, False),
("startup_candle_count", None, False),
+ ("unfilledtimeout", None, False),
("use_sell_signal", True, True),
("sell_profit_only", False, True),
("ignore_roi_if_buy_signal", False, True),
| {"golden_diff": "diff --git a/freqtrade/resolvers/strategy_resolver.py b/freqtrade/resolvers/strategy_resolver.py\n--- a/freqtrade/resolvers/strategy_resolver.py\n+++ b/freqtrade/resolvers/strategy_resolver.py\n@@ -61,6 +61,7 @@\n (\"stake_currency\", None, False),\n (\"stake_amount\", None, False),\n (\"startup_candle_count\", None, False),\n+ (\"unfilledtimeout\", None, False),\n (\"use_sell_signal\", True, True),\n (\"sell_profit_only\", False, True),\n (\"ignore_roi_if_buy_signal\", False, True),\n", "issue": "allow extra-parameters (ex : unfilledtimeout.buy) to be defined either in configuration file or strategy\n**As** a trader,\r\n**I want** to be able to customize strategy depends parameters (like `unfilledtimeout.buy`) straight in my strategy file\r\n**So that** my configuration file in more generic, not embedded strategy design choices and allow me to reduce the number of config file in dryrun/live.\r\n\r\n`unfilledtimeout.buy` is just an example but others parameters can be strategy dependents.\n", "before_files": [{"content": "# pragma pylint: disable=attribute-defined-outside-init\n\n\"\"\"\nThis module load custom strategies\n\"\"\"\nimport logging\nimport tempfile\nfrom base64 import urlsafe_b64decode\nfrom collections import OrderedDict\nfrom inspect import getfullargspec\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\nfrom freqtrade import constants, OperationalException\nfrom freqtrade.resolvers import IResolver\nfrom freqtrade.strategy.interface import IStrategy\n\nlogger = logging.getLogger(__name__)\n\n\nclass StrategyResolver(IResolver):\n \"\"\"\n This class contains all the logic to load custom strategy class\n \"\"\"\n\n __slots__ = ['strategy']\n\n def __init__(self, config: Optional[Dict] = None) -> None:\n \"\"\"\n Load the custom class from config parameter\n :param config: configuration dictionary or None\n \"\"\"\n config = config or {}\n\n if not config.get('strategy'):\n raise OperationalException(\"No strategy set. 
Please use `--strategy` to specify \"\n \"the strategy class to use.\")\n\n strategy_name = config['strategy']\n self.strategy: IStrategy = self._load_strategy(strategy_name,\n config=config,\n extra_dir=config.get('strategy_path'))\n\n # make sure ask_strategy dict is available\n if 'ask_strategy' not in config:\n config['ask_strategy'] = {}\n\n # Set attributes\n # Check if we need to override configuration\n # (Attribute name, default, ask_strategy)\n attributes = [(\"minimal_roi\", {\"0\": 10.0}, False),\n (\"ticker_interval\", None, False),\n (\"stoploss\", None, False),\n (\"trailing_stop\", None, False),\n (\"trailing_stop_positive\", None, False),\n (\"trailing_stop_positive_offset\", 0.0, False),\n (\"trailing_only_offset_is_reached\", None, False),\n (\"process_only_new_candles\", None, False),\n (\"order_types\", None, False),\n (\"order_time_in_force\", None, False),\n (\"stake_currency\", None, False),\n (\"stake_amount\", None, False),\n (\"startup_candle_count\", None, False),\n (\"use_sell_signal\", True, True),\n (\"sell_profit_only\", False, True),\n (\"ignore_roi_if_buy_signal\", False, True),\n ]\n for attribute, default, ask_strategy in attributes:\n if ask_strategy:\n self._override_attribute_helper(config['ask_strategy'], attribute, default)\n else:\n self._override_attribute_helper(config, attribute, default)\n\n # Loop this list again to have output combined\n for attribute, _, exp in attributes:\n if exp and attribute in config['ask_strategy']:\n logger.info(\"Strategy using %s: %s\", attribute, config['ask_strategy'][attribute])\n elif attribute in config:\n logger.info(\"Strategy using %s: %s\", attribute, config[attribute])\n\n # Sort and apply type conversions\n self.strategy.minimal_roi = OrderedDict(sorted(\n {int(key): value for (key, value) in self.strategy.minimal_roi.items()}.items(),\n key=lambda t: t[0]))\n self.strategy.stoploss = float(self.strategy.stoploss)\n\n self._strategy_sanity_validations()\n\n def _override_attribute_helper(self, config, attribute: str, default):\n \"\"\"\n Override attributes in the strategy.\n Prevalence:\n - Configuration\n - Strategy\n - default (if not None)\n \"\"\"\n if attribute in config:\n setattr(self.strategy, attribute, config[attribute])\n logger.info(\"Override strategy '%s' with value in config file: %s.\",\n attribute, config[attribute])\n elif hasattr(self.strategy, attribute):\n val = getattr(self.strategy, attribute)\n # None's cannot exist in the config, so do not copy them\n if val is not None:\n config[attribute] = val\n # Explicitly check for None here as other \"falsy\" values are possible\n elif default is not None:\n setattr(self.strategy, attribute, default)\n config[attribute] = default\n\n def _strategy_sanity_validations(self):\n if not all(k in self.strategy.order_types for k in constants.REQUIRED_ORDERTYPES):\n raise ImportError(f\"Impossible to load Strategy '{self.strategy.__class__.__name__}'. \"\n f\"Order-types mapping is incomplete.\")\n\n if not all(k in self.strategy.order_time_in_force for k in constants.REQUIRED_ORDERTIF):\n raise ImportError(f\"Impossible to load Strategy '{self.strategy.__class__.__name__}'. 
\"\n f\"Order-time-in-force mapping is incomplete.\")\n\n def _load_strategy(\n self, strategy_name: str, config: dict, extra_dir: Optional[str] = None) -> IStrategy:\n \"\"\"\n Search and loads the specified strategy.\n :param strategy_name: name of the module to import\n :param config: configuration for the strategy\n :param extra_dir: additional directory to search for the given strategy\n :return: Strategy instance or None\n \"\"\"\n current_path = Path(__file__).parent.parent.joinpath('strategy').resolve()\n\n abs_paths = self.build_search_paths(config, current_path=current_path,\n user_subdir=constants.USERPATH_STRATEGY,\n extra_dir=extra_dir)\n\n if \":\" in strategy_name:\n logger.info(\"loading base64 encoded strategy\")\n strat = strategy_name.split(\":\")\n\n if len(strat) == 2:\n temp = Path(tempfile.mkdtemp(\"freq\", \"strategy\"))\n name = strat[0] + \".py\"\n\n temp.joinpath(name).write_text(urlsafe_b64decode(strat[1]).decode('utf-8'))\n temp.joinpath(\"__init__.py\").touch()\n\n strategy_name = strat[0]\n\n # register temp path with the bot\n abs_paths.insert(0, temp.resolve())\n\n strategy = self._load_object(paths=abs_paths, object_type=IStrategy,\n object_name=strategy_name, kwargs={'config': config})\n if strategy:\n strategy._populate_fun_len = len(getfullargspec(strategy.populate_indicators).args)\n strategy._buy_fun_len = len(getfullargspec(strategy.populate_buy_trend).args)\n strategy._sell_fun_len = len(getfullargspec(strategy.populate_sell_trend).args)\n if any([x == 2 for x in [strategy._populate_fun_len,\n strategy._buy_fun_len,\n strategy._sell_fun_len]]):\n strategy.INTERFACE_VERSION = 1\n\n return strategy\n\n raise OperationalException(\n f\"Impossible to load Strategy '{strategy_name}'. This class does not exist \"\n \"or contains Python code errors.\"\n )\n", "path": "freqtrade/resolvers/strategy_resolver.py"}]} | 2,505 | 148 |
gh_patches_debug_17834 | rasdani/github-patches | git_diff | kserve__kserve-2945 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pydantic ValidationError due to dropped InferRequest id
/kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
I am using kserve as a wrapper around Torchserve, with the v2 Inference API. A pydantic validation error is thrown when parsing the returned InferenceResponse due to the created InferRequest dropping the id. The downstream torchserve inferene then receives a null id, and returns null id in its response too.
```
2023-05-25 14:39:33.490 16 uvicorn.error ERROR [run_asgi():376] Exception in ASGI application
Traceback (most recent call last):
File "/opt/env/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 373, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/opt/env/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "/opt/env/lib/python3.9/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/opt/env/lib/python3.9/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/opt/env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/opt/env/lib/python3.9/site-packages/timing_asgi/middleware.py", line 68, in __call__
await self.app(scope, receive, send_wrapper)
File "/opt/env/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/opt/env/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/opt/env/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/opt/env/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/opt/env/lib/python3.9/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/opt/env/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/opt/env/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/opt/env/lib/python3.9/site-packages/fastapi/routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "/opt/env/lib/python3.9/site-packages/fastapi/routing.py", line 161, in run_endpoint_function
return await dependant.call(**values)
File "/opt/env/lib/python3.9/site-packages/kserve/protocol/rest/v2_endpoints.py", line 135, in infer
res = InferenceResponse.parse_obj(response)
File "pydantic/main.py", line 526, in pydantic.main.BaseModel.parse_obj
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for InferenceResponse
id
none is not an allowed value (type=type_error.none.not_allowed)
```
I will create a simple PR to fix this.
**What did you expect to happen:**
Inference to succeed, with id passed through.
**What's the InferenceService yaml:**
[To help us debug please run `kubectl get isvc $name -n $namespace -oyaml` and paste the output]
Not applicable. Testing with docker.
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
Not applicable.
</issue>
<code>
[start of python/kserve/kserve/protocol/rest/v2_endpoints.py]
1 # Copyright 2022 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional, Dict
16
17 from fastapi.requests import Request
18 from fastapi.responses import Response
19
20 from ..infer_type import InferInput, InferRequest
21 from .v2_datamodels import (
22 InferenceRequest, ServerMetadataResponse, ServerLiveResponse, ServerReadyResponse,
23 ModelMetadataResponse, InferenceResponse, ModelReadyResponse
24 )
25 from ..dataplane import DataPlane
26 from ..model_repository_extension import ModelRepositoryExtension
27 from ...errors import ModelNotReady
28
29
30 class V2Endpoints:
31 """KServe V2 Endpoints
32 """
33
34 def __init__(self, dataplane: DataPlane, model_repository_extension: Optional[ModelRepositoryExtension] = None):
35 self.model_repository_extension = model_repository_extension
36 self.dataplane = dataplane
37
38 async def metadata(self) -> ServerMetadataResponse:
39 """Server metadata endpoint.
40
41 Returns:
42 ServerMetadataResponse: Server metadata JSON object.
43 """
44 return ServerMetadataResponse.parse_obj(self.dataplane.metadata())
45
46 @staticmethod
47 async def live() -> ServerLiveResponse:
48 """Server live endpoint.
49
50 Returns:
51 ServerLiveResponse: Server live message.
52 """
53 return ServerLiveResponse(live=True)
54
55 @staticmethod
56 async def ready() -> ServerReadyResponse:
57 """Server ready endpoint.
58
59 Returns:
60 ServerReadyResponse: Server ready message.
61 """
62 return ServerReadyResponse(ready=True)
63
64 async def model_metadata(self, model_name: str, model_version: Optional[str] = None) -> ModelMetadataResponse:
65 """Model metadata handler. It provides information about a model.
66
67 Args:
68 model_name (str): Model name.
69 model_version (Optional[str]): Model version (optional).
70
71 Returns:
72 ModelMetadataResponse: Model metadata object.
73 """
74 # TODO: support model_version
75 if model_version:
76 raise NotImplementedError("Model versioning not supported yet.")
77
78 metadata = await self.dataplane.model_metadata(model_name)
79 return ModelMetadataResponse.parse_obj(metadata)
80
81 async def model_ready(self, model_name: str, model_version: Optional[str] = None) -> ModelReadyResponse:
82 """Check if a given model is ready.
83
84 Args:
85 model_name (str): Model name.
86 model_version (str): Model version.
87
88 Returns:
89 ModelReadyResponse: Model ready object
90 """
91 # TODO: support model_version
92 if model_version:
93 raise NotImplementedError("Model versioning not supported yet.")
94
95 model_ready = self.dataplane.model_ready(model_name)
96
97 if not model_ready:
98 raise ModelNotReady(model_name)
99
100 return ModelReadyResponse.parse_obj({"name": model_name, "ready": model_ready})
101
102 async def infer(
103 self,
104 raw_request: Request,
105 raw_response: Response,
106 model_name: str,
107 request_body: InferenceRequest,
108 model_version: Optional[str] = None
109 ) -> InferenceResponse:
110 """Infer handler.
111
112 Args:
113 raw_request (Request): fastapi request object,
114 raw_response (Response): fastapi response object,
115 model_name (str): Model name.
116 request_body (InferenceRequest): Inference request body.
117 model_version (Optional[str]): Model version (optional).
118
119 Returns:
120 InferenceResponse: Inference response object.
121 """
122 # TODO: support model_version
123 if model_version:
124 raise NotImplementedError("Model versioning not supported yet.")
125
126 model_ready = self.dataplane.model_ready(model_name)
127
128 if not model_ready:
129 raise ModelNotReady(model_name)
130
131 request_headers = dict(raw_request.headers)
132 infer_inputs = [InferInput(name=input.name, shape=input.shape, datatype=input.datatype,
133 data=input.data,
134 parameters={} if input.parameters is None else input.parameters
135 ) for input in request_body.inputs]
136 infer_request = InferRequest(model_name=model_name, infer_inputs=infer_inputs,
137 parameters=request_body.parameters)
138 response, response_headers = await self.dataplane.infer(model_name=model_name,
139 request=infer_request,
140 headers=request_headers)
141
142 response, response_headers = self.dataplane.encode(model_name=model_name,
143 response=response,
144 headers=response_headers, req_attributes={})
145
146 if response_headers:
147 raw_response.headers.update(response_headers)
148 res = InferenceResponse.parse_obj(response)
149 return res
150
151 async def load(self, model_name: str) -> Dict:
152 """Model load handler.
153
154 Args:
155 model_name (str): Model name.
156
157 Returns:
158 Dict: {"name": model_name, "load": True}
159 """
160 await self.model_repository_extension.load(model_name)
161 return {"name": model_name, "load": True}
162
163 async def unload(self, model_name: str) -> Dict:
164 """Model unload handler.
165
166 Args:
167 model_name (str): Model name.
168
169 Returns:
170 Dict: {"name": model_name, "unload": True}
171 """
172 await self.model_repository_extension.unload(model_name)
173 return {"name": model_name, "unload": True}
174
[end of python/kserve/kserve/protocol/rest/v2_endpoints.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/kserve/kserve/protocol/rest/v2_endpoints.py b/python/kserve/kserve/protocol/rest/v2_endpoints.py
--- a/python/kserve/kserve/protocol/rest/v2_endpoints.py
+++ b/python/kserve/kserve/protocol/rest/v2_endpoints.py
@@ -133,7 +133,7 @@
data=input.data,
parameters={} if input.parameters is None else input.parameters
) for input in request_body.inputs]
- infer_request = InferRequest(model_name=model_name, infer_inputs=infer_inputs,
+ infer_request = InferRequest(request_id=request_body.id, model_name=model_name, infer_inputs=infer_inputs,
parameters=request_body.parameters)
response, response_headers = await self.dataplane.infer(model_name=model_name,
request=infer_request,
| {"golden_diff": "diff --git a/python/kserve/kserve/protocol/rest/v2_endpoints.py b/python/kserve/kserve/protocol/rest/v2_endpoints.py\n--- a/python/kserve/kserve/protocol/rest/v2_endpoints.py\n+++ b/python/kserve/kserve/protocol/rest/v2_endpoints.py\n@@ -133,7 +133,7 @@\n data=input.data,\n parameters={} if input.parameters is None else input.parameters\n ) for input in request_body.inputs]\n- infer_request = InferRequest(model_name=model_name, infer_inputs=infer_inputs,\n+ infer_request = InferRequest(request_id=request_body.id, model_name=model_name, infer_inputs=infer_inputs,\n parameters=request_body.parameters)\n response, response_headers = await self.dataplane.infer(model_name=model_name,\n request=infer_request,\n", "issue": "Pydantic ValidationError due to dropped InferRequest id\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\n[A clear and concise description of what the bug is.]\r\nI am using kserve as a wrapper around Torchserve, with the v2 Inference API. A pydantic validation error is thrown when parsing the returned InferenceResponse due to the created InferRequest dropping the id. The downstream torchserve inferene then receives a null id, and returns null id in its response too.\r\n\r\n```\r\n2023-05-25 14:39:33.490 16 uvicorn.error ERROR [run_asgi():376] Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/opt/env/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py\", line 373, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File \"/opt/env/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py\", line 75, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/opt/env/lib/python3.9/site-packages/fastapi/applications.py\", line 270, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/applications.py\", line 124, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/middleware/errors.py\", line 184, in __call__\r\n raise exc\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/middleware/errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/opt/env/lib/python3.9/site-packages/timing_asgi/middleware.py\", line 68, in __call__\r\n await self.app(scope, receive, send_wrapper)\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/opt/env/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py\", line 21, in __call__\r\n raise e\r\n File \"/opt/env/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py\", line 18, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/routing.py\", line 706, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"/opt/env/lib/python3.9/site-packages/starlette/routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"/opt/env/lib/python3.9/site-packages/fastapi/routing.py\", line 235, in app\r\n raw_response = await run_endpoint_function(\r\n File 
\"/opt/env/lib/python3.9/site-packages/fastapi/routing.py\", line 161, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/opt/env/lib/python3.9/site-packages/kserve/protocol/rest/v2_endpoints.py\", line 135, in infer\r\n res = InferenceResponse.parse_obj(response)\r\n File \"pydantic/main.py\", line 526, in pydantic.main.BaseModel.parse_obj\r\n File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\r\npydantic.error_wrappers.ValidationError: 1 validation error for InferenceResponse\r\nid\r\n none is not an allowed value (type=type_error.none.not_allowed)\r\n```\r\n\r\nI will create a simple PR to fix this.\r\n\r\n**What did you expect to happen:**\r\nInference to succeed, with id passed through.\r\n\r\n\r\n**What's the InferenceService yaml:**\r\n[To help us debug please run `kubectl get isvc $name -n $namespace -oyaml` and paste the output]\r\nNot applicable. Testing with docker.\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\nNot applicable.\r\n\n", "before_files": [{"content": "# Copyright 2022 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional, Dict\n\nfrom fastapi.requests import Request\nfrom fastapi.responses import Response\n\nfrom ..infer_type import InferInput, InferRequest\nfrom .v2_datamodels import (\n InferenceRequest, ServerMetadataResponse, ServerLiveResponse, ServerReadyResponse,\n ModelMetadataResponse, InferenceResponse, ModelReadyResponse\n)\nfrom ..dataplane import DataPlane\nfrom ..model_repository_extension import ModelRepositoryExtension\nfrom ...errors import ModelNotReady\n\n\nclass V2Endpoints:\n \"\"\"KServe V2 Endpoints\n \"\"\"\n\n def __init__(self, dataplane: DataPlane, model_repository_extension: Optional[ModelRepositoryExtension] = None):\n self.model_repository_extension = model_repository_extension\n self.dataplane = dataplane\n\n async def metadata(self) -> ServerMetadataResponse:\n \"\"\"Server metadata endpoint.\n\n Returns:\n ServerMetadataResponse: Server metadata JSON object.\n \"\"\"\n return ServerMetadataResponse.parse_obj(self.dataplane.metadata())\n\n @staticmethod\n async def live() -> ServerLiveResponse:\n \"\"\"Server live endpoint.\n\n Returns:\n ServerLiveResponse: Server live message.\n \"\"\"\n return ServerLiveResponse(live=True)\n\n @staticmethod\n async def ready() -> ServerReadyResponse:\n \"\"\"Server ready endpoint.\n\n Returns:\n ServerReadyResponse: Server ready message.\n \"\"\"\n return ServerReadyResponse(ready=True)\n\n async def model_metadata(self, model_name: str, model_version: Optional[str] = None) -> ModelMetadataResponse:\n \"\"\"Model metadata handler. 
It provides information about a model.\n\n Args:\n model_name (str): Model name.\n model_version (Optional[str]): Model version (optional).\n\n Returns:\n ModelMetadataResponse: Model metadata object.\n \"\"\"\n # TODO: support model_version\n if model_version:\n raise NotImplementedError(\"Model versioning not supported yet.\")\n\n metadata = await self.dataplane.model_metadata(model_name)\n return ModelMetadataResponse.parse_obj(metadata)\n\n async def model_ready(self, model_name: str, model_version: Optional[str] = None) -> ModelReadyResponse:\n \"\"\"Check if a given model is ready.\n\n Args:\n model_name (str): Model name.\n model_version (str): Model version.\n\n Returns:\n ModelReadyResponse: Model ready object\n \"\"\"\n # TODO: support model_version\n if model_version:\n raise NotImplementedError(\"Model versioning not supported yet.\")\n\n model_ready = self.dataplane.model_ready(model_name)\n\n if not model_ready:\n raise ModelNotReady(model_name)\n\n return ModelReadyResponse.parse_obj({\"name\": model_name, \"ready\": model_ready})\n\n async def infer(\n self,\n raw_request: Request,\n raw_response: Response,\n model_name: str,\n request_body: InferenceRequest,\n model_version: Optional[str] = None\n ) -> InferenceResponse:\n \"\"\"Infer handler.\n\n Args:\n raw_request (Request): fastapi request object,\n raw_response (Response): fastapi response object,\n model_name (str): Model name.\n request_body (InferenceRequest): Inference request body.\n model_version (Optional[str]): Model version (optional).\n\n Returns:\n InferenceResponse: Inference response object.\n \"\"\"\n # TODO: support model_version\n if model_version:\n raise NotImplementedError(\"Model versioning not supported yet.\")\n\n model_ready = self.dataplane.model_ready(model_name)\n\n if not model_ready:\n raise ModelNotReady(model_name)\n\n request_headers = dict(raw_request.headers)\n infer_inputs = [InferInput(name=input.name, shape=input.shape, datatype=input.datatype,\n data=input.data,\n parameters={} if input.parameters is None else input.parameters\n ) for input in request_body.inputs]\n infer_request = InferRequest(model_name=model_name, infer_inputs=infer_inputs,\n parameters=request_body.parameters)\n response, response_headers = await self.dataplane.infer(model_name=model_name,\n request=infer_request,\n headers=request_headers)\n\n response, response_headers = self.dataplane.encode(model_name=model_name,\n response=response,\n headers=response_headers, req_attributes={})\n\n if response_headers:\n raw_response.headers.update(response_headers)\n res = InferenceResponse.parse_obj(response)\n return res\n\n async def load(self, model_name: str) -> Dict:\n \"\"\"Model load handler.\n\n Args:\n model_name (str): Model name.\n\n Returns:\n Dict: {\"name\": model_name, \"load\": True}\n \"\"\"\n await self.model_repository_extension.load(model_name)\n return {\"name\": model_name, \"load\": True}\n\n async def unload(self, model_name: str) -> Dict:\n \"\"\"Model unload handler.\n\n Args:\n model_name (str): Model name.\n\n Returns:\n Dict: {\"name\": model_name, \"unload\": True}\n \"\"\"\n await self.model_repository_extension.unload(model_name)\n return {\"name\": model_name, \"unload\": True}\n", "path": "python/kserve/kserve/protocol/rest/v2_endpoints.py"}]} | 3,185 | 176 |
gh_patches_debug_32152 | rasdani/github-patches | git_diff | pre-commit__pre-commit-166 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Choose python more intelligently in the file installed to .git/hooks/pre-commit
Since we know which python we're running as when `pre-commit install` is run (`sys.executable`), let's put that into `.git/hooks/pre-commit` and try that python first.
This will involve bumping the magic number in the resource file, and probably updating the logic to check for the same sys.executable existing inside that file.
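For illustration, one way to do this is to substitute the interpreter path into the hook template when `pre-commit install` runs; a minimal sketch, assuming the packaged `pre-commit-hook` resource exposes a `{sys_executable}` placeholder (the placeholder name and the `write_hook` helper are illustrative, not the project's actual API):

```python
import io
import sys


def write_hook(template_path, hook_path):
    # Read the packaged hook template and bake in the interpreter that ran
    # `pre-commit install`, so the generated hook tries that python first.
    with io.open(template_path) as template_file:
        contents = template_file.read().format(sys_executable=sys.executable)
    with io.open(hook_path, 'w') as hook_file:
        hook_file.write(contents)
```

Because the written file changes, the identifying hash would also need to be bumped so `is_our_pre_commit` still recognises hooks generated from the new template.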
</issue>
<code>
[start of pre_commit/output.py]
1 from __future__ import unicode_literals
2
3 import subprocess
4 import sys
5
6 from pre_commit import color
7 from pre_commit import five
8
9
10 # TODO: smell: import side-effects
11 COLS = int(
12 subprocess.Popen(
13 ['tput', 'cols'], stdout=subprocess.PIPE
14 ).communicate()[0] or
15 # Default in the case of no terminal
16 80
17 )
18
19
20 def get_hook_message(
21 start,
22 postfix='',
23 end_msg=None,
24 end_len=0,
25 end_color=None,
26 use_color=None,
27 cols=COLS,
28 ):
29 """Prints a message for running a hook.
30
31 This currently supports three approaches:
32
33 # Print `start` followed by dots, leaving 6 characters at the end
34 >>> print_hook_message('start', end_len=6)
35 start...............................................................
36
37 # Print `start` followed by dots with the end message colored if coloring
38 # is specified and a newline afterwards
39 >>> print_hook_message(
40 'start',
41 end_msg='end',
42 end_color=color.RED,
43 use_color=True,
44 )
45 start...................................................................end
46
47 # Print `start` followed by dots, followed by the `postfix` message
48 # uncolored, followed by the `end_msg` colored if specified and a newline
49 # afterwards
50 >>> print_hook_message(
51 'start',
52 postfix='postfix ',
53 end_msg='end',
54 end_color=color.RED,
55 use_color=True,
56 )
57 start...........................................................postfix end
58 """
59 if bool(end_msg) == bool(end_len):
60 raise ValueError('Expected one of (`end_msg`, `end_len`)')
61 if end_msg is not None and (end_color is None or use_color is None):
62 raise ValueError(
63 '`end_color` and `use_color` are required with `end_msg`'
64 )
65
66 if end_len:
67 return start + '.' * (cols - len(start) - end_len - 1)
68 else:
69 return '{0}{1}{2}{3}\n'.format(
70 start,
71 '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
72 postfix,
73 color.format_color(end_msg, end_color, use_color),
74 )
75
76
77 def sys_stdout_write_wrapper(s, stream=sys.stdout):
78 """Python 2.6 chokes on unicode being passed to sys.stdout.write.
79
80 This is an adapter because PY2 is ok with bytes and PY3 requires text.
81 """
82 assert type(s) is five.text
83 if five.PY2: # pragma: no cover (PY2)
84 s = s.encode('UTF-8')
85 stream.write(s)
86
[end of pre_commit/output.py]
[start of pre_commit/commands/install_uninstall.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import logging
6 import os
7 import os.path
8 import stat
9
10 from pre_commit.logging_handler import LoggingHandler
11 from pre_commit.util import resource_filename
12
13
14 logger = logging.getLogger('pre_commit')
15
16
17 # This is used to identify the hook file we install
18 PREVIOUS_IDENTIFYING_HASHES = [
19 'd8ee923c46731b42cd95cc869add4062',
20 ]
21
22
23 IDENTIFYING_HASH = '4d9958c90bc262f47553e2c073f14cfe'
24
25
26 def is_our_pre_commit(filename):
27 return IDENTIFYING_HASH in io.open(filename).read()
28
29
30 def is_previous_pre_commit(filename):
31 contents = io.open(filename).read()
32 return any(hash in contents for hash in PREVIOUS_IDENTIFYING_HASHES)
33
34
35 def make_executable(filename):
36 original_mode = os.stat(filename).st_mode
37 os.chmod(
38 filename,
39 original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,
40 )
41
42
43 def install(runner, overwrite=False, hooks=False):
44 """Install the pre-commit hooks."""
45 pre_commit_file = resource_filename('pre-commit-hook')
46
47 # If we have an existing hook, move it to pre-commit.legacy
48 if (
49 os.path.exists(runner.pre_commit_path) and
50 not is_our_pre_commit(runner.pre_commit_path) and
51 not is_previous_pre_commit(runner.pre_commit_path)
52 ):
53 os.rename(runner.pre_commit_path, runner.pre_commit_legacy_path)
54
55 # If we specify overwrite, we simply delete the legacy file
56 if overwrite and os.path.exists(runner.pre_commit_legacy_path):
57 os.remove(runner.pre_commit_legacy_path)
58 elif os.path.exists(runner.pre_commit_legacy_path):
59 print(
60 'Running in migration mode with existing hooks at {0}\n'
61 'Use -f to use only pre-commit.'.format(
62 runner.pre_commit_legacy_path,
63 )
64 )
65
66 with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
67 pre_commit_file_obj.write(open(pre_commit_file).read())
68 make_executable(runner.pre_commit_path)
69
70 print('pre-commit installed at {0}'.format(runner.pre_commit_path))
71
72 # If they requested we install all of the hooks, do so.
73 if hooks:
74 # Set up our logging handler
75 logger.addHandler(LoggingHandler(False))
76 logger.setLevel(logging.INFO)
77 for repository in runner.repositories:
78 repository.require_installed()
79
80 return 0
81
82
83 def uninstall(runner):
84 """Uninstall the pre-commit hooks."""
85 # If our file doesn't exist or it isn't ours, gtfo.
86 if (
87 not os.path.exists(runner.pre_commit_path) or (
88 not is_our_pre_commit(runner.pre_commit_path) and
89 not is_previous_pre_commit(runner.pre_commit_path)
90 )
91 ):
92 return 0
93
94 os.remove(runner.pre_commit_path)
95 print('pre-commit uninstalled')
96
97 if os.path.exists(runner.pre_commit_legacy_path):
98 os.rename(runner.pre_commit_legacy_path, runner.pre_commit_path)
99 print('Restored previous hooks to {0}'.format(runner.pre_commit_path))
100
101 return 0
102
[end of pre_commit/commands/install_uninstall.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -6,6 +6,7 @@
import os
import os.path
import stat
+import sys
from pre_commit.logging_handler import LoggingHandler
from pre_commit.util import resource_filename
@@ -15,12 +16,13 @@
# This is used to identify the hook file we install
-PREVIOUS_IDENTIFYING_HASHES = [
+PREVIOUS_IDENTIFYING_HASHES = (
+ '4d9958c90bc262f47553e2c073f14cfe',
'd8ee923c46731b42cd95cc869add4062',
-]
+)
-IDENTIFYING_HASH = '4d9958c90bc262f47553e2c073f14cfe'
+IDENTIFYING_HASH = '49fd668cb42069aa1b6048464be5d395'
def is_our_pre_commit(filename):
@@ -63,8 +65,11 @@
)
)
- with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
- pre_commit_file_obj.write(open(pre_commit_file).read())
+ with io.open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
+ contents = io.open(pre_commit_file).read().format(
+ sys_executable=sys.executable,
+ )
+ pre_commit_file_obj.write(contents)
make_executable(runner.pre_commit_path)
print('pre-commit installed at {0}'.format(runner.pre_commit_path))
diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,5 +1,6 @@
from __future__ import unicode_literals
+import os
import subprocess
import sys
@@ -10,7 +11,7 @@
# TODO: smell: import side-effects
COLS = int(
subprocess.Popen(
- ['tput', 'cols'], stdout=subprocess.PIPE
+ ['tput', 'cols'], stdout=subprocess.PIPE, stderr=open(os.devnull, 'w'),
).communicate()[0] or
# Default in the case of no terminal
80
| {"golden_diff": "diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py\n--- a/pre_commit/commands/install_uninstall.py\n+++ b/pre_commit/commands/install_uninstall.py\n@@ -6,6 +6,7 @@\n import os\n import os.path\n import stat\n+import sys\n \n from pre_commit.logging_handler import LoggingHandler\n from pre_commit.util import resource_filename\n@@ -15,12 +16,13 @@\n \n \n # This is used to identify the hook file we install\n-PREVIOUS_IDENTIFYING_HASHES = [\n+PREVIOUS_IDENTIFYING_HASHES = (\n+ '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n-]\n+)\n \n \n-IDENTIFYING_HASH = '4d9958c90bc262f47553e2c073f14cfe'\n+IDENTIFYING_HASH = '49fd668cb42069aa1b6048464be5d395'\n \n \n def is_our_pre_commit(filename):\n@@ -63,8 +65,11 @@\n )\n )\n \n- with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:\n- pre_commit_file_obj.write(open(pre_commit_file).read())\n+ with io.open(runner.pre_commit_path, 'w') as pre_commit_file_obj:\n+ contents = io.open(pre_commit_file).read().format(\n+ sys_executable=sys.executable,\n+ )\n+ pre_commit_file_obj.write(contents)\n make_executable(runner.pre_commit_path)\n \n print('pre-commit installed at {0}'.format(runner.pre_commit_path))\ndiff --git a/pre_commit/output.py b/pre_commit/output.py\n--- a/pre_commit/output.py\n+++ b/pre_commit/output.py\n@@ -1,5 +1,6 @@\n from __future__ import unicode_literals\n \n+import os\n import subprocess\n import sys\n \n@@ -10,7 +11,7 @@\n # TODO: smell: import side-effects\n COLS = int(\n subprocess.Popen(\n- ['tput', 'cols'], stdout=subprocess.PIPE\n+ ['tput', 'cols'], stdout=subprocess.PIPE, stderr=open(os.devnull, 'w'),\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n", "issue": "Choose python more intelligently in the file installed to .git/hooks/pre-commit\nSince we know which python we're running as when `pre-commit install` is run (`sys.executable`), let's put that into `.git/hooks/pre-commit` and try that python first.\n\nThis will involve bumping the magic number in the resource file, and probably updating the logic to check for the same sys.executable existing inside that file.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport subprocess\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n\n# TODO: smell: import side-effects\nCOLS = int(\n subprocess.Popen(\n ['tput', 'cols'], stdout=subprocess.PIPE\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n)\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n 
start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\ndef sys_stdout_write_wrapper(s, stream=sys.stdout):\n \"\"\"Python 2.6 chokes on unicode being passed to sys.stdout.write.\n\n This is an adapter because PY2 is ok with bytes and PY3 requires text.\n \"\"\"\n assert type(s) is five.text\n if five.PY2: # pragma: no cover (PY2)\n s = s.encode('UTF-8')\n stream.write(s)\n", "path": "pre_commit/output.py"}, {"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport logging\nimport os\nimport os.path\nimport stat\n\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.util import resource_filename\n\n\nlogger = logging.getLogger('pre_commit')\n\n\n# This is used to identify the hook file we install\nPREVIOUS_IDENTIFYING_HASHES = [\n 'd8ee923c46731b42cd95cc869add4062',\n]\n\n\nIDENTIFYING_HASH = '4d9958c90bc262f47553e2c073f14cfe'\n\n\ndef is_our_pre_commit(filename):\n return IDENTIFYING_HASH in io.open(filename).read()\n\n\ndef is_previous_pre_commit(filename):\n contents = io.open(filename).read()\n return any(hash in contents for hash in PREVIOUS_IDENTIFYING_HASHES)\n\n\ndef make_executable(filename):\n original_mode = os.stat(filename).st_mode\n os.chmod(\n filename,\n original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n\ndef install(runner, overwrite=False, hooks=False):\n \"\"\"Install the pre-commit hooks.\"\"\"\n pre_commit_file = resource_filename('pre-commit-hook')\n\n # If we have an existing hook, move it to pre-commit.legacy\n if (\n os.path.exists(runner.pre_commit_path) and\n not is_our_pre_commit(runner.pre_commit_path) and\n not is_previous_pre_commit(runner.pre_commit_path)\n ):\n os.rename(runner.pre_commit_path, runner.pre_commit_legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(runner.pre_commit_legacy_path):\n os.remove(runner.pre_commit_legacy_path)\n elif os.path.exists(runner.pre_commit_legacy_path):\n print(\n 'Running in migration mode with existing hooks at {0}\\n'\n 'Use -f to use only pre-commit.'.format(\n runner.pre_commit_legacy_path,\n )\n )\n\n with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:\n pre_commit_file_obj.write(open(pre_commit_file).read())\n make_executable(runner.pre_commit_path)\n\n print('pre-commit installed at {0}'.format(runner.pre_commit_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n # Set up our logging handler\n logger.addHandler(LoggingHandler(False))\n logger.setLevel(logging.INFO)\n for repository in runner.repositories:\n repository.require_installed()\n\n return 0\n\n\ndef uninstall(runner):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n # If our file doesn't exist or it isn't ours, gtfo.\n if (\n not os.path.exists(runner.pre_commit_path) or (\n not is_our_pre_commit(runner.pre_commit_path) and\n not is_previous_pre_commit(runner.pre_commit_path)\n )\n ):\n return 0\n\n os.remove(runner.pre_commit_path)\n 
print('pre-commit uninstalled')\n\n if os.path.exists(runner.pre_commit_legacy_path):\n os.rename(runner.pre_commit_legacy_path, runner.pre_commit_path)\n print('Restored previous hooks to {0}'.format(runner.pre_commit_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}]} | 2,330 | 563 |
gh_patches_debug_17960 | rasdani/github-patches | git_diff | PaddlePaddle__PaddleSeg-2277 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug] Dice loss bug
Environment: AI Studio, Tesla V100
paddlepaddle=2.3.0
paddleseg=2.5.0
Bug: In the dice loss forward pass, `ignore_index` is not handled before the labels are converted to one-hot encoding. When the `ignore_index` value is greater than `num_classes` (for example, an ignore index of 255 with 19 classes), CUDA error 719 is raised because the one-hot conversion is invalid.
Code link: https://github.com/PaddlePaddle/PaddleSeg/blob/35a4c4d229df2d4a5ca724ad442bf5e0f75b4823/paddleseg/models/losses/dice_loss.py#L46
The masking step can be moved before the one-hot conversion, and `ignore_index` can then be reassigned to a value smaller than `num_classes`:
```python
def forward(self, logits, labels):
num_class = logits.shape[1]
if self.weight is not None:
assert num_class == len(self.weight), \
"The lenght of weight should be euqal to the num class"
if logits.shape != labels.shape:
labels = labels.unsqueeze(axis=1)
labels = F.interpolate(labels, size=logits.shape[2:], mode='nearest')
labels = labels.squeeze(axis=1)
logits = F.softmax(logits, axis=1)
mask = labels != self.ignore_index
mask = paddle.cast(paddle.unsqueeze(mask, 1), 'float32')
labels[labels == self.ignore_index] = 0
labels_one_hot = F.one_hot(labels, num_class)
labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])
dice_loss = 0.0
for i in range(num_class):
dice_loss_i = dice_loss_helper(logits[:, i], labels_one_hot[:, i],
mask, self.smooth, self.eps)
if self.weight is not None:
dice_loss_i *= self.weight[i]
dice_loss += dice_loss_i
dice_loss = dice_loss / num_class
return dice_loss
```
</issue>
<code>
[start of paddleseg/models/losses/dice_loss.py]
1 # you may not use this file except in compliance with the License.
2 # You may obtain a copy of the License at
3 #
4 # http://www.apache.org/licenses/LICENSE-2.0
5 #
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 import paddle
13 from paddle import nn
14 import paddle.nn.functional as F
15
16 from paddleseg.cvlibs import manager
17
18
19 @manager.LOSSES.add_component
20 class DiceLoss(nn.Layer):
21 """
22 The implements of the dice loss.
23
24 Args:
25 weight (list[float], optional): The weight for each class. Default: None.
26 ignore_index (int64): ignore_index (int64, optional): Specifies a target value that
27 is ignored and does not contribute to the input gradient. Default ``255``.
28 smooth (float32): Laplace smoothing to smooth dice loss and accelerate convergence.
29 Default: 1.0
30 """
31
32 def __init__(self, weight=None, ignore_index=255, smooth=1.0):
33 super().__init__()
34 self.weight = weight
35 self.ignore_index = ignore_index
36 self.smooth = smooth
37 self.eps = 1e-8
38
39 def forward(self, logits, labels):
40 num_class = logits.shape[1]
41 if self.weight is not None:
42 assert num_class == len(self.weight), \
43 "The lenght of weight should be euqal to the num class"
44
45 logits = F.softmax(logits, axis=1)
46 labels_one_hot = F.one_hot(labels, num_class)
47 labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])
48
49 mask = labels != self.ignore_index
50 mask = paddle.cast(paddle.unsqueeze(mask, 1), 'float32')
51
52 dice_loss = 0.0
53 for i in range(num_class):
54 dice_loss_i = dice_loss_helper(logits[:, i], labels_one_hot[:, i],
55 mask, self.smooth, self.eps)
56 if self.weight is not None:
57 dice_loss_i *= self.weight[i]
58 dice_loss += dice_loss_i
59 dice_loss = dice_loss / num_class
60
61 return dice_loss
62
63
64 def dice_loss_helper(logit, label, mask, smooth, eps):
65 assert logit.shape == label.shape, \
66 "The shape of logit and label should be the same"
67 logit = paddle.reshape(logit, [0, -1])
68 label = paddle.reshape(label, [0, -1])
69 mask = paddle.reshape(mask, [0, -1])
70 logit *= mask
71 label *= mask
72 intersection = paddle.sum(logit * label, axis=1)
73 cardinality = paddle.sum(logit + label, axis=1)
74 dice_loss = 1 - (2 * intersection + smooth) / (cardinality + smooth + eps)
75 dice_loss = dice_loss.mean()
76 return dice_loss
77
[end of paddleseg/models/losses/dice_loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/paddleseg/models/losses/dice_loss.py b/paddleseg/models/losses/dice_loss.py
--- a/paddleseg/models/losses/dice_loss.py
+++ b/paddleseg/models/losses/dice_loss.py
@@ -42,13 +42,14 @@
assert num_class == len(self.weight), \
"The lenght of weight should be euqal to the num class"
- logits = F.softmax(logits, axis=1)
- labels_one_hot = F.one_hot(labels, num_class)
- labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])
-
mask = labels != self.ignore_index
mask = paddle.cast(paddle.unsqueeze(mask, 1), 'float32')
+ labels[labels == self.ignore_index] = 0
+ labels_one_hot = F.one_hot(labels, num_class)
+ labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])
+ logits = F.softmax(logits, axis=1)
+
dice_loss = 0.0
for i in range(num_class):
dice_loss_i = dice_loss_helper(logits[:, i], labels_one_hot[:, i],
| {"golden_diff": "diff --git a/paddleseg/models/losses/dice_loss.py b/paddleseg/models/losses/dice_loss.py\n--- a/paddleseg/models/losses/dice_loss.py\n+++ b/paddleseg/models/losses/dice_loss.py\n@@ -42,13 +42,14 @@\n assert num_class == len(self.weight), \\\n \"The lenght of weight should be euqal to the num class\"\n \n- logits = F.softmax(logits, axis=1)\n- labels_one_hot = F.one_hot(labels, num_class)\n- labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])\n-\n mask = labels != self.ignore_index\n mask = paddle.cast(paddle.unsqueeze(mask, 1), 'float32')\n \n+ labels[labels == self.ignore_index] = 0\n+ labels_one_hot = F.one_hot(labels, num_class)\n+ labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])\n+ logits = F.softmax(logits, axis=1)\n+\n dice_loss = 0.0\n for i in range(num_class):\n dice_loss_i = dice_loss_helper(logits[:, i], labels_one_hot[:, i],\n", "issue": "[Bug] Dice loss bug\n\u73af\u5883\uff1aaistudio tesla v100\r\npaddlepaddle=2.3.0\r\npaddleseg=2.5.0\r\n\r\nbug\uff1adice loss \u524d\u5411\u4f20\u64ad\u65f6\uff0clabel\u8f6c\u6362\u4e3aone-hot\u7f16\u7801\u524d\uff0c\u5e76\u672a\u5bf9ignore_index\u8fdb\u884c\u5904\u7406\uff0c\u5f53ignore_index\u503c\u5927\u4e8enum_classes\uff08\u6bd4\u5982ignore index\u4e3a255\uff0c\u7c7b\u522b\u4e3a19\uff09\u65f6\uff0c\u62a5\u9519cuda 719\u3002\u539f\u56e0\u4e3aone-hot\u8f6c\u6362\u9519\u8bef\u3002 \r\n\u4ee3\u7801\u94fe\u63a5\uff1ahttps://github.com/PaddlePaddle/PaddleSeg/blob/35a4c4d229df2d4a5ca724ad442bf5e0f75b4823/paddleseg/models/losses/dice_loss.py#L46\r\n\r\n\u53ef\u5c06mask\u90e8\u5206\u653e\u5230one-hot\u4e4b\u524d\uff0c\u7136\u540e\u5c06ignore_index\u8d4b\u503c\u4e00\u4e2a\u5c0f\u4e8enum_classes\u7684\u503c\uff1a\r\n```python\r\n def forward(self, logits, labels):\r\n num_class = logits.shape[1]\r\n if self.weight is not None:\r\n assert num_class == len(self.weight), \\\r\n \"The lenght of weight should be euqal to the num class\"\r\n if logits.shape != labels.shape:\r\n labels = labels.unsqueeze(axis=1)\r\n labels = F.interpolate(labels, size=logits.shape[2:], mode='nearest')\r\n labels = labels.squeeze(axis=1)\r\n logits = F.softmax(logits, axis=1)\r\n mask = labels != self.ignore_index\r\n mask = paddle.cast(paddle.unsqueeze(mask, 1), 'float32')\r\n labels[labels == self.ignore_index] = 0\r\n labels_one_hot = F.one_hot(labels, num_class)\r\n labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])\r\n \r\n dice_loss = 0.0\r\n for i in range(num_class):\r\n dice_loss_i = dice_loss_helper(logits[:, i], labels_one_hot[:, i],\r\n mask, self.smooth, self.eps)\r\n if self.weight is not None:\r\n dice_loss_i *= self.weight[i]\r\n dice_loss += dice_loss_i\r\n dice_loss = dice_loss / num_class\r\n\r\n return dice_loss\r\n```\r\n\n", "before_files": [{"content": "# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport paddle\nfrom paddle import nn\nimport paddle.nn.functional as F\n\nfrom paddleseg.cvlibs import manager\n\n\[email protected]_component\nclass DiceLoss(nn.Layer):\n \"\"\"\n The implements of the dice loss.\n\n Args:\n weight (list[float], optional): The 
weight for each class. Default: None.\n ignore_index (int64): ignore_index (int64, optional): Specifies a target value that\n is ignored and does not contribute to the input gradient. Default ``255``.\n smooth (float32): Laplace smoothing to smooth dice loss and accelerate convergence.\n Default: 1.0\n \"\"\"\n\n def __init__(self, weight=None, ignore_index=255, smooth=1.0):\n super().__init__()\n self.weight = weight\n self.ignore_index = ignore_index\n self.smooth = smooth\n self.eps = 1e-8\n\n def forward(self, logits, labels):\n num_class = logits.shape[1]\n if self.weight is not None:\n assert num_class == len(self.weight), \\\n \"The lenght of weight should be euqal to the num class\"\n\n logits = F.softmax(logits, axis=1)\n labels_one_hot = F.one_hot(labels, num_class)\n labels_one_hot = paddle.transpose(labels_one_hot, [0, 3, 1, 2])\n\n mask = labels != self.ignore_index\n mask = paddle.cast(paddle.unsqueeze(mask, 1), 'float32')\n\n dice_loss = 0.0\n for i in range(num_class):\n dice_loss_i = dice_loss_helper(logits[:, i], labels_one_hot[:, i],\n mask, self.smooth, self.eps)\n if self.weight is not None:\n dice_loss_i *= self.weight[i]\n dice_loss += dice_loss_i\n dice_loss = dice_loss / num_class\n\n return dice_loss\n\n\ndef dice_loss_helper(logit, label, mask, smooth, eps):\n assert logit.shape == label.shape, \\\n \"The shape of logit and label should be the same\"\n logit = paddle.reshape(logit, [0, -1])\n label = paddle.reshape(label, [0, -1])\n mask = paddle.reshape(mask, [0, -1])\n logit *= mask\n label *= mask\n intersection = paddle.sum(logit * label, axis=1)\n cardinality = paddle.sum(logit + label, axis=1)\n dice_loss = 1 - (2 * intersection + smooth) / (cardinality + smooth + eps)\n dice_loss = dice_loss.mean()\n return dice_loss\n", "path": "paddleseg/models/losses/dice_loss.py"}]} | 1,839 | 276 |
gh_patches_debug_10457 | rasdani/github-patches | git_diff | pypi__warehouse-12149 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for mathematical expressions in Markdown
**What's the problem this feature will solve?**
GitHub has [started supporting mathematical expressions in Markdown](https://github.blog/2022-05-19-math-support-in-markdown/).
It would be nice to have the same possibility on PyPI, since oftentimes the `README.md` on GitHub is used as the `long_description` in `setup.py`.
<!-- A clear and concise description of what the problem is. -->
**Describe the solution you'd like**
Implement the same syntax used by GitHub, described [here](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/writing-mathematical-expressions).
<!-- A clear and concise description of what you want to happen. -->
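For reference, the GitHub syntax linked above delimits inline math with single dollar signs and display math with double dollar signs; a small illustrative snippet (the expressions are arbitrary examples):

```
Inline math such as $\sqrt{3x-1}+(1+x)^2$ is rendered in place.

$$
\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
$$
```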
**Additional context**
Example PyPI package for which math expressions are not rendered: https://pypi.org/project/sourcespec/ (section: "Theoretical background").
The original `README.md` on GitHub: https://github.com/SeismicSource/sourcespec/blob/master/README.md
<!-- Add any other context, links, etc. about the feature here. -->
</issue>
<code>
[start of warehouse/csp.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14 import copy
15
16 SELF = "'self'"
17 NONE = "'none'"
18
19
20 def _serialize(policy):
21 return "; ".join(
22 [
23 " ".join([k] + [v2 for v2 in v if v2 is not None])
24 for k, v in sorted(policy.items())
25 ]
26 )
27
28
29 def content_security_policy_tween_factory(handler, registry):
30 def content_security_policy_tween(request):
31 resp = handler(request)
32
33 try:
34 policy = request.find_service(name="csp")
35 except LookupError:
36 policy = collections.defaultdict(list)
37
38 # Replace CSP headers on /simple/ pages.
39 if request.path.startswith("/simple/"):
40 policy = collections.defaultdict(list)
41 policy["sandbox"] = ["allow-top-navigation"]
42 policy["default-src"] = [NONE]
43
44 # We don't want to apply our Content Security Policy to the debug
45 # toolbar, that's not part of our application and it doesn't work with
46 # our restrictive CSP.
47 policy = _serialize(policy).format(request=request)
48 if not request.path.startswith("/_debug_toolbar/") and policy:
49 resp.headers["Content-Security-Policy"] = policy
50
51 return resp
52
53 return content_security_policy_tween
54
55
56 class CSPPolicy(collections.defaultdict):
57 def __init__(self, policy=None):
58 super().__init__(list, policy or {})
59
60 def merge(self, policy):
61 for key, attrs in policy.items():
62 self[key].extend(attrs)
63
64
65 def csp_factory(_, request):
66 try:
67 return CSPPolicy(copy.deepcopy(request.registry.settings["csp"]))
68 except KeyError:
69 return CSPPolicy({})
70
71
72 def includeme(config):
73 config.register_service_factory(csp_factory, name="csp")
74 # Enable a Content Security Policy
75 config.add_settings(
76 {
77 "csp": {
78 "base-uri": [SELF],
79 "block-all-mixed-content": [],
80 "connect-src": [
81 SELF,
82 "https://api.github.com/repos/",
83 "fastly-insights.com",
84 "*.fastly-insights.com",
85 "*.ethicalads.io",
86 "https://api.pwnedpasswords.com",
87 # Scoped deeply to prevent other scripts calling other CDN resources
88 "https://cdn.jsdelivr.net/npm/[email protected]/es5/sre/mathmaps/",
89 ]
90 + [
91 item
92 for item in [config.registry.settings.get("statuspage.url")]
93 if item
94 ],
95 "default-src": [NONE],
96 "font-src": [SELF, "fonts.gstatic.com"],
97 "form-action": [SELF],
98 "frame-ancestors": [NONE],
99 "frame-src": [NONE],
100 "img-src": [
101 SELF,
102 config.registry.settings["camo.url"],
103 "www.google-analytics.com",
104 "*.fastly-insights.com",
105 "*.ethicalads.io",
106 ],
107 "script-src": [
108 SELF,
109 "www.googletagmanager.com",
110 "www.google-analytics.com",
111 "*.fastly-insights.com",
112 "*.ethicalads.io",
113 # Hash for v1.4.0 of ethicalads.min.js
114 "'sha256-U3hKDidudIaxBDEzwGJApJgPEf2mWk6cfMWghrAa6i0='",
115 "https://cdn.jsdelivr.net/npm/[email protected]/",
116 # Hash for v3.2.2 of MathJax tex-svg.js
117 "'sha256-1CldwzdEg2k1wTmf7s5RWVd7NMXI/7nxxjJM2C4DqII='",
118 ],
119 "style-src": [
120 SELF,
121 "fonts.googleapis.com",
122 "*.ethicalads.io",
123 # Hashes for inline styles generated by v1.4.0 of ethicalads.min.js
124 "'sha256-2YHqZokjiizkHi1Zt+6ar0XJ0OeEy/egBnlm+MDMtrM='",
125 "'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='",
126 # Hashes for inline styles generated by v3.2.2 of MathJax tex-svg.js
127 "'sha256-JLEjeN9e5dGsz5475WyRaoA4eQOdNPxDIeUhclnJDCE='",
128 "'sha256-mQyxHEuwZJqpxCw3SLmc4YOySNKXunyu2Oiz1r3/wAE='",
129 "'sha256-OCf+kv5Asiwp++8PIevKBYSgnNLNUZvxAp4a7wMLuKA='",
130 "'sha256-h5LOiLhk6wiJrGsG5ItM0KimwzWQH/yAcmoJDJL//bY='",
131 ],
132 "worker-src": ["*.fastly-insights.com"],
133 }
134 }
135 )
136 config.add_tween("warehouse.csp.content_security_policy_tween_factory")
137
[end of warehouse/csp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/warehouse/csp.py b/warehouse/csp.py
--- a/warehouse/csp.py
+++ b/warehouse/csp.py
@@ -115,6 +115,9 @@
"https://cdn.jsdelivr.net/npm/[email protected]/",
# Hash for v3.2.2 of MathJax tex-svg.js
"'sha256-1CldwzdEg2k1wTmf7s5RWVd7NMXI/7nxxjJM2C4DqII='",
+ # Hash for MathJax inline config
+ # See warehouse/templates/packaging/detail.html
+ "'sha256-0POaN8stWYQxhzjKS+/eOfbbJ/u4YHO5ZagJvLpMypo='",
],
"style-src": [
SELF,
| {"golden_diff": "diff --git a/warehouse/csp.py b/warehouse/csp.py\n--- a/warehouse/csp.py\n+++ b/warehouse/csp.py\n@@ -115,6 +115,9 @@\n \"https://cdn.jsdelivr.net/npm/[email protected]/\",\n # Hash for v3.2.2 of MathJax tex-svg.js\n \"'sha256-1CldwzdEg2k1wTmf7s5RWVd7NMXI/7nxxjJM2C4DqII='\",\n+ # Hash for MathJax inline config\n+ # See warehouse/templates/packaging/detail.html\n+ \"'sha256-0POaN8stWYQxhzjKS+/eOfbbJ/u4YHO5ZagJvLpMypo='\",\n ],\n \"style-src\": [\n SELF,\n", "issue": "Support for mathematical expressions in Markdown\n**What's the problem this feature will solve?**\r\nGitHub has [started supporting mathematical expressions in Markdown](https://github.blog/2022-05-19-math-support-in-markdown/).\r\nIt would be nice to have the same possibility in PyPi, since oftentimes the `README.md` on GitHub is used as `long_description` in `setup.py`. \r\n<!-- A clear and concise description of what the problem is. -->\r\n\r\n**Describe the solution you'd like**\r\nImplement the same syntax used by GitHub, described [here](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/writing-mathematical-expressions).\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n**Additional context**\r\nExample PyPi package for which math expressions are not rendered: https://pypi.org/project/sourcespec/ (section: \"Theoretical background\").\r\nThe original `README.md` on GitHub: https://github.com/SeismicSource/sourcespec/blob/master/README.md \r\n<!-- Add any other context, links, etc. about the feature here. -->\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport copy\n\nSELF = \"'self'\"\nNONE = \"'none'\"\n\n\ndef _serialize(policy):\n return \"; \".join(\n [\n \" \".join([k] + [v2 for v2 in v if v2 is not None])\n for k, v in sorted(policy.items())\n ]\n )\n\n\ndef content_security_policy_tween_factory(handler, registry):\n def content_security_policy_tween(request):\n resp = handler(request)\n\n try:\n policy = request.find_service(name=\"csp\")\n except LookupError:\n policy = collections.defaultdict(list)\n\n # Replace CSP headers on /simple/ pages.\n if request.path.startswith(\"/simple/\"):\n policy = collections.defaultdict(list)\n policy[\"sandbox\"] = [\"allow-top-navigation\"]\n policy[\"default-src\"] = [NONE]\n\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n policy = _serialize(policy).format(request=request)\n if not request.path.startswith(\"/_debug_toolbar/\") and policy:\n resp.headers[\"Content-Security-Policy\"] = policy\n\n return resp\n\n return content_security_policy_tween\n\n\nclass CSPPolicy(collections.defaultdict):\n def __init__(self, policy=None):\n super().__init__(list, policy or {})\n\n def merge(self, policy):\n for key, attrs in policy.items():\n self[key].extend(attrs)\n\n\ndef csp_factory(_, request):\n try:\n return 
CSPPolicy(copy.deepcopy(request.registry.settings[\"csp\"]))\n except KeyError:\n return CSPPolicy({})\n\n\ndef includeme(config):\n config.register_service_factory(csp_factory, name=\"csp\")\n # Enable a Content Security Policy\n config.add_settings(\n {\n \"csp\": {\n \"base-uri\": [SELF],\n \"block-all-mixed-content\": [],\n \"connect-src\": [\n SELF,\n \"https://api.github.com/repos/\",\n \"fastly-insights.com\",\n \"*.fastly-insights.com\",\n \"*.ethicalads.io\",\n \"https://api.pwnedpasswords.com\",\n # Scoped deeply to prevent other scripts calling other CDN resources\n \"https://cdn.jsdelivr.net/npm/[email protected]/es5/sre/mathmaps/\",\n ]\n + [\n item\n for item in [config.registry.settings.get(\"statuspage.url\")]\n if item\n ],\n \"default-src\": [NONE],\n \"font-src\": [SELF, \"fonts.gstatic.com\"],\n \"form-action\": [SELF],\n \"frame-ancestors\": [NONE],\n \"frame-src\": [NONE],\n \"img-src\": [\n SELF,\n config.registry.settings[\"camo.url\"],\n \"www.google-analytics.com\",\n \"*.fastly-insights.com\",\n \"*.ethicalads.io\",\n ],\n \"script-src\": [\n SELF,\n \"www.googletagmanager.com\",\n \"www.google-analytics.com\",\n \"*.fastly-insights.com\",\n \"*.ethicalads.io\",\n # Hash for v1.4.0 of ethicalads.min.js\n \"'sha256-U3hKDidudIaxBDEzwGJApJgPEf2mWk6cfMWghrAa6i0='\",\n \"https://cdn.jsdelivr.net/npm/[email protected]/\",\n # Hash for v3.2.2 of MathJax tex-svg.js\n \"'sha256-1CldwzdEg2k1wTmf7s5RWVd7NMXI/7nxxjJM2C4DqII='\",\n ],\n \"style-src\": [\n SELF,\n \"fonts.googleapis.com\",\n \"*.ethicalads.io\",\n # Hashes for inline styles generated by v1.4.0 of ethicalads.min.js\n \"'sha256-2YHqZokjiizkHi1Zt+6ar0XJ0OeEy/egBnlm+MDMtrM='\",\n \"'sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='\",\n # Hashes for inline styles generated by v3.2.2 of MathJax tex-svg.js\n \"'sha256-JLEjeN9e5dGsz5475WyRaoA4eQOdNPxDIeUhclnJDCE='\",\n \"'sha256-mQyxHEuwZJqpxCw3SLmc4YOySNKXunyu2Oiz1r3/wAE='\",\n \"'sha256-OCf+kv5Asiwp++8PIevKBYSgnNLNUZvxAp4a7wMLuKA='\",\n \"'sha256-h5LOiLhk6wiJrGsG5ItM0KimwzWQH/yAcmoJDJL//bY='\",\n ],\n \"worker-src\": [\"*.fastly-insights.com\"],\n }\n }\n )\n config.add_tween(\"warehouse.csp.content_security_policy_tween_factory\")\n", "path": "warehouse/csp.py"}]} | 2,334 | 199 |
gh_patches_debug_21289 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-781 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix definition of `version` and `release` in docs conf.py
Update the definition of the Sphinx project info `version` and `release` so that they are automatically updated with the package version. A good example of how to do this can be seen in Astropy's `conf.py` [here](https://github.com/astropy/astropy/blob/f6a9846b5e7c054878080ba49212a961bab2ff78/docs/conf.py#L111-L114).
```python
# The full version, including alpha/beta/rc tags.
release = pkg_resources.get_distribution(project).version
# The short X.Y version.
version = '.'.join(release.split('.')[:2])
```
This method does create a documentation dependency on `setuptools`.
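If that extra dependency is a concern, a possible alternative is to read the version from the package itself; a minimal sketch, assuming the package exposes `__version__` and falls back to the string `'unknown'` when the version cannot be determined:

```python
from plasmapy import __version__ as release

# If the package version could not be determined, use an empty string so the
# docs still build with a semantic-style (possibly empty) release value.
release = '' if release == 'unknown' else release
# The short X.Y version.
version = '.'.join(release.split('.')[:2])
```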
</issue>
<code>
[start of docs/conf.py]
1 #!/usr/bin/env python3.6
2 # -*- coding: utf-8 -*-
3 #
4 # PlasmaPy documentation build configuration file, created by
5 # sphinx-quickstart on Wed May 31 18:16:46 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20
21 import os
22 import sys
23
24 sys.path.insert(0, os.path.abspath('..'))
25
26 # -- General configuration ------------------------------------------------
27
28 # If your documentation needs a minimal Sphinx version, state it here.
29 #
30 # needs_sphinx = '1.0'
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 'sphinx.ext.autodoc',
37 'sphinx.ext.intersphinx',
38 'sphinx.ext.graphviz',
39 'sphinx.ext.mathjax',
40 'sphinx.ext.napoleon',
41 'sphinx_automodapi.automodapi',
42 'sphinx_automodapi.smart_resolver',
43 'sphinx_gallery.gen_gallery',
44 ]
45
46
47 intersphinx_mapping = {
48 'python': ('https://docs.python.org/3', None),
49 'numpy': ('https://docs.scipy.org/doc/numpy', None),
50 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
51 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),
52 'astropy': ('http://docs.astropy.org/en/stable/', None)}
53 # Add any paths that contain templates here, relative to this directory.
54 templates_path = ['_templates']
55
56 # The suffix(es) of source filenames.
57 # You can specify multiple suffix as a list of string:
58 #
59 # source_suffix = ['.rst', '.md']
60 source_suffix = '.rst'
61
62 # The master toctree document.
63 master_doc = 'index'
64
65 # General information about the project.
66 project = 'PlasmaPy'
67 copyright = '2015-2019, PlasmaPy Community'
68 author = 'PlasmaPy Community'
69
70 # The version info for the project you're documenting, acts as replacement for
71 # |version| and |release|, also used in various other places throughout the
72 # built documents.
73 #
74 # The short X.Y version.
75 version = '0.2'
76 # The full version, including alpha/beta/rc tags.
77 release = '0.2.0'
78
79 # The language for content autogenerated by Sphinx. Refer to documentation
80 # for a list of supported languages.
81 #
82 # This is also used if you do content translation via gettext catalogs.
83 # Usually you set "language" from the command line for these cases.
84 language = None
85
86 # List of patterns, relative to source directory, that match files and
87 # directories to ignore when looking for source files.
88 # This patterns also effect to html_static_path and html_extra_path
89 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
90
91 # The name of the Pygments (syntax highlighting) style to use.
92 pygments_style = 'sphinx'
93
94 # If true, `todo` and `todoList` produce output, else they produce nothing.
95 todo_include_todos = False
96
97 default_role = 'obj'
98
99 # -- Options for HTML output ----------------------------------------------
100
101 # The theme to use for HTML and HTML Help pages. See the documentation for
102 # a list of builtin themes.
103 #
104 # html_theme = 'alabaster'
105 # html_theme = 'traditional'
106 # html_theme = 'agogo'
107 html_theme = "sphinx_rtd_theme"
108
109 # Theme options are theme-specific and customize the look and feel of a theme
110 # further. For a list of options available for each theme, see the
111 # documentation.
112 #
113 # html_theme_options = {}
114
115 # Add any paths that contain custom static files (such as style sheets) here,
116 # relative to this directory. They are copied after the builtin static files,
117 # so a file named "default.css" will overwrite the builtin "default.css".
118 # html_static_path = ['_static']
119
120
121 # -- Options for HTMLHelp output ------------------------------------------
122
123 # Output file base name for HTML help builder.
124 htmlhelp_basename = 'PlasmaPydoc'
125
126
127 # -- Options for LaTeX output ---------------------------------------------
128
129 latex_elements = {
130 # The paper size ('letterpaper' or 'a4paper').
131 #
132 # 'papersize': 'letterpaper',
133
134 # The font size ('10pt', '11pt' or '12pt').
135 #
136 # 'pointsize': '10pt',
137
138 # Additional stuff for the LaTeX preamble.
139 #
140 # 'preamble': '',
141
142 # Latex figure (float) alignment
143 #
144 # 'figure_align': 'htbp',
145 }
146
147 # Grouping the document tree into LaTeX files. List of tuples
148 # (source start file, target name, title,
149 # author, documentclass [howto, manual, or own class]).
150 latex_documents = [
151 (master_doc, 'PlasmaPy.tex', 'PlasmaPy Documentation',
152 'PlasmaPy Community', 'manual'),
153 ]
154
155
156 # -- Options for manual page output ---------------------------------------
157
158 # One entry per manual page. List of tuples
159 # (source start file, name, description, authors, manual section).
160 man_pages = [
161 (master_doc, 'plasmapy', 'PlasmaPy Documentation',
162 [author], 1)
163 ]
164
165
166 # -- Options for Texinfo output -------------------------------------------
167
168 # Grouping the document tree into Texinfo files. List of tuples
169 # (source start file, target name, title, author,
170 # dir menu entry, description, category)
171 texinfo_documents = [
172 (master_doc, 'PlasmaPy', 'PlasmaPy Documentation',
173 author, 'PlasmaPy', 'Python package for plasma physics',
174 'Miscellaneous'),
175 ]
176
177 html_favicon = "./_static/icon.ico"
178
179 # -- Options for Sphinx Gallery -----------------
180
181 # Patch sphinx_gallery.binder.gen_binder_rst so as to point to .py file in repository
182 # Original code as per sphinx_gallery, under the BSD 3-Clause license
183
184 import sphinx_gallery.binder
185 def patched_gen_binder_rst(fpath, binder_conf, gallery_conf):
186 """Generate the RST + link for the Binder badge.
187 Parameters
188 ----------
189 fpath: str
190 The path to the `.py` file for which a Binder badge will be generated.
191 binder_conf: dict or None
192 If a dictionary it must have the following keys:
193 'binderhub_url'
194 The URL of the BinderHub instance that's running a Binder service.
195 'org'
196 The GitHub organization to which the documentation will be pushed.
197 'repo'
198 The GitHub repository to which the documentation will be pushed.
199 'branch'
200 The Git branch on which the documentation exists (e.g., gh-pages).
201 'dependencies'
202 A list of paths to dependency files that match the Binderspec.
203 Returns
204 -------
205 rst : str
206 The reStructuredText for the Binder badge that links to this file.
207 """
208 binder_conf = sphinx_gallery.binder.check_binder_conf(binder_conf)
209 binder_url = sphinx_gallery.binder.gen_binder_url(fpath, binder_conf, gallery_conf)
210 binder_url = binder_url.replace(gallery_conf['gallery_dirs'] + os.path.sep, "").replace("ipynb", "py")
211
212 rst = (
213 "\n"
214 " .. container:: binder-badge\n\n"
215 " .. image:: https://mybinder.org/badge_logo.svg\n"
216 " :target: {}\n"
217 " :width: 150 px\n").format(binder_url)
218 return rst
219 sphinx_gallery.binder.gen_binder_rst = patched_gen_binder_rst
220
221 sphinx_gallery_conf = {
222 # path to your examples scripts
223 'examples_dirs': '../plasmapy/examples',
224 # path where to save gallery generated examples
225 'backreferences_dir': 'gen_modules/backreferences',
226 'gallery_dirs': 'auto_examples',
227 'binder': {
228 'org': 'PlasmaPy',
229 'repo': 'PlasmaPy',
230 'branch': 'master',
231 'binderhub_url': 'https://mybinder.org',
232 'dependencies': ['../binder/requirements.txt'],
233 'notebooks_dir': 'plasmapy/examples',
234 }
235 }
236
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -64,17 +64,22 @@
# General information about the project.
project = 'PlasmaPy'
-copyright = '2015-2019, PlasmaPy Community'
+copyright = '2015-2020, PlasmaPy Community'
author = 'PlasmaPy Community'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
-# The short X.Y version.
-version = '0.2'
# The full version, including alpha/beta/rc tags.
-release = '0.2.0'
+# Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.
+# However, release needs to be a semantic style version number, so set
+# the 'unknown' case to ''.
+from plasmapy import __version__ as release
+release = '' if release == 'unknown' else release
+# The short X.Y version.
+version = '.'.join(release.split('.')[:2])
+
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -64,17 +64,22 @@\n \n # General information about the project.\n project = 'PlasmaPy'\n-copyright = '2015-2019, PlasmaPy Community'\n+copyright = '2015-2020, PlasmaPy Community'\n author = 'PlasmaPy Community'\n \n # The version info for the project you're documenting, acts as replacement for\n # |version| and |release|, also used in various other places throughout the\n # built documents.\n #\n-# The short X.Y version.\n-version = '0.2'\n # The full version, including alpha/beta/rc tags.\n-release = '0.2.0'\n+# Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.\n+# However, release needs to be a semantic style version number, so set\n+# the 'unknown' case to ''.\n+from plasmapy import __version__ as release\n+release = '' if release == 'unknown' else release\n+# The short X.Y version.\n+version = '.'.join(release.split('.')[:2])\n+\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\n", "issue": "Fix definition of `version` and `release` in docs conf.py\nUpdate the the definition of sphinx project info `version` and `release`, such that they are automatically updated with package versions. A good example of how to do this can be seen in Astorpy's `conf.py` [here](https://github.com/astropy/astropy/blob/f6a9846b5e7c054878080ba49212a961bab2ff78/docs/conf.py#L111-L114).\r\n\r\n```python\r\n# The full version, including alpha/beta/rc tags.\r\nrelease = pkg_resources.get_distribution(project).version\r\n# The short X.Y version.\r\nversion = '.'.join(release.split('.')[:2])\r\n```\r\n\r\nThis method does create a documentation dependency on `setuptools`.\n", "before_files": [{"content": "#!/usr/bin/env python3.6\n# -*- coding: utf-8 -*-\n#\n# PlasmaPy documentation build configuration file, created by\n# sphinx-quickstart on Wed May 31 18:16:46 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath('..'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.graphviz',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx_automodapi.automodapi',\n 'sphinx_automodapi.smart_resolver',\n 'sphinx_gallery.gen_gallery',\n]\n\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3', None),\n 'numpy': ('https://docs.scipy.org/doc/numpy', None),\n 'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),\n 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),\n 'astropy': ('http://docs.astropy.org/en/stable/', None)}\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'PlasmaPy'\ncopyright = '2015-2019, PlasmaPy Community'\nauthor = 'PlasmaPy Community'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.2'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.2.0'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\ndefault_role = 'obj'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n# html_theme = 'traditional'\n# html_theme = 'agogo'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'PlasmaPydoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'PlasmaPy.tex', 'PlasmaPy Documentation',\n 'PlasmaPy Community', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'plasmapy', 'PlasmaPy Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'PlasmaPy', 'PlasmaPy Documentation',\n author, 'PlasmaPy', 'Python package for plasma physics',\n 'Miscellaneous'),\n]\n\nhtml_favicon = \"./_static/icon.ico\"\n\n# -- Options for Sphinx Gallery -----------------\n\n# Patch sphinx_gallery.binder.gen_binder_rst so as to point to .py file in repository\n# Original code as per sphinx_gallery, under the BSD 3-Clause license\n\nimport sphinx_gallery.binder\ndef patched_gen_binder_rst(fpath, binder_conf, gallery_conf):\n \"\"\"Generate the RST + link for the Binder badge.\n Parameters\n ----------\n fpath: str\n The path to the `.py` file for which a Binder badge will be generated.\n binder_conf: dict or None\n If a dictionary it must have the following keys:\n 'binderhub_url'\n The URL of the BinderHub instance that's running a Binder service.\n 'org'\n The GitHub organization to which the documentation will be pushed.\n 'repo'\n The GitHub repository to which the documentation will be pushed.\n 'branch'\n The Git branch on which the documentation exists (e.g., gh-pages).\n 'dependencies'\n A list of paths to dependency files that match the Binderspec.\n Returns\n -------\n rst : str\n The reStructuredText for the Binder badge that links to this file.\n \"\"\"\n binder_conf = sphinx_gallery.binder.check_binder_conf(binder_conf)\n binder_url = sphinx_gallery.binder.gen_binder_url(fpath, binder_conf, gallery_conf)\n binder_url = binder_url.replace(gallery_conf['gallery_dirs'] + os.path.sep, \"\").replace(\"ipynb\", \"py\")\n\n rst = (\n \"\\n\"\n \" .. container:: binder-badge\\n\\n\"\n \" .. 
image:: https://mybinder.org/badge_logo.svg\\n\"\n \" :target: {}\\n\"\n \" :width: 150 px\\n\").format(binder_url)\n return rst\nsphinx_gallery.binder.gen_binder_rst = patched_gen_binder_rst\n\nsphinx_gallery_conf = {\n # path to your examples scripts\n 'examples_dirs': '../plasmapy/examples',\n # path where to save gallery generated examples\n 'backreferences_dir': 'gen_modules/backreferences',\n 'gallery_dirs': 'auto_examples',\n 'binder': {\n 'org': 'PlasmaPy',\n 'repo': 'PlasmaPy',\n 'branch': 'master',\n 'binderhub_url': 'https://mybinder.org',\n 'dependencies': ['../binder/requirements.txt'],\n 'notebooks_dir': 'plasmapy/examples',\n }\n}\n", "path": "docs/conf.py"}]} | 3,226 | 293 |
gh_patches_debug_8258 | rasdani/github-patches | git_diff | SCons__scons-3700 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CompilationDatabase tool emits paths into variant directories for source files when not using absolute paths.
**Describe the bug**
When using the CompilationDatabase tool with `COMPILATIONDB_USE_ABSPATH` == False, the `file` field appears to refer into the variant directory, not the source directory:
`"file": "build/cached/third_party/s2/base/int128.cc",`
Instead of
`"file": "src/third_party/s2/base/int128.cc"`
**Required information**
* Link to SCons Users thread discussing your issue.
Discussed directly with @bdbaddog
* Version of SCons
master
* Version of Python
3.7
* How you installed SCons
source
* What Platform are you on? (Linux/Windows and which version)
macOS, but shouldn't matter.
</issue>
<code>
[start of SCons/Tool/compilation_db.py]
1 """
2 Implements the ability for SCons to emit a compilation database for the MongoDB project. See
3 http://clang.llvm.org/docs/JSONCompilationDatabase.html for details on what a compilation
4 database is, and why you might want one. The only user visible entry point here is
5 'env.CompilationDatabase'. This method takes an optional 'target' to name the file that
6 should hold the compilation database, otherwise, the file defaults to compile_commands.json,
7 which is the name that most clang tools search for by default.
8 """
9
10 # Copyright 2020 MongoDB Inc.
11 #
12 # Permission is hereby granted, free of charge, to any person obtaining
13 # a copy of this software and associated documentation files (the
14 # "Software"), to deal in the Software without restriction, including
15 # without limitation the rights to use, copy, modify, merge, publish,
16 # distribute, sublicense, and/or sell copies of the Software, and to
17 # permit persons to whom the Software is furnished to do so, subject to
18 # the following conditions:
19 #
20 # The above copyright notice and this permission notice shall be included
21 # in all copies or substantial portions of the Software.
22 #
23 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
24 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
25 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
26 # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
27 # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
28 # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
29 # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
30 #
31
32 import json
33 import itertools
34 import SCons
35
36 from .cxx import CXXSuffixes
37 from .cc import CSuffixes
38 from .asm import ASSuffixes, ASPPSuffixes
39
40 # TODO: Is there a better way to do this than this global? Right now this exists so that the
41 # emitter we add can record all of the things it emits, so that the scanner for the top level
42 # compilation database can access the complete list, and also so that the writer has easy
43 # access to write all of the files. But it seems clunky. How can the emitter and the scanner
44 # communicate more gracefully?
45 __COMPILATION_DB_ENTRIES = []
46
47
48 # We make no effort to avoid rebuilding the entries. Someday, perhaps we could and even
49 # integrate with the cache, but there doesn't seem to be much call for it.
50 class __CompilationDbNode(SCons.Node.Python.Value):
51 def __init__(self, value):
52 SCons.Node.Python.Value.__init__(self, value)
53 self.Decider(changed_since_last_build_node)
54
55
56 def changed_since_last_build_node(child, target, prev_ni, node):
57 """ Dummy decider to force always building"""
58 return True
59
60
61 def make_emit_compilation_DB_entry(comstr):
62 """
63 Effectively this creates a lambda function to capture:
64 * command line
65 * source
66 * target
67 :param comstr: unevaluated command line
68 :return: an emitter which has captured the above
69 """
70 user_action = SCons.Action.Action(comstr)
71
72 def emit_compilation_db_entry(target, source, env):
73 """
74 This emitter will be added to each c/c++ object build to capture the info needed
75 for clang tools
76 :param target: target node(s)
77 :param source: source node(s)
78 :param env: Environment for use building this node
79 :return: target(s), source(s)
80 """
81
82 dbtarget = __CompilationDbNode(source)
83
84 entry = env.__COMPILATIONDB_Entry(
85 target=dbtarget,
86 source=[],
87 __COMPILATIONDB_UOUTPUT=target,
88 __COMPILATIONDB_USOURCE=source,
89 __COMPILATIONDB_UACTION=user_action,
90 __COMPILATIONDB_ENV=env,
91 )
92
93 # TODO: Technically, these next two lines should not be required: it should be fine to
94 # cache the entries. However, they don't seem to update properly. Since they are quick
95 # to re-generate disable caching and sidestep this problem.
96 env.AlwaysBuild(entry)
97 env.NoCache(entry)
98
99 __COMPILATION_DB_ENTRIES.append(dbtarget)
100
101 return target, source
102
103 return emit_compilation_db_entry
104
105
106 def compilation_db_entry_action(target, source, env, **kw):
107 """
108 Create a dictionary with evaluated command line, target, source
109 and store that info as an attribute on the target
110 (Which has been stored in __COMPILATION_DB_ENTRIES array
111 :param target: target node(s)
112 :param source: source node(s)
113 :param env: Environment for use building this node
114 :param kw:
115 :return: None
116 """
117
118 command = env["__COMPILATIONDB_UACTION"].strfunction(
119 target=env["__COMPILATIONDB_UOUTPUT"],
120 source=env["__COMPILATIONDB_USOURCE"],
121 env=env["__COMPILATIONDB_ENV"],
122 )
123
124 entry = {
125 "directory": env.Dir("#").abspath,
126 "command": command,
127 "file": env["__COMPILATIONDB_USOURCE"][0],
128 "output": env['__COMPILATIONDB_UOUTPUT'][0]
129 }
130
131 target[0].write(entry)
132
133
134 def write_compilation_db(target, source, env):
135 entries = []
136
137 use_abspath = env['COMPILATIONDB_USE_ABSPATH'] in [True, 1, 'True', 'true']
138
139 for s in __COMPILATION_DB_ENTRIES:
140 entry = s.read()
141 source_file = entry['file']
142 output_file = entry['output']
143
144 if use_abspath:
145 source_file = source_file.abspath
146 output_file = output_file.abspath
147 else:
148 source_file = source_file.path
149 output_file = output_file.path
150
151 path_entry = {'directory': entry['directory'],
152 'command': entry['command'],
153 'file': source_file,
154 'output': output_file}
155
156 entries.append(path_entry)
157
158 with open(target[0].path, "w") as output_file:
159 json.dump(
160 entries, output_file, sort_keys=True, indent=4, separators=(",", ": ")
161 )
162
163
164 def scan_compilation_db(node, env, path):
165 return __COMPILATION_DB_ENTRIES
166
167
168 def compilation_db_emitter(target, source, env):
169 """ fix up the source/targets """
170
171 # Someone called env.CompilationDatabase('my_targetname.json')
172 if not target and len(source) == 1:
173 target = source
174
175 # Default target name is compilation_db.json
176 if not target:
177 target = ['compile_commands.json', ]
178
179 # No source should have been passed. Drop it.
180 if source:
181 source = []
182
183 return target, source
184
185
186 def generate(env, **kwargs):
187 static_obj, shared_obj = SCons.Tool.createObjBuilders(env)
188
189 env["COMPILATIONDB_COMSTR"] = kwargs.get(
190 "COMPILATIONDB_COMSTR", "Building compilation database $TARGET"
191 )
192
193 components_by_suffix = itertools.chain(
194 itertools.product(
195 CSuffixes,
196 [
197 (static_obj, SCons.Defaults.StaticObjectEmitter, "$CCCOM"),
198 (shared_obj, SCons.Defaults.SharedObjectEmitter, "$SHCCCOM"),
199 ],
200 ),
201 itertools.product(
202 CXXSuffixes,
203 [
204 (static_obj, SCons.Defaults.StaticObjectEmitter, "$CXXCOM"),
205 (shared_obj, SCons.Defaults.SharedObjectEmitter, "$SHCXXCOM"),
206 ],
207 ),
208 itertools.product(
209 ASSuffixes,
210 [(static_obj, SCons.Defaults.StaticObjectEmitter, "$ASCOM")],
211 [(shared_obj, SCons.Defaults.SharedObjectEmitter, "$ASCOM")],
212 ),
213 itertools.product(
214 ASPPSuffixes,
215 [(static_obj, SCons.Defaults.StaticObjectEmitter, "$ASPPCOM")],
216 [(shared_obj, SCons.Defaults.SharedObjectEmitter, "$ASPPCOM")],
217 ),
218 )
219
220 for entry in components_by_suffix:
221 suffix = entry[0]
222 builder, base_emitter, command = entry[1]
223
224 # Assumes a dictionary emitter
225 emitter = builder.emitter.get(suffix, False)
226 if emitter:
227 # We may not have tools installed which initialize all or any of
228 # cxx, cc, or assembly. If not skip resetting the respective emitter.
229 builder.emitter[suffix] = SCons.Builder.ListEmitter(
230 [emitter, make_emit_compilation_DB_entry(command), ]
231 )
232
233 env["BUILDERS"]["__COMPILATIONDB_Entry"] = SCons.Builder.Builder(
234 action=SCons.Action.Action(compilation_db_entry_action, None),
235 )
236
237 env["BUILDERS"]["CompilationDatabase"] = SCons.Builder.Builder(
238 action=SCons.Action.Action(write_compilation_db, "$COMPILATIONDB_COMSTR"),
239 target_scanner=SCons.Scanner.Scanner(
240 function=scan_compilation_db, node_class=None
241 ),
242 emitter=compilation_db_emitter,
243 suffix='json',
244 )
245
246 env['COMPILATIONDB_USE_ABSPATH'] = False
247
248
249 def exists(env):
250 return True
251
[end of SCons/Tool/compilation_db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/SCons/Tool/compilation_db.py b/SCons/Tool/compilation_db.py
--- a/SCons/Tool/compilation_db.py
+++ b/SCons/Tool/compilation_db.py
@@ -142,10 +142,10 @@
output_file = entry['output']
if use_abspath:
- source_file = source_file.abspath
+ source_file = source_file.srcnode().abspath
output_file = output_file.abspath
else:
- source_file = source_file.path
+ source_file = source_file.srcnode().path
output_file = output_file.path
path_entry = {'directory': entry['directory'],
| {"golden_diff": "diff --git a/SCons/Tool/compilation_db.py b/SCons/Tool/compilation_db.py\n--- a/SCons/Tool/compilation_db.py\n+++ b/SCons/Tool/compilation_db.py\n@@ -142,10 +142,10 @@\n output_file = entry['output']\n \n if use_abspath:\n- source_file = source_file.abspath\n+ source_file = source_file.srcnode().abspath\n output_file = output_file.abspath\n else:\n- source_file = source_file.path\n+ source_file = source_file.srcnode().path\n output_file = output_file.path\n \n path_entry = {'directory': entry['directory'],\n", "issue": "CompilationDatabase tool emits paths into variant directories for source files when not using absolute paths.\n**Describe the bug**\r\nWhen using the CompilationDatabase tool with `COMPILATIONDB_USE_ABSPATH` == False, the `file` field appears to refer into the variant directory, not the source directory:\r\n\r\n`\"file\": \"build/cached/third_party/s2/base/int128.cc\",`\r\n\r\nInstead of \r\n\r\n`\"file\": \"src/third_party/s2/base/int128.cc\"`\r\n\r\n**Required information**\r\n* Link to SCons Users thread discussing your issue.\r\nDiscussed directly with @bdbaddog \r\n\r\n* Version of SCons\r\nmaster\r\n\r\n* Version of Python\r\n3.7\r\n\r\n* How you installed SCons\r\nsource\r\n\r\n* What Platform are you on? (Linux/Windows and which version)\r\nmacOS, but shouldn't matter.\r\n\n", "before_files": [{"content": "\"\"\"\nImplements the ability for SCons to emit a compilation database for the MongoDB project. See\nhttp://clang.llvm.org/docs/JSONCompilationDatabase.html for details on what a compilation\ndatabase is, and why you might want one. The only user visible entry point here is\n'env.CompilationDatabase'. This method takes an optional 'target' to name the file that\nshould hold the compilation database, otherwise, the file defaults to compile_commands.json,\nwhich is the name that most clang tools search for by default.\n\"\"\"\n\n# Copyright 2020 MongoDB Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n#\n\nimport json\nimport itertools\nimport SCons\n\nfrom .cxx import CXXSuffixes\nfrom .cc import CSuffixes\nfrom .asm import ASSuffixes, ASPPSuffixes\n\n# TODO: Is there a better way to do this than this global? Right now this exists so that the\n# emitter we add can record all of the things it emits, so that the scanner for the top level\n# compilation database can access the complete list, and also so that the writer has easy\n# access to write all of the files. But it seems clunky. 
How can the emitter and the scanner\n# communicate more gracefully?\n__COMPILATION_DB_ENTRIES = []\n\n\n# We make no effort to avoid rebuilding the entries. Someday, perhaps we could and even\n# integrate with the cache, but there doesn't seem to be much call for it.\nclass __CompilationDbNode(SCons.Node.Python.Value):\n def __init__(self, value):\n SCons.Node.Python.Value.__init__(self, value)\n self.Decider(changed_since_last_build_node)\n\n\ndef changed_since_last_build_node(child, target, prev_ni, node):\n \"\"\" Dummy decider to force always building\"\"\"\n return True\n\n\ndef make_emit_compilation_DB_entry(comstr):\n \"\"\"\n Effectively this creates a lambda function to capture:\n * command line\n * source\n * target\n :param comstr: unevaluated command line\n :return: an emitter which has captured the above\n \"\"\"\n user_action = SCons.Action.Action(comstr)\n\n def emit_compilation_db_entry(target, source, env):\n \"\"\"\n This emitter will be added to each c/c++ object build to capture the info needed\n for clang tools\n :param target: target node(s)\n :param source: source node(s)\n :param env: Environment for use building this node\n :return: target(s), source(s)\n \"\"\"\n\n dbtarget = __CompilationDbNode(source)\n\n entry = env.__COMPILATIONDB_Entry(\n target=dbtarget,\n source=[],\n __COMPILATIONDB_UOUTPUT=target,\n __COMPILATIONDB_USOURCE=source,\n __COMPILATIONDB_UACTION=user_action,\n __COMPILATIONDB_ENV=env,\n )\n\n # TODO: Technically, these next two lines should not be required: it should be fine to\n # cache the entries. However, they don't seem to update properly. Since they are quick\n # to re-generate disable caching and sidestep this problem.\n env.AlwaysBuild(entry)\n env.NoCache(entry)\n\n __COMPILATION_DB_ENTRIES.append(dbtarget)\n\n return target, source\n\n return emit_compilation_db_entry\n\n\ndef compilation_db_entry_action(target, source, env, **kw):\n \"\"\"\n Create a dictionary with evaluated command line, target, source\n and store that info as an attribute on the target\n (Which has been stored in __COMPILATION_DB_ENTRIES array\n :param target: target node(s)\n :param source: source node(s)\n :param env: Environment for use building this node\n :param kw:\n :return: None\n \"\"\"\n\n command = env[\"__COMPILATIONDB_UACTION\"].strfunction(\n target=env[\"__COMPILATIONDB_UOUTPUT\"],\n source=env[\"__COMPILATIONDB_USOURCE\"],\n env=env[\"__COMPILATIONDB_ENV\"],\n )\n\n entry = {\n \"directory\": env.Dir(\"#\").abspath,\n \"command\": command,\n \"file\": env[\"__COMPILATIONDB_USOURCE\"][0],\n \"output\": env['__COMPILATIONDB_UOUTPUT'][0]\n }\n\n target[0].write(entry)\n\n\ndef write_compilation_db(target, source, env):\n entries = []\n\n use_abspath = env['COMPILATIONDB_USE_ABSPATH'] in [True, 1, 'True', 'true']\n\n for s in __COMPILATION_DB_ENTRIES:\n entry = s.read()\n source_file = entry['file']\n output_file = entry['output']\n\n if use_abspath:\n source_file = source_file.abspath\n output_file = output_file.abspath\n else:\n source_file = source_file.path\n output_file = output_file.path\n\n path_entry = {'directory': entry['directory'],\n 'command': entry['command'],\n 'file': source_file,\n 'output': output_file}\n\n entries.append(path_entry)\n\n with open(target[0].path, \"w\") as output_file:\n json.dump(\n entries, output_file, sort_keys=True, indent=4, separators=(\",\", \": \")\n )\n\n\ndef scan_compilation_db(node, env, path):\n return __COMPILATION_DB_ENTRIES\n\n\ndef compilation_db_emitter(target, source, env):\n \"\"\" fix up the 
source/targets \"\"\"\n\n # Someone called env.CompilationDatabase('my_targetname.json')\n if not target and len(source) == 1:\n target = source\n\n # Default target name is compilation_db.json\n if not target:\n target = ['compile_commands.json', ]\n\n # No source should have been passed. Drop it.\n if source:\n source = []\n\n return target, source\n\n\ndef generate(env, **kwargs):\n static_obj, shared_obj = SCons.Tool.createObjBuilders(env)\n\n env[\"COMPILATIONDB_COMSTR\"] = kwargs.get(\n \"COMPILATIONDB_COMSTR\", \"Building compilation database $TARGET\"\n )\n\n components_by_suffix = itertools.chain(\n itertools.product(\n CSuffixes,\n [\n (static_obj, SCons.Defaults.StaticObjectEmitter, \"$CCCOM\"),\n (shared_obj, SCons.Defaults.SharedObjectEmitter, \"$SHCCCOM\"),\n ],\n ),\n itertools.product(\n CXXSuffixes,\n [\n (static_obj, SCons.Defaults.StaticObjectEmitter, \"$CXXCOM\"),\n (shared_obj, SCons.Defaults.SharedObjectEmitter, \"$SHCXXCOM\"),\n ],\n ),\n itertools.product(\n ASSuffixes,\n [(static_obj, SCons.Defaults.StaticObjectEmitter, \"$ASCOM\")],\n [(shared_obj, SCons.Defaults.SharedObjectEmitter, \"$ASCOM\")],\n ),\n itertools.product(\n ASPPSuffixes,\n [(static_obj, SCons.Defaults.StaticObjectEmitter, \"$ASPPCOM\")],\n [(shared_obj, SCons.Defaults.SharedObjectEmitter, \"$ASPPCOM\")],\n ),\n )\n\n for entry in components_by_suffix:\n suffix = entry[0]\n builder, base_emitter, command = entry[1]\n\n # Assumes a dictionary emitter\n emitter = builder.emitter.get(suffix, False)\n if emitter:\n # We may not have tools installed which initialize all or any of\n # cxx, cc, or assembly. If not skip resetting the respective emitter.\n builder.emitter[suffix] = SCons.Builder.ListEmitter(\n [emitter, make_emit_compilation_DB_entry(command), ]\n )\n\n env[\"BUILDERS\"][\"__COMPILATIONDB_Entry\"] = SCons.Builder.Builder(\n action=SCons.Action.Action(compilation_db_entry_action, None),\n )\n\n env[\"BUILDERS\"][\"CompilationDatabase\"] = SCons.Builder.Builder(\n action=SCons.Action.Action(write_compilation_db, \"$COMPILATIONDB_COMSTR\"),\n target_scanner=SCons.Scanner.Scanner(\n function=scan_compilation_db, node_class=None\n ),\n emitter=compilation_db_emitter,\n suffix='json',\n )\n\n env['COMPILATIONDB_USE_ABSPATH'] = False\n\n\ndef exists(env):\n return True\n", "path": "SCons/Tool/compilation_db.py"}]} | 3,375 | 152 |
gh_patches_debug_27081 | rasdani/github-patches | git_diff | sunpy__sunpy-1768 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LYRALightCurve returns level 2 data even when "lev" kwarg is set to 3.
The LYRALightCurve "lev" kwarg is supposed to allow the user to create a lightcurve object with level 2 (sub-second resolution) or level 3 (1-minute resolution). At the moment, level 2 data is used even if lev=3.
</issue>
<code>
[start of sunpy/lightcurve/sources/lyra.py]
1 # -*- coding: utf-8 -*-
2 """Provides programs to process and analyze PROBA2/LYRA data."""
3 from __future__ import absolute_import
4
5 import datetime
6 import sys
7 from collections import OrderedDict
8
9 from matplotlib import pyplot as plt
10 from astropy.io import fits
11 import pandas
12
13 from sunpy.lightcurve import LightCurve
14 from sunpy.time import parse_time
15
16 from sunpy import config
17
18 from sunpy.extern.six.moves import urllib
19
20 TIME_FORMAT = config.get("general", "time_format")
21
22 __all__ = ['LYRALightCurve']
23
24 class LYRALightCurve(LightCurve):
25 """
26 Proba-2 LYRA LightCurve.
27
28 LYRA (Large Yield RAdiometer) is an ultraviolet irradiance radiometer that
29 observes the Sun in four passbands, chosen for their relevance to
30 solar physics, aeronomy and space weather.
31 LYRA is composed of three (redundant) units, each of them constituted of the
32 same four channels:
33
34 * 120-123 nm Lyman-alpha channel
35 * 190-222 nm Herzberg continuum channel
36 * Aluminium filter channel (17-80 nm + a contribution below 5 nm), including He II at 30.4 nm
37 * Zirconium filter channel (6-20 nm + a contribution below 2 nm), rejecting He II
38
39 LYRA can take data with cadences chosen in the 100Hz to 0.1Hz interval.
40
41 PROBA2 was launched on 2 November 2009.
42
43 Examples
44 --------
45 >>> import sunpy
46 >>> lyra = sunpy.lightcurve.LYRALightCurve.create()
47 >>> lyra = sunpy.lightcurve.LYRALightCurve.create('~/Data/lyra/lyra_20110810-000000_lev2_std.fits') # doctest: +SKIP
48 >>> lyra = sunpy.lightcurve.LYRALightCurve.create('2011/08/10')
49 >>> lyra = sunpy.lightcurve.LYRALightCurve.create("http://proba2.oma.be/lyra/data/bsd/2011/08/10/lyra_20110810-000000_lev2_std.fits")
50 >>> lyra.peek() # doctest: +SKIP
51
52 References
53 ----------
54 * `Proba2 SWAP Science Center <http://proba2.sidc.be/about/SWAP/>`_
55 * `LYRA Data Homepage <http://proba2.sidc.be/data/LYRA>`_
56 * `LYRA Instrument Homepage <http://proba2.sidc.be/about/LYRA>`_
57 """
58
59 def peek(self, names=3, **kwargs):
60 """Plots the LYRA data. An example is shown below.
61
62 .. plot::
63
64 import sunpy.lightcurve
65 from sunpy.data.sample import LYRA_LEVEL3_LIGHTCURVE
66 lyra = sunpy.lightcurve.LYRALightCurve.create(LYRA_LEVEL3_LIGHTCURVE)
67 lyra.peek()
68
69 Parameters
70 ----------
71 names : int
72 The number of columns to plot.
73
74 **kwargs : dict
75 Any additional plot arguments that should be used
76 when plotting.
77
78 Returns
79 -------
80 fig : `~matplotlib.Figure`
81 A plot figure.
82 """
83 lyranames = (('Lyman alpha','Herzberg cont.','Al filter','Zr filter'),
84 ('120-123nm','190-222nm','17-80nm + <5nm','6-20nm + <2nm'))
85
86 # Choose title if none was specified
87 #if not kwargs.has_key("title"):
88 # if len(self.data.columns) > 1:
89 # kwargs['title'] = 'LYRA data'
90 # else:
91 # if self._filename is not None:
92 # base = self._filename
93 # kwargs['title'] = os.path.splitext(base)[0]
94 # else:
95 # kwargs['title'] = 'LYRA data'
96 figure = plt.figure()
97 plt.subplots_adjust(left=0.17,top=0.94,right=0.94,bottom=0.15)
98 axes = plt.gca()
99
100 axes = self.data.plot(ax=axes, subplots=True, sharex=True, **kwargs)
101
102 for i, name in enumerate(self.data.columns):
103 if names < 3:
104 name = lyranames[names][i]
105 else:
106 name = lyranames[0][i] + ' \n (' + lyranames[1][i] + ')'
107 axes[i].set_ylabel( "{name} \n (W/m**2)".format(name=name), fontsize=9.5)
108
109 axes[0].set_title("LYRA ({0:{1}})".format(self.data.index[0],TIME_FORMAT))
110 axes[-1].set_xlabel("Time")
111 for axe in axes:
112 axe.locator_params(axis='y',nbins=6)
113
114 figure.show()
115
116 return figure
117
118 @staticmethod
119 def _get_url_for_date(date, **kwargs):
120 """Returns a URL to the LYRA data for the specified date"""
121 dt = parse_time(date or datetime.datetime.utcnow())
122
123 # Filename
124 filename = "lyra_{0:%Y%m%d-}000000_lev{1:d}_std.fits".format(
125 dt, kwargs.get('level', 2))
126 # URL
127 base_url = "http://proba2.oma.be/lyra/data/bsd/"
128 url_path = urllib.parse.urljoin(dt.strftime('%Y/%m/%d/'), filename)
129 return urllib.parse.urljoin(base_url, url_path)
130
131 @classmethod
132 def _get_default_uri(cls):
133 """Returns URL for latest LYRA data"""
134 return cls._get_url_for_date(datetime.datetime.utcnow())
135
136 @staticmethod
137 def _parse_fits(filepath):
138 """Parses LYRA data from a FITS file"""
139 # Open file with PyFITS
140 hdulist = fits.open(filepath)
141 fits_record = hdulist[1].data
142 # secondary_header = hdulist[1].header
143
144 # Start and end dates. Different LYRA FITS files have
145 # different tags for the date obs.
146 if 'date-obs' in hdulist[0].header:
147 start_str = hdulist[0].header['date-obs']
148 elif 'date_obs' in hdulist[0].header:
149 start_str = hdulist[0].header['date_obs']
150 # end_str = hdulist[0].header['date-end']
151
152 # start = datetime.datetime.strptime(start_str, '%Y-%m-%dT%H:%M:%S.%f')
153 start = parse_time(start_str)
154 # end = datetime.datetime.strptime(end_str, '%Y-%m-%dT%H:%M:%S.%f')
155
156 # First column are times. For level 2 data, the units are [s].
157 # For level 3 data, the units are [min]
158 if hdulist[1].header['TUNIT1'] == 's':
159 times = [start + datetime.timedelta(seconds=n)
160 for n in fits_record.field(0)]
161 elif hdulist[1].header['TUNIT1'] == 'MIN':
162 times = [start + datetime.timedelta(minutes=int(n))
163 for n in fits_record.field(0)]
164 else:
165 raise ValueError("Time unit in LYRA fits file not recognised. "
166 "Value = {0}".format(hdulist[1].header['TUNIT1']))
167
168 # Rest of columns are the data
169 table = {}
170
171 for i, col in enumerate(fits_record.columns[1:-1]):
172 # temporary patch for big-endian data bug on pandas 0.13
173 if fits_record.field(i+1).dtype.byteorder == '>' and sys.byteorder =='little':
174 table[col.name] = fits_record.field(i + 1).byteswap().newbyteorder()
175 else:
176 table[col.name] = fits_record.field(i + 1)
177
178 # Return the header and the data
179 data = pandas.DataFrame(table, index=times)
180 data.sort_index(inplace=True)
181 return OrderedDict(hdulist[0].header), data
182
[end of sunpy/lightcurve/sources/lyra.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sunpy/lightcurve/sources/lyra.py b/sunpy/lightcurve/sources/lyra.py
--- a/sunpy/lightcurve/sources/lyra.py
+++ b/sunpy/lightcurve/sources/lyra.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""Provides programs to process and analyze PROBA2/LYRA data."""
-from __future__ import absolute_import
+from __future__ import absolute_import, division, print_function
import datetime
import sys
@@ -40,12 +40,18 @@
PROBA2 was launched on 2 November 2009.
+ This class can download and hold either Level 2 data (the default) which
+ has sub-second resolution or Level 3 which is the Level 2 data averaged to
+ one minute cadence. The level can be specified with the ``level`` keyword
+ argument to `~sunpy.lightcurve.LyraLightCurve.create`.
+
Examples
--------
>>> import sunpy
>>> lyra = sunpy.lightcurve.LYRALightCurve.create()
>>> lyra = sunpy.lightcurve.LYRALightCurve.create('~/Data/lyra/lyra_20110810-000000_lev2_std.fits') # doctest: +SKIP
>>> lyra = sunpy.lightcurve.LYRALightCurve.create('2011/08/10')
+ >>> lyra = sunpy.lightcurve.LYRALightCurve.create('2011/08/10', level=3)
>>> lyra = sunpy.lightcurve.LYRALightCurve.create("http://proba2.oma.be/lyra/data/bsd/2011/08/10/lyra_20110810-000000_lev2_std.fits")
>>> lyra.peek() # doctest: +SKIP
| {"golden_diff": "diff --git a/sunpy/lightcurve/sources/lyra.py b/sunpy/lightcurve/sources/lyra.py\n--- a/sunpy/lightcurve/sources/lyra.py\n+++ b/sunpy/lightcurve/sources/lyra.py\n@@ -1,6 +1,6 @@\n # -*- coding: utf-8 -*-\n \"\"\"Provides programs to process and analyze PROBA2/LYRA data.\"\"\"\n-from __future__ import absolute_import\n+from __future__ import absolute_import, division, print_function\n \n import datetime\n import sys\n@@ -40,12 +40,18 @@\n \n PROBA2 was launched on 2 November 2009.\n \n+ This class can download and hold either Level 2 data (the default) which\n+ has sub-second resolution or Level 3 which is the Level 2 data averaged to\n+ one minute cadence. The level can be specified with the ``level`` keyword\n+ argument to `~sunpy.lightcurve.LyraLightCurve.create`.\n+\n Examples\n --------\n >>> import sunpy\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create()\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create('~/Data/lyra/lyra_20110810-000000_lev2_std.fits') # doctest: +SKIP\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create('2011/08/10')\n+ >>> lyra = sunpy.lightcurve.LYRALightCurve.create('2011/08/10', level=3)\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create(\"http://proba2.oma.be/lyra/data/bsd/2011/08/10/lyra_20110810-000000_lev2_std.fits\")\n >>> lyra.peek() # doctest: +SKIP\n", "issue": "LYRALightCurve returns level 2 data even when \"lev\" kwarg is set to 3.\nThe LYRALightCurve \"lev\" kwarg is supposed to allow the user to create a lightcurve object with level 2 (sub-second resolution) or level 3 (1-minute resolution). At the moment, level 2 data is used even if lev=3.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Provides programs to process and analyze PROBA2/LYRA data.\"\"\"\nfrom __future__ import absolute_import\n\nimport datetime\nimport sys\nfrom collections import OrderedDict\n\nfrom matplotlib import pyplot as plt\nfrom astropy.io import fits\nimport pandas\n\nfrom sunpy.lightcurve import LightCurve\nfrom sunpy.time import parse_time\n\nfrom sunpy import config\n\nfrom sunpy.extern.six.moves import urllib\n\nTIME_FORMAT = config.get(\"general\", \"time_format\")\n\n__all__ = ['LYRALightCurve']\n\nclass LYRALightCurve(LightCurve):\n \"\"\"\n Proba-2 LYRA LightCurve.\n\n LYRA (Large Yield RAdiometer) is an ultraviolet irradiance radiometer that\n observes the Sun in four passbands, chosen for their relevance to\n solar physics, aeronomy and space weather.\n LYRA is composed of three (redundant) units, each of them constituted of the\n same four channels:\n\n * 120-123 nm Lyman-alpha channel\n * 190-222 nm Herzberg continuum channel\n * Aluminium filter channel (17-80 nm + a contribution below 5 nm), including He II at 30.4 nm\n * Zirconium filter channel (6-20 nm + a contribution below 2 nm), rejecting He II\n\n LYRA can take data with cadences chosen in the 100Hz to 0.1Hz interval.\n\n PROBA2 was launched on 2 November 2009.\n\n Examples\n --------\n >>> import sunpy\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create()\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create('~/Data/lyra/lyra_20110810-000000_lev2_std.fits') # doctest: +SKIP\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create('2011/08/10')\n >>> lyra = sunpy.lightcurve.LYRALightCurve.create(\"http://proba2.oma.be/lyra/data/bsd/2011/08/10/lyra_20110810-000000_lev2_std.fits\")\n >>> lyra.peek() # doctest: +SKIP\n\n References\n ----------\n * `Proba2 SWAP Science Center <http://proba2.sidc.be/about/SWAP/>`_\n * `LYRA Data Homepage 
<http://proba2.sidc.be/data/LYRA>`_\n * `LYRA Instrument Homepage <http://proba2.sidc.be/about/LYRA>`_\n \"\"\"\n\n def peek(self, names=3, **kwargs):\n \"\"\"Plots the LYRA data. An example is shown below.\n\n .. plot::\n\n import sunpy.lightcurve\n from sunpy.data.sample import LYRA_LEVEL3_LIGHTCURVE\n lyra = sunpy.lightcurve.LYRALightCurve.create(LYRA_LEVEL3_LIGHTCURVE)\n lyra.peek()\n\n Parameters\n ----------\n names : int\n The number of columns to plot.\n\n **kwargs : dict\n Any additional plot arguments that should be used\n when plotting.\n\n Returns\n -------\n fig : `~matplotlib.Figure`\n A plot figure.\n \"\"\"\n lyranames = (('Lyman alpha','Herzberg cont.','Al filter','Zr filter'),\n ('120-123nm','190-222nm','17-80nm + <5nm','6-20nm + <2nm'))\n\n # Choose title if none was specified\n #if not kwargs.has_key(\"title\"):\n # if len(self.data.columns) > 1:\n # kwargs['title'] = 'LYRA data'\n # else:\n # if self._filename is not None:\n # base = self._filename\n # kwargs['title'] = os.path.splitext(base)[0]\n # else:\n # kwargs['title'] = 'LYRA data'\n figure = plt.figure()\n plt.subplots_adjust(left=0.17,top=0.94,right=0.94,bottom=0.15)\n axes = plt.gca()\n\n axes = self.data.plot(ax=axes, subplots=True, sharex=True, **kwargs)\n\n for i, name in enumerate(self.data.columns):\n if names < 3:\n name = lyranames[names][i]\n else:\n name = lyranames[0][i] + ' \\n (' + lyranames[1][i] + ')'\n axes[i].set_ylabel( \"{name} \\n (W/m**2)\".format(name=name), fontsize=9.5)\n\n axes[0].set_title(\"LYRA ({0:{1}})\".format(self.data.index[0],TIME_FORMAT))\n axes[-1].set_xlabel(\"Time\")\n for axe in axes:\n axe.locator_params(axis='y',nbins=6)\n\n figure.show()\n\n return figure\n\n @staticmethod\n def _get_url_for_date(date, **kwargs):\n \"\"\"Returns a URL to the LYRA data for the specified date\"\"\"\n dt = parse_time(date or datetime.datetime.utcnow())\n\n # Filename\n filename = \"lyra_{0:%Y%m%d-}000000_lev{1:d}_std.fits\".format(\n dt, kwargs.get('level', 2))\n # URL\n base_url = \"http://proba2.oma.be/lyra/data/bsd/\"\n url_path = urllib.parse.urljoin(dt.strftime('%Y/%m/%d/'), filename)\n return urllib.parse.urljoin(base_url, url_path)\n\n @classmethod\n def _get_default_uri(cls):\n \"\"\"Returns URL for latest LYRA data\"\"\"\n return cls._get_url_for_date(datetime.datetime.utcnow())\n\n @staticmethod\n def _parse_fits(filepath):\n \"\"\"Parses LYRA data from a FITS file\"\"\"\n # Open file with PyFITS\n hdulist = fits.open(filepath)\n fits_record = hdulist[1].data\n # secondary_header = hdulist[1].header\n\n # Start and end dates. Different LYRA FITS files have\n # different tags for the date obs.\n if 'date-obs' in hdulist[0].header:\n start_str = hdulist[0].header['date-obs']\n elif 'date_obs' in hdulist[0].header:\n start_str = hdulist[0].header['date_obs']\n # end_str = hdulist[0].header['date-end']\n\n # start = datetime.datetime.strptime(start_str, '%Y-%m-%dT%H:%M:%S.%f')\n start = parse_time(start_str)\n # end = datetime.datetime.strptime(end_str, '%Y-%m-%dT%H:%M:%S.%f')\n\n # First column are times. For level 2 data, the units are [s].\n # For level 3 data, the units are [min]\n if hdulist[1].header['TUNIT1'] == 's':\n times = [start + datetime.timedelta(seconds=n)\n for n in fits_record.field(0)]\n elif hdulist[1].header['TUNIT1'] == 'MIN':\n times = [start + datetime.timedelta(minutes=int(n))\n for n in fits_record.field(0)]\n else:\n raise ValueError(\"Time unit in LYRA fits file not recognised. 
\"\n \"Value = {0}\".format(hdulist[1].header['TUNIT1']))\n\n # Rest of columns are the data\n table = {}\n\n for i, col in enumerate(fits_record.columns[1:-1]):\n # temporary patch for big-endian data bug on pandas 0.13\n if fits_record.field(i+1).dtype.byteorder == '>' and sys.byteorder =='little':\n table[col.name] = fits_record.field(i + 1).byteswap().newbyteorder()\n else:\n table[col.name] = fits_record.field(i + 1)\n\n # Return the header and the data\n data = pandas.DataFrame(table, index=times)\n data.sort_index(inplace=True)\n return OrderedDict(hdulist[0].header), data\n", "path": "sunpy/lightcurve/sources/lyra.py"}]} | 2,913 | 441 |
gh_patches_debug_531 | rasdani/github-patches | git_diff | joke2k__faker-1569 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
too long iban generated for pl-PL locale
* Faker version: 9.8.2
* OS: MacOs 12.0.1
IBANs generated for pl_PL locales are 30 characters long. This is too many. Valid PL IBAN should have 28 characters (including country code).
### Steps to reproduce
Generate a Polish IBAN with:
```
from faker import Faker
fake=Faker('pl-PL')
print(fake.iban())
```
Copy paste generated string into IBAN Validator at https://www.ibancalculator.com/
### Expected behavior
IBAN should have the correct length and checksum
### Actual behavior
There is an error message that the IBAN has too many characters:
"This IBAN cannot be correct because of its length. A Polish IBAN always contains exactly 28 digits and letters ("PL", a 2-digit checksum, and the 24-digit national account number, whose first 8 digits determine the bank and branch). The IBAN you entered is 30 characters long."
</issue>
<code>
[start of faker/providers/bank/pl_PL/__init__.py]
1 from .. import Provider as BankProvider
2
3
4 class Provider(BankProvider):
5 """Implement bank provider for ``pl_PL`` locale."""
6
7 bban_format = "#" * 26
8 country_code = "PL"
9
[end of faker/providers/bank/pl_PL/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/bank/pl_PL/__init__.py b/faker/providers/bank/pl_PL/__init__.py
--- a/faker/providers/bank/pl_PL/__init__.py
+++ b/faker/providers/bank/pl_PL/__init__.py
@@ -4,5 +4,5 @@
class Provider(BankProvider):
"""Implement bank provider for ``pl_PL`` locale."""
- bban_format = "#" * 26
+ bban_format = "#" * 24
country_code = "PL"
| {"golden_diff": "diff --git a/faker/providers/bank/pl_PL/__init__.py b/faker/providers/bank/pl_PL/__init__.py\n--- a/faker/providers/bank/pl_PL/__init__.py\n+++ b/faker/providers/bank/pl_PL/__init__.py\n@@ -4,5 +4,5 @@\n class Provider(BankProvider):\n \"\"\"Implement bank provider for ``pl_PL`` locale.\"\"\"\n \n- bban_format = \"#\" * 26\n+ bban_format = \"#\" * 24\n country_code = \"PL\"\n", "issue": "too long iban generated for pl-PL locale\n* Faker version: 9.8.2\r\n* OS: MacOs 12.0.1\r\n\r\nIBANs generated for pl_PL locales are 30 characters long. This is too many. Valid PL IBAN should have 28 characters (including country code).\r\n\r\n### Steps to reproduce\r\nGenerate a Polish IBAN with:\r\n```\r\nfrom faker import Faker\r\n fake=Faker('pl-PL')\r\n print(fake.iban())\r\n```\r\nCopy paste generated string into IBAN Validator at https://www.ibancalculator.com/\r\n### Expected behavior\r\n\r\nIBAN should have the correct length and checksum\r\n\r\n### Actual behavior\r\n\r\nThere is an error message that IBAN have too many characters:\r\n\"This IBAN cannot be correct because of its length. A Polish IBAN always contains exactly 28 digits and letters (\"PL\", a 2-digit checksum, and the 24-digit national account number, whose first 8 digits determine the bank and branch). The IBAN you entered is 30 characters long.\"\r\n\ntoo long iban generated for pl-PL locale\n* Faker version: 9.8.2\r\n* OS: MacOs 12.0.1\r\n\r\nIBANs generated for pl_PL locales are 30 characters long. This is too many. Valid PL IBAN should have 28 characters (including country code).\r\n\r\n### Steps to reproduce\r\nGenerate a Polish IBAN with:\r\n```\r\nfrom faker import Faker\r\n fake=Faker('pl-PL')\r\n print(fake.iban())\r\n```\r\nCopy paste generated string into IBAN Validator at https://www.ibancalculator.com/\r\n### Expected behavior\r\n\r\nIBAN should have the correct length and checksum\r\n\r\n### Actual behavior\r\n\r\nThere is an error message that IBAN have too many characters:\r\n\"This IBAN cannot be correct because of its length. A Polish IBAN always contains exactly 28 digits and letters (\"PL\", a 2-digit checksum, and the 24-digit national account number, whose first 8 digits determine the bank and branch). The IBAN you entered is 30 characters long.\"\r\n\n", "before_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n \"\"\"Implement bank provider for ``pl_PL`` locale.\"\"\"\n\n bban_format = \"#\" * 26\n country_code = \"PL\"\n", "path": "faker/providers/bank/pl_PL/__init__.py"}]} | 1,048 | 116 |
gh_patches_debug_17493 | rasdani/github-patches | git_diff | rpm-software-management__dnf-1956 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`email_port` option supported but not used
It looks like the `email_port` option, while parsed in [dnf/automatic/main.py:200](https://github.com/rpm-software-management/dnf/blob/master/dnf/automatic/main.py#L200), is not actually used anywhere.
</issue>
<code>
[start of dnf/automatic/emitter.py]
1 # emitter.py
2 # Emitters for dnf-automatic.
3 #
4 # Copyright (C) 2014-2016 Red Hat, Inc.
5 #
6 # This copyrighted material is made available to anyone wishing to use,
7 # modify, copy, or redistribute it subject to the terms and conditions of
8 # the GNU General Public License v.2, or (at your option) any later version.
9 # This program is distributed in the hope that it will be useful, but WITHOUT
10 # ANY WARRANTY expressed or implied, including the implied warranties of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
12 # Public License for more details. You should have received a copy of the
13 # GNU General Public License along with this program; if not, write to the
14 # Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
15 # 02110-1301, USA. Any Red Hat trademarks that are incorporated in the
16 # source code or documentation are not subject to the GNU General Public
17 # License and may only be used or replicated with the express permission of
18 # Red Hat, Inc.
19 #
20
21 from __future__ import absolute_import
22 from __future__ import print_function
23 from __future__ import unicode_literals
24 from dnf.i18n import _
25 import logging
26 import dnf.pycomp
27 import smtplib
28 import email.utils
29 import subprocess
30 import time
31
32 APPLIED = _("The following updates have been applied on '%s':")
33 APPLIED_TIMESTAMP = _("Updates completed at %s")
34 AVAILABLE = _("The following updates are available on '%s':")
35 DOWNLOADED = _("The following updates were downloaded on '%s':")
36
37 logger = logging.getLogger('dnf')
38
39
40 class Emitter(object):
41 def __init__(self, system_name):
42 self._applied = False
43 self._available_msg = None
44 self._downloaded = False
45 self._system_name = system_name
46 self._trans_msg = None
47
48 def _prepare_msg(self):
49 msg = []
50 if self._applied:
51 msg.append(APPLIED % self._system_name)
52 msg.append(self._available_msg)
53 msg.append(APPLIED_TIMESTAMP % time.strftime("%c"))
54 elif self._downloaded:
55 msg.append(DOWNLOADED % self._system_name)
56 msg.append(self._available_msg)
57 elif self._available_msg:
58 msg.append(AVAILABLE % self._system_name)
59 msg.append(self._available_msg)
60 else:
61 return None
62 return '\n'.join(msg)
63
64 def notify_applied(self):
65 assert self._available_msg
66 self._applied = True
67
68 def notify_available(self, msg):
69 self._available_msg = msg
70
71 def notify_downloaded(self):
72 assert self._available_msg
73 self._downloaded = True
74
75
76 class EmailEmitter(Emitter):
77 def __init__(self, system_name, conf):
78 super(EmailEmitter, self).__init__(system_name)
79 self._conf = conf
80
81 def _prepare_msg(self):
82 if self._applied:
83 subj = _("Updates applied on '%s'.") % self._system_name
84 elif self._downloaded:
85 subj = _("Updates downloaded on '%s'.") % self._system_name
86 elif self._available_msg:
87 subj = _("Updates available on '%s'.") % self._system_name
88 else:
89 return None, None
90 return subj, super(EmailEmitter, self)._prepare_msg()
91
92 def commit(self):
93 subj, body = self._prepare_msg()
94 message = dnf.pycomp.email_mime(body)
95 message.set_charset('utf-8')
96 email_from = self._conf.email_from
97 email_to = self._conf.email_to
98 message['Date'] = email.utils.formatdate()
99 message['From'] = email_from
100 message['Subject'] = subj
101 message['To'] = ','.join(email_to)
102 message['Message-ID'] = email.utils.make_msgid()
103
104 # Send the email
105 try:
106 smtp = smtplib.SMTP(self._conf.email_host, timeout=300)
107 smtp.sendmail(email_from, email_to, message.as_string())
108 smtp.close()
109 except OSError as exc:
110 msg = _("Failed to send an email via '%s': %s") % (
111 self._conf.email_host, exc)
112 logger.error(msg)
113
114
115 class CommandEmitterMixIn(object):
116 """
117 Executes a desired command, and pushes data into its stdin.
118 Both data and command can be formatted according to user preference.
119 For this reason, this class expects a {str:str} dictionary as _prepare_msg
120 return value.
121 Meant for mixing with Emitter classes, as it does not define any names used
122 for formatting on its own.
123 """
124 def commit(self):
125 command_fmt = self._conf.command_format
126 stdin_fmt = self._conf.stdin_format
127 msg = self._prepare_msg()
128 # all strings passed to shell should be quoted to avoid accidental code
129 # execution
130 quoted_msg = dict((key, dnf.pycomp.shlex_quote(val))
131 for key, val in msg.items())
132 command = command_fmt.format(**quoted_msg)
133 stdin_feed = stdin_fmt.format(**msg).encode('utf-8')
134
135 # Execute the command
136 subp = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE)
137 subp.communicate(stdin_feed)
138 subp.stdin.close()
139 if subp.wait() != 0:
140 msg = _("Failed to execute command '%s': returned %d") \
141 % (command, subp.returncode)
142 logger.error(msg)
143
144
145 class CommandEmitter(CommandEmitterMixIn, Emitter):
146 def __init__(self, system_name, conf):
147 super(CommandEmitter, self).__init__(system_name)
148 self._conf = conf
149
150 def _prepare_msg(self):
151 return {'body': super(CommandEmitter, self)._prepare_msg()}
152
153
154 class CommandEmailEmitter(CommandEmitterMixIn, EmailEmitter):
155 def _prepare_msg(self):
156 subject, body = super(CommandEmailEmitter, self)._prepare_msg()
157 return {'subject': subject,
158 'body': body,
159 'email_from': self._conf.email_from,
160 'email_to': ' '.join(self._conf.email_to)}
161
162
163 class StdIoEmitter(Emitter):
164 def commit(self):
165 msg = self._prepare_msg()
166 print(msg)
167
168
169 class MotdEmitter(Emitter):
170 def commit(self):
171 msg = self._prepare_msg()
172 with open('/etc/motd', 'w') as fobj:
173 fobj.write(msg)
174
175
[end of dnf/automatic/emitter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dnf/automatic/emitter.py b/dnf/automatic/emitter.py
--- a/dnf/automatic/emitter.py
+++ b/dnf/automatic/emitter.py
@@ -95,6 +95,7 @@
message.set_charset('utf-8')
email_from = self._conf.email_from
email_to = self._conf.email_to
+ email_port = self._conf.email_port
message['Date'] = email.utils.formatdate()
message['From'] = email_from
message['Subject'] = subj
@@ -103,7 +104,7 @@
# Send the email
try:
- smtp = smtplib.SMTP(self._conf.email_host, timeout=300)
+ smtp = smtplib.SMTP(self._conf.email_host, self._conf.email_port, timeout=300)
smtp.sendmail(email_from, email_to, message.as_string())
smtp.close()
except OSError as exc:
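
For context on the one-line fix: `smtplib.SMTP` takes the port as its second positional argument and falls back to port 25 when it is omitted, which is why the unpatched call ignored `email_port` entirely. A minimal hedged sketch of the call shape, with placeholder host/port values rather than anything read from dnf-automatic's configuration:

```python
import smtplib

# Placeholder values; dnf-automatic would supply these from its parsed
# email_host / email_port options.
email_host = "localhost"
email_port = 587

try:
    # Passing the port explicitly mirrors the patched call above; without it,
    # smtplib connects to the default SMTP port 25.
    smtp = smtplib.SMTP(email_host, email_port, timeout=300)
    smtp.close()
except OSError as exc:
    print(f"could not reach {email_host}:{email_port}: {exc}")
```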
| {"golden_diff": "diff --git a/dnf/automatic/emitter.py b/dnf/automatic/emitter.py\n--- a/dnf/automatic/emitter.py\n+++ b/dnf/automatic/emitter.py\n@@ -95,6 +95,7 @@\n message.set_charset('utf-8')\n email_from = self._conf.email_from\n email_to = self._conf.email_to\n+ email_port = self._conf.email_port\n message['Date'] = email.utils.formatdate()\n message['From'] = email_from\n message['Subject'] = subj\n@@ -103,7 +104,7 @@\n \n # Send the email\n try:\n- smtp = smtplib.SMTP(self._conf.email_host, timeout=300)\n+ smtp = smtplib.SMTP(self._conf.email_host, self._conf.email_port, timeout=300)\n smtp.sendmail(email_from, email_to, message.as_string())\n smtp.close()\n except OSError as exc:\n", "issue": "`email_port` option supported but not used\nIt looks like the `email_port` option, while parsed in [dnf/automatic/main.py:200](https://github.com/rpm-software-management/dnf/blob/master/dnf/automatic/main.py#L200), is not actually used anywhere.\n", "before_files": [{"content": "# emitter.py\n# Emitters for dnf-automatic.\n#\n# Copyright (C) 2014-2016 Red Hat, Inc.\n#\n# This copyrighted material is made available to anyone wishing to use,\n# modify, copy, or redistribute it subject to the terms and conditions of\n# the GNU General Public License v.2, or (at your option) any later version.\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY expressed or implied, including the implied warranties of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General\n# Public License for more details. You should have received a copy of the\n# GNU General Public License along with this program; if not, write to the\n# Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA\n# 02110-1301, USA. 
Any Red Hat trademarks that are incorporated in the\n# source code or documentation are not subject to the GNU General Public\n# License and may only be used or replicated with the express permission of\n# Red Hat, Inc.\n#\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\nfrom dnf.i18n import _\nimport logging\nimport dnf.pycomp\nimport smtplib\nimport email.utils\nimport subprocess\nimport time\n\nAPPLIED = _(\"The following updates have been applied on '%s':\")\nAPPLIED_TIMESTAMP = _(\"Updates completed at %s\")\nAVAILABLE = _(\"The following updates are available on '%s':\")\nDOWNLOADED = _(\"The following updates were downloaded on '%s':\")\n\nlogger = logging.getLogger('dnf')\n\n\nclass Emitter(object):\n def __init__(self, system_name):\n self._applied = False\n self._available_msg = None\n self._downloaded = False\n self._system_name = system_name\n self._trans_msg = None\n\n def _prepare_msg(self):\n msg = []\n if self._applied:\n msg.append(APPLIED % self._system_name)\n msg.append(self._available_msg)\n msg.append(APPLIED_TIMESTAMP % time.strftime(\"%c\"))\n elif self._downloaded:\n msg.append(DOWNLOADED % self._system_name)\n msg.append(self._available_msg)\n elif self._available_msg:\n msg.append(AVAILABLE % self._system_name)\n msg.append(self._available_msg)\n else:\n return None\n return '\\n'.join(msg)\n\n def notify_applied(self):\n assert self._available_msg\n self._applied = True\n\n def notify_available(self, msg):\n self._available_msg = msg\n\n def notify_downloaded(self):\n assert self._available_msg\n self._downloaded = True\n\n\nclass EmailEmitter(Emitter):\n def __init__(self, system_name, conf):\n super(EmailEmitter, self).__init__(system_name)\n self._conf = conf\n\n def _prepare_msg(self):\n if self._applied:\n subj = _(\"Updates applied on '%s'.\") % self._system_name\n elif self._downloaded:\n subj = _(\"Updates downloaded on '%s'.\") % self._system_name\n elif self._available_msg:\n subj = _(\"Updates available on '%s'.\") % self._system_name\n else:\n return None, None\n return subj, super(EmailEmitter, self)._prepare_msg()\n\n def commit(self):\n subj, body = self._prepare_msg()\n message = dnf.pycomp.email_mime(body)\n message.set_charset('utf-8')\n email_from = self._conf.email_from\n email_to = self._conf.email_to\n message['Date'] = email.utils.formatdate()\n message['From'] = email_from\n message['Subject'] = subj\n message['To'] = ','.join(email_to)\n message['Message-ID'] = email.utils.make_msgid()\n\n # Send the email\n try:\n smtp = smtplib.SMTP(self._conf.email_host, timeout=300)\n smtp.sendmail(email_from, email_to, message.as_string())\n smtp.close()\n except OSError as exc:\n msg = _(\"Failed to send an email via '%s': %s\") % (\n self._conf.email_host, exc)\n logger.error(msg)\n\n\nclass CommandEmitterMixIn(object):\n \"\"\"\n Executes a desired command, and pushes data into its stdin.\n Both data and command can be formatted according to user preference.\n For this reason, this class expects a {str:str} dictionary as _prepare_msg\n return value.\n Meant for mixing with Emitter classes, as it does not define any names used\n for formatting on its own.\n \"\"\"\n def commit(self):\n command_fmt = self._conf.command_format\n stdin_fmt = self._conf.stdin_format\n msg = self._prepare_msg()\n # all strings passed to shell should be quoted to avoid accidental code\n # execution\n quoted_msg = dict((key, dnf.pycomp.shlex_quote(val))\n for key, val in msg.items())\n command = 
command_fmt.format(**quoted_msg)\n stdin_feed = stdin_fmt.format(**msg).encode('utf-8')\n\n # Execute the command\n subp = subprocess.Popen(command, shell=True, stdin=subprocess.PIPE)\n subp.communicate(stdin_feed)\n subp.stdin.close()\n if subp.wait() != 0:\n msg = _(\"Failed to execute command '%s': returned %d\") \\\n % (command, subp.returncode)\n logger.error(msg)\n\n\nclass CommandEmitter(CommandEmitterMixIn, Emitter):\n def __init__(self, system_name, conf):\n super(CommandEmitter, self).__init__(system_name)\n self._conf = conf\n\n def _prepare_msg(self):\n return {'body': super(CommandEmitter, self)._prepare_msg()}\n\n\nclass CommandEmailEmitter(CommandEmitterMixIn, EmailEmitter):\n def _prepare_msg(self):\n subject, body = super(CommandEmailEmitter, self)._prepare_msg()\n return {'subject': subject,\n 'body': body,\n 'email_from': self._conf.email_from,\n 'email_to': ' '.join(self._conf.email_to)}\n\n\nclass StdIoEmitter(Emitter):\n def commit(self):\n msg = self._prepare_msg()\n print(msg)\n\n\nclass MotdEmitter(Emitter):\n def commit(self):\n msg = self._prepare_msg()\n with open('/etc/motd', 'w') as fobj:\n fobj.write(msg)\n\n", "path": "dnf/automatic/emitter.py"}]} | 2,444 | 214 |
gh_patches_debug_8864 | rasdani/github-patches | git_diff | mozmeao__snippets-service-1420 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Append entrypoint url argument only to `about:logins` page
</issue>
<code>
[start of snippets/base/util.py]
1 import copy
2 import datetime
3 import re
4 from urllib.parse import ParseResult, urlparse, urlencode
5
6 from django.http import QueryDict
7 from django.utils.encoding import smart_bytes
8
9 from product_details import product_details
10 from product_details.version_compare import version_list
11
12 EPOCH = datetime.datetime.utcfromtimestamp(0)
13
14
15 def get_object_or_none(model_class, **filters):
16 """
17 Identical to Model.get, except instead of throwing exceptions, this returns
18 None.
19 """
20 try:
21 return model_class.objects.get(**filters)
22 except (model_class.DoesNotExist, model_class.MultipleObjectsReturned):
23 return None
24
25
26 def first(collection, callback):
27 """
28 Find the first item in collection that, when passed to callback, returns
29 True. Returns None if no such item is found.
30 """
31 return next((item for item in collection if callback(item)), None)
32
33
34 def create_locales():
35 from snippets.base.models import TargetedLocale
36
37 for code, name in product_details.languages.items():
38 locale = TargetedLocale.objects.get_or_create(code=code.lower())[0]
39 name = name['English']
40 if locale.name != name:
41 locale.name = name
42 locale.save()
43
44
45 def create_countries():
46 from snippets.base.models import TargetedCountry
47
48 for code, name in product_details.get_regions('en-US').items():
49 country = TargetedCountry.objects.get_or_create(code=code.upper())[0]
50 if country.name != name:
51 country.name = name
52 country.save()
53
54
55 def current_firefox_major_version():
56 full_version = version_list(
57 product_details.firefox_history_major_releases)[0]
58
59 return full_version.split('.', 1)[0]
60
61
62 def urlparams(url_, fragment=None, query_dict=None, replace=True, **query):
63 """
64 Add a fragment and/or query parameters to a URL.
65 New query params will be appended to exising parameters, except duplicate
66 names, which will be replaced when replace=True otherwise preserved.
67
68 Copied from mozilla/kuma, modified:
69 - to not always replace vars
70 - to not escape `[]` characters
71 """
72 url_ = urlparse(url_)
73 fragment = fragment if fragment is not None else url_.fragment
74
75 q = url_.query
76 new_query_dict = (QueryDict(smart_bytes(q), mutable=True) if
77 q else QueryDict('', mutable=True))
78 if query_dict:
79 for k, l in query_dict.lists():
80 if not replace and k in new_query_dict:
81 continue
82 new_query_dict[k] = None
83 for v in l:
84 new_query_dict.appendlist(k, v)
85
86 for k, v in query.items():
87 if not replace and k in new_query_dict:
88 continue
89
90 if isinstance(v, list):
91 new_query_dict.setlist(k, v)
92 else:
93 new_query_dict[k] = v
94
95 query_string = urlencode([(k, v) for k, l in new_query_dict.lists() for
96 v in l if v is not None], safe='[]')
97 new = ParseResult(url_.scheme, url_.netloc, url_.path or '/',
98 url_.params, query_string, fragment)
99 return new.geturl()
100
101
102 def convert_special_link(url):
103 action = args = entrypoint_name = entrypoint_value = None
104 if url.startswith('special:menu:'):
105 action = 'OPEN_APPLICATIONS_MENU'
106 args = url.rsplit(':', 1)[1]
107 elif url.startswith('special:about:'):
108 action = 'OPEN_ABOUT_PAGE'
109 args = url.rsplit(':', 1)[1]
110 entrypoint_name = 'entryPoint'
111 entrypoint_value = 'snippet'
112 elif url.startswith('special:highlight:'):
113 action = 'HIGHLIGHT_FEATURE'
114 args = url.rsplit(':', 1)[1]
115 elif url == 'special:preferences':
116 action = 'OPEN_PREFERENCES_PAGE'
117 entrypoint_value = 'snippet'
118 elif url == 'special:accounts':
119 action = 'SHOW_FIREFOX_ACCOUNTS'
120 elif url == 'special:monitor':
121 action = 'ENABLE_FIREFOX_MONITOR'
122 args = {
123 'url': ('https://monitor.firefox.com/oauth/init?'
124 'utm_source=desktop-snippet&utm_term=[[job_id]]&'
125 'utm_content=[[channels]]&utm_campaign=[[campaign_slug]]&'
126 'entrypoint=snippets&form_type=button'),
127 'flowRequestParams': {
128 'entrypoint': 'snippets',
129 'utm_term': 'snippet-job-[[job_id]]',
130 'form_type': 'button'
131 }
132 }
133 return action, args, entrypoint_name, entrypoint_value
134
135
136 def fluent_link_extractor(data, variables):
137 """Replaces all <a> elements with fluent.js link elements sequentially
138 numbered.
139
140 Returns a tuple with the new text and a dict of all the links with url and
141 custom metric where available.
142
143 """
144 class Replacer:
145 link_counter = 0
146 links = {}
147
148 def __call__(self, matchobj):
149 keyname = 'link{0}'.format(self.link_counter)
150 replacement = '<{keyname}>{text}</{keyname}>'.format(
151 keyname=keyname,
152 text=matchobj.group('innerText'))
153 # Find the URL
154 url_match = re.search('href="(?P<url>.+?)"', matchobj.group('attrs'))
155 url = ''
156
157 if url_match:
158 url = url_match.group('url')
159
160 action, args, entrypoint_name, entrypoint_value = convert_special_link(url)
161
162 if action:
163 self.links[keyname] = {
164 'action': action,
165 }
166 if args:
167 self.links[keyname]['args'] = args
168 if entrypoint_name:
169 self.links[keyname]['entrypoint_name'] = entrypoint_name
170 if entrypoint_value:
171 self.links[keyname]['entrypoint_value'] = entrypoint_value
172 else:
173 self.links[keyname] = {
174 'url': url,
175 }
176
177 # Find the optional data-metric attrib
178 metric_match = re.search('data-metric="(?P<metric>.+?)"', matchobj.group('attrs'))
179 if metric_match:
180 self.links[keyname]['metric'] = metric_match.group('metric')
181
182 self.link_counter += 1
183 return replacement
184
185 local_data = copy.deepcopy(data)
186 replacer = Replacer()
187 for variable in variables:
188 if variable not in local_data:
189 continue
190 local_data[variable] = re.sub('(<a(?P<attrs> .*?)>)(?P<innerText>.+?)(</a>)',
191 replacer, local_data[variable])
192
193 local_data['links'] = replacer.links
194 return local_data
195
196
197 def deep_search_and_replace(data, search_string, replace_string):
198 for key, value in data.items():
199 if isinstance(value, str):
200 data[key] = value.replace(search_string, replace_string)
201
202 elif isinstance(value, list):
203 data[key] = [v.replace(search_string, replace_string) for v in value]
204
205 elif isinstance(value, dict):
206 data[key] = deep_search_and_replace(value, search_string, replace_string)
207
208 return data
209
[end of snippets/base/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/snippets/base/util.py b/snippets/base/util.py
--- a/snippets/base/util.py
+++ b/snippets/base/util.py
@@ -107,8 +107,9 @@
elif url.startswith('special:about:'):
action = 'OPEN_ABOUT_PAGE'
args = url.rsplit(':', 1)[1]
- entrypoint_name = 'entryPoint'
- entrypoint_value = 'snippet'
+ if url.startswith('special:about:logins'):
+ entrypoint_name = 'entryPoint'
+ entrypoint_value = 'snippet'
elif url.startswith('special:highlight:'):
action = 'HIGHLIGHT_FEATURE'
args = url.rsplit(':', 1)[1]
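
A quick, informal check of the behaviour the patch is after, assuming the patched `snippets.base.util` module is importable; the tuples follow the `(action, args, entrypoint_name, entrypoint_value)` return order of `convert_special_link`:

```python
from snippets.base.util import convert_special_link

# Only the about:logins special link should carry the entrypoint fields.
assert convert_special_link("special:about:logins") == (
    "OPEN_ABOUT_PAGE", "logins", "entryPoint", "snippet"
)

# Any other about: page keeps its action and args, but no longer gets the
# entryPoint=snippet argument appended.
assert convert_special_link("special:about:protections") == (
    "OPEN_ABOUT_PAGE", "protections", None, None
)
```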
| {"golden_diff": "diff --git a/snippets/base/util.py b/snippets/base/util.py\n--- a/snippets/base/util.py\n+++ b/snippets/base/util.py\n@@ -107,8 +107,9 @@\n elif url.startswith('special:about:'):\n action = 'OPEN_ABOUT_PAGE'\n args = url.rsplit(':', 1)[1]\n- entrypoint_name = 'entryPoint'\n- entrypoint_value = 'snippet'\n+ if url.startswith('special:about:logins'):\n+ entrypoint_name = 'entryPoint'\n+ entrypoint_value = 'snippet'\n elif url.startswith('special:highlight:'):\n action = 'HIGHLIGHT_FEATURE'\n args = url.rsplit(':', 1)[1]\n", "issue": "Append entrypoint url argument only to `about:logins` page\n\n", "before_files": [{"content": "import copy\nimport datetime\nimport re\nfrom urllib.parse import ParseResult, urlparse, urlencode\n\nfrom django.http import QueryDict\nfrom django.utils.encoding import smart_bytes\n\nfrom product_details import product_details\nfrom product_details.version_compare import version_list\n\nEPOCH = datetime.datetime.utcfromtimestamp(0)\n\n\ndef get_object_or_none(model_class, **filters):\n \"\"\"\n Identical to Model.get, except instead of throwing exceptions, this returns\n None.\n \"\"\"\n try:\n return model_class.objects.get(**filters)\n except (model_class.DoesNotExist, model_class.MultipleObjectsReturned):\n return None\n\n\ndef first(collection, callback):\n \"\"\"\n Find the first item in collection that, when passed to callback, returns\n True. Returns None if no such item is found.\n \"\"\"\n return next((item for item in collection if callback(item)), None)\n\n\ndef create_locales():\n from snippets.base.models import TargetedLocale\n\n for code, name in product_details.languages.items():\n locale = TargetedLocale.objects.get_or_create(code=code.lower())[0]\n name = name['English']\n if locale.name != name:\n locale.name = name\n locale.save()\n\n\ndef create_countries():\n from snippets.base.models import TargetedCountry\n\n for code, name in product_details.get_regions('en-US').items():\n country = TargetedCountry.objects.get_or_create(code=code.upper())[0]\n if country.name != name:\n country.name = name\n country.save()\n\n\ndef current_firefox_major_version():\n full_version = version_list(\n product_details.firefox_history_major_releases)[0]\n\n return full_version.split('.', 1)[0]\n\n\ndef urlparams(url_, fragment=None, query_dict=None, replace=True, **query):\n \"\"\"\n Add a fragment and/or query parameters to a URL.\n New query params will be appended to exising parameters, except duplicate\n names, which will be replaced when replace=True otherwise preserved.\n\n Copied from mozilla/kuma, modified:\n - to not always replace vars\n - to not escape `[]` characters\n \"\"\"\n url_ = urlparse(url_)\n fragment = fragment if fragment is not None else url_.fragment\n\n q = url_.query\n new_query_dict = (QueryDict(smart_bytes(q), mutable=True) if\n q else QueryDict('', mutable=True))\n if query_dict:\n for k, l in query_dict.lists():\n if not replace and k in new_query_dict:\n continue\n new_query_dict[k] = None\n for v in l:\n new_query_dict.appendlist(k, v)\n\n for k, v in query.items():\n if not replace and k in new_query_dict:\n continue\n\n if isinstance(v, list):\n new_query_dict.setlist(k, v)\n else:\n new_query_dict[k] = v\n\n query_string = urlencode([(k, v) for k, l in new_query_dict.lists() for\n v in l if v is not None], safe='[]')\n new = ParseResult(url_.scheme, url_.netloc, url_.path or '/',\n url_.params, query_string, fragment)\n return new.geturl()\n\n\ndef convert_special_link(url):\n action = args = entrypoint_name = 
entrypoint_value = None\n if url.startswith('special:menu:'):\n action = 'OPEN_APPLICATIONS_MENU'\n args = url.rsplit(':', 1)[1]\n elif url.startswith('special:about:'):\n action = 'OPEN_ABOUT_PAGE'\n args = url.rsplit(':', 1)[1]\n entrypoint_name = 'entryPoint'\n entrypoint_value = 'snippet'\n elif url.startswith('special:highlight:'):\n action = 'HIGHLIGHT_FEATURE'\n args = url.rsplit(':', 1)[1]\n elif url == 'special:preferences':\n action = 'OPEN_PREFERENCES_PAGE'\n entrypoint_value = 'snippet'\n elif url == 'special:accounts':\n action = 'SHOW_FIREFOX_ACCOUNTS'\n elif url == 'special:monitor':\n action = 'ENABLE_FIREFOX_MONITOR'\n args = {\n 'url': ('https://monitor.firefox.com/oauth/init?'\n 'utm_source=desktop-snippet&utm_term=[[job_id]]&'\n 'utm_content=[[channels]]&utm_campaign=[[campaign_slug]]&'\n 'entrypoint=snippets&form_type=button'),\n 'flowRequestParams': {\n 'entrypoint': 'snippets',\n 'utm_term': 'snippet-job-[[job_id]]',\n 'form_type': 'button'\n }\n }\n return action, args, entrypoint_name, entrypoint_value\n\n\ndef fluent_link_extractor(data, variables):\n \"\"\"Replaces all <a> elements with fluent.js link elements sequentially\n numbered.\n\n Returns a tuple with the new text and a dict of all the links with url and\n custom metric where available.\n\n \"\"\"\n class Replacer:\n link_counter = 0\n links = {}\n\n def __call__(self, matchobj):\n keyname = 'link{0}'.format(self.link_counter)\n replacement = '<{keyname}>{text}</{keyname}>'.format(\n keyname=keyname,\n text=matchobj.group('innerText'))\n # Find the URL\n url_match = re.search('href=\"(?P<url>.+?)\"', matchobj.group('attrs'))\n url = ''\n\n if url_match:\n url = url_match.group('url')\n\n action, args, entrypoint_name, entrypoint_value = convert_special_link(url)\n\n if action:\n self.links[keyname] = {\n 'action': action,\n }\n if args:\n self.links[keyname]['args'] = args\n if entrypoint_name:\n self.links[keyname]['entrypoint_name'] = entrypoint_name\n if entrypoint_value:\n self.links[keyname]['entrypoint_value'] = entrypoint_value\n else:\n self.links[keyname] = {\n 'url': url,\n }\n\n # Find the optional data-metric attrib\n metric_match = re.search('data-metric=\"(?P<metric>.+?)\"', matchobj.group('attrs'))\n if metric_match:\n self.links[keyname]['metric'] = metric_match.group('metric')\n\n self.link_counter += 1\n return replacement\n\n local_data = copy.deepcopy(data)\n replacer = Replacer()\n for variable in variables:\n if variable not in local_data:\n continue\n local_data[variable] = re.sub('(<a(?P<attrs> .*?)>)(?P<innerText>.+?)(</a>)',\n replacer, local_data[variable])\n\n local_data['links'] = replacer.links\n return local_data\n\n\ndef deep_search_and_replace(data, search_string, replace_string):\n for key, value in data.items():\n if isinstance(value, str):\n data[key] = value.replace(search_string, replace_string)\n\n elif isinstance(value, list):\n data[key] = [v.replace(search_string, replace_string) for v in value]\n\n elif isinstance(value, dict):\n data[key] = deep_search_and_replace(value, search_string, replace_string)\n\n return data\n", "path": "snippets/base/util.py"}]} | 2,633 | 158 |
gh_patches_debug_20741 | rasdani/github-patches | git_diff | astronomer__astro-sdk-165 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add SQLite example
* Create an example in `example_dags` illustrating the usage of SQLite
* This example could use one of our checks
* Update `tests/test_example_dags.py` to run it
</issue>
<code>
[start of noxfile.py]
1 """Nox automation definitions."""
2
3 import pathlib
4
5 import nox
6
7 nox.options.sessions = ["dev"]
8
9
10 @nox.session(python="3.9")
11 def dev(session: nox.Session) -> None:
12 """Create a dev environment with everything installed.
13
14 This is useful for setting up IDE for autocompletion etc. Point the
15 development environment to ``.nox/dev``.
16 """
17 session.install("nox")
18 session.install("-e", ".[all]")
19 session.install("-e", ".[tests]")
20
21
22 @nox.session(python=["3.7", "3.8", "3.9"])
23 def test(session: nox.Session) -> None:
24 """Run unit tests."""
25 session.install("-e", ".[all]")
26 session.install("-e", ".[tests]")
27 session.run("airflow", "db", "init")
28 session.run("pytest", *session.posargs)
29
30
31 @nox.session()
32 @nox.parametrize(
33 "extras",
34 [
35 ("postgres-only", {"include": ["postgres"], "exclude": ["amazon"]}),
36 ("postgres-amazon", {"include": ["postgres", "amazon"]}),
37 ("snowflake-amazon", {"include": ["snowflake", "amazon"]})
38 # ("sqlite", {"include": ["sqlite"]}),
39 ],
40 )
41 def test_examples_by_dependency(session: nox.Session, extras):
42 _, extras = extras
43 pypi_deps = ",".join(extras["include"])
44 pytest_options = " and ".join(extras["include"])
45 pytest_options = " and not ".join([pytest_options, *extras.get("exclude", [])])
46 pytest_args = ["-k", pytest_options]
47
48 session.install("-e", f".[{pypi_deps}]")
49 session.install("-e", f".[tests]")
50 session.run("airflow", "db", "init")
51
52 session.run("pytest", "tests/test_example_dags.py", *pytest_args, *session.posargs)
53
54
55 @nox.session()
56 def lint(session: nox.Session) -> None:
57 """Run linters."""
58 session.install("pre-commit")
59 if session.posargs:
60 args = [*session.posargs, "--all-files"]
61 else:
62 args = ["--all-files", "--show-diff-on-failure"]
63 session.run("pre-commit", "run", *args)
64
65
66 @nox.session()
67 def build(session: nox.Session) -> None:
68 """Build release artifacts."""
69 session.install("build")
70
71 # TODO: Automate version bumping, Git tagging, and more?
72
73 dist = pathlib.Path("dist")
74 if dist.exists() and next(dist.iterdir(), None) is not None:
75 session.error(
76 "There are files in dist/. Remove them and try again. "
77 "You can use `git clean -fxdi -- dist` command to do this."
78 )
79 dist.mkdir(exist_ok=True)
80
81 session.run("python", "-m", "build", *session.posargs)
82
83
84 @nox.session()
85 def release(session: nox.Session) -> None:
86 """Publish a release."""
87 session.install("twine")
88 # TODO: Better artifact checking.
89 session.run("twine", "check", *session.posargs)
90 session.run("twine", "upload", *session.posargs)
91
[end of noxfile.py]
[start of example_dags/example_sqlite_load_transform.py]
1 from datetime import datetime
2
3 from airflow import DAG
4
5 from astro import sql as aql
6 from astro.sql.table import Table
7
8 START_DATE = datetime(2000, 1, 1)
9
10
11 @aql.transform()
12 def top_five_animations(input_table: Table):
13 return """
14 SELECT Title, Rating
15 FROM {{input_table}}
16 WHERE Genre1=='Animation'
17 ORDER BY Rating desc
18 LIMIT 5;
19 """
20
21
22 with DAG(
23 "example_sqlite_load_transform",
24 schedule_interval=None,
25 start_date=START_DATE,
26 catchup=False,
27 ) as dag:
28
29 imdb_movies = aql.load_file(
30 path="https://raw.githubusercontent.com/astro-projects/astro/readme/tests/data/imdb.csv",
31 task_id="load_csv",
32 output_table=Table(
33 table_name="imdb_movies", database="sqlite", conn_id="sqlite_default"
34 ),
35 )
36
37 top_five_animations(
38 input_table=imdb_movies,
39 output_table=Table(
40 table_name="top_animation", database="sqlite", conn_id="sqlite_default"
41 ),
42 )
43
[end of example_dags/example_sqlite_load_transform.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/example_dags/example_sqlite_load_transform.py b/example_dags/example_sqlite_load_transform.py
--- a/example_dags/example_sqlite_load_transform.py
+++ b/example_dags/example_sqlite_load_transform.py
@@ -27,7 +27,7 @@
) as dag:
imdb_movies = aql.load_file(
- path="https://raw.githubusercontent.com/astro-projects/astro/readme/tests/data/imdb.csv",
+ path="https://raw.githubusercontent.com/astro-projects/astro/main/tests/data/imdb.csv",
task_id="load_csv",
output_table=Table(
table_name="imdb_movies", database="sqlite", conn_id="sqlite_default"
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -34,8 +34,8 @@
[
("postgres-only", {"include": ["postgres"], "exclude": ["amazon"]}),
("postgres-amazon", {"include": ["postgres", "amazon"]}),
- ("snowflake-amazon", {"include": ["snowflake", "amazon"]})
- # ("sqlite", {"include": ["sqlite"]}),
+ ("snowflake-amazon", {"include": ["snowflake", "amazon"]}),
+ ("sqlite", {"include": ["sqlite"]}),
],
)
def test_examples_by_dependency(session: nox.Session, extras):
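
For orientation, this is roughly how the new `("sqlite", {"include": ["sqlite"]})` parameter flows through the option-building code in `test_examples_by_dependency`; it is a sketch of the string handling only, not an addition to the noxfile:

```python
# Mirrors the computation in noxfile.py for the sqlite entry.
extras = {"include": ["sqlite"]}

pypi_deps = ",".join(extras["include"])                               # "sqlite"
pytest_options = " and ".join(extras["include"])                      # "sqlite"
pytest_options = " and not ".join([pytest_options, *extras.get("exclude", [])])
print(pypi_deps, "|", pytest_options)                                 # sqlite | sqlite

# So the session effectively installs ".[sqlite]" plus ".[tests]" and runs:
#   pytest tests/test_example_dags.py -k sqlite
```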
| {"golden_diff": "diff --git a/example_dags/example_sqlite_load_transform.py b/example_dags/example_sqlite_load_transform.py\n--- a/example_dags/example_sqlite_load_transform.py\n+++ b/example_dags/example_sqlite_load_transform.py\n@@ -27,7 +27,7 @@\n ) as dag:\n \n imdb_movies = aql.load_file(\n- path=\"https://raw.githubusercontent.com/astro-projects/astro/readme/tests/data/imdb.csv\",\n+ path=\"https://raw.githubusercontent.com/astro-projects/astro/main/tests/data/imdb.csv\",\n task_id=\"load_csv\",\n output_table=Table(\n table_name=\"imdb_movies\", database=\"sqlite\", conn_id=\"sqlite_default\"\ndiff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -34,8 +34,8 @@\n [\n (\"postgres-only\", {\"include\": [\"postgres\"], \"exclude\": [\"amazon\"]}),\n (\"postgres-amazon\", {\"include\": [\"postgres\", \"amazon\"]}),\n- (\"snowflake-amazon\", {\"include\": [\"snowflake\", \"amazon\"]})\n- # (\"sqlite\", {\"include\": [\"sqlite\"]}),\n+ (\"snowflake-amazon\", {\"include\": [\"snowflake\", \"amazon\"]}),\n+ (\"sqlite\", {\"include\": [\"sqlite\"]}),\n ],\n )\n def test_examples_by_dependency(session: nox.Session, extras):\n", "issue": "Add SQLite example\n* Create an example in `example_dags` illustrating the usage of SQLite\r\n* This example could use one of our checks\r\n* Update `tests/test_example_dags.py` to run it\n", "before_files": [{"content": "\"\"\"Nox automation definitions.\"\"\"\n\nimport pathlib\n\nimport nox\n\nnox.options.sessions = [\"dev\"]\n\n\[email protected](python=\"3.9\")\ndef dev(session: nox.Session) -> None:\n \"\"\"Create a dev environment with everything installed.\n\n This is useful for setting up IDE for autocompletion etc. Point the\n development environment to ``.nox/dev``.\n \"\"\"\n session.install(\"nox\")\n session.install(\"-e\", \".[all]\")\n session.install(\"-e\", \".[tests]\")\n\n\[email protected](python=[\"3.7\", \"3.8\", \"3.9\"])\ndef test(session: nox.Session) -> None:\n \"\"\"Run unit tests.\"\"\"\n session.install(\"-e\", \".[all]\")\n session.install(\"-e\", \".[tests]\")\n session.run(\"airflow\", \"db\", \"init\")\n session.run(\"pytest\", *session.posargs)\n\n\[email protected]()\[email protected](\n \"extras\",\n [\n (\"postgres-only\", {\"include\": [\"postgres\"], \"exclude\": [\"amazon\"]}),\n (\"postgres-amazon\", {\"include\": [\"postgres\", \"amazon\"]}),\n (\"snowflake-amazon\", {\"include\": [\"snowflake\", \"amazon\"]})\n # (\"sqlite\", {\"include\": [\"sqlite\"]}),\n ],\n)\ndef test_examples_by_dependency(session: nox.Session, extras):\n _, extras = extras\n pypi_deps = \",\".join(extras[\"include\"])\n pytest_options = \" and \".join(extras[\"include\"])\n pytest_options = \" and not \".join([pytest_options, *extras.get(\"exclude\", [])])\n pytest_args = [\"-k\", pytest_options]\n\n session.install(\"-e\", f\".[{pypi_deps}]\")\n session.install(\"-e\", f\".[tests]\")\n session.run(\"airflow\", \"db\", \"init\")\n\n session.run(\"pytest\", \"tests/test_example_dags.py\", *pytest_args, *session.posargs)\n\n\[email protected]()\ndef lint(session: nox.Session) -> None:\n \"\"\"Run linters.\"\"\"\n session.install(\"pre-commit\")\n if session.posargs:\n args = [*session.posargs, \"--all-files\"]\n else:\n args = [\"--all-files\", \"--show-diff-on-failure\"]\n session.run(\"pre-commit\", \"run\", *args)\n\n\[email protected]()\ndef build(session: nox.Session) -> None:\n \"\"\"Build release artifacts.\"\"\"\n session.install(\"build\")\n\n # TODO: Automate version bumping, Git tagging, and more?\n\n dist = 
pathlib.Path(\"dist\")\n if dist.exists() and next(dist.iterdir(), None) is not None:\n session.error(\n \"There are files in dist/. Remove them and try again. \"\n \"You can use `git clean -fxdi -- dist` command to do this.\"\n )\n dist.mkdir(exist_ok=True)\n\n session.run(\"python\", \"-m\", \"build\", *session.posargs)\n\n\[email protected]()\ndef release(session: nox.Session) -> None:\n \"\"\"Publish a release.\"\"\"\n session.install(\"twine\")\n # TODO: Better artifact checking.\n session.run(\"twine\", \"check\", *session.posargs)\n session.run(\"twine\", \"upload\", *session.posargs)\n", "path": "noxfile.py"}, {"content": "from datetime import datetime\n\nfrom airflow import DAG\n\nfrom astro import sql as aql\nfrom astro.sql.table import Table\n\nSTART_DATE = datetime(2000, 1, 1)\n\n\[email protected]()\ndef top_five_animations(input_table: Table):\n return \"\"\"\n SELECT Title, Rating\n FROM {{input_table}}\n WHERE Genre1=='Animation'\n ORDER BY Rating desc\n LIMIT 5;\n \"\"\"\n\n\nwith DAG(\n \"example_sqlite_load_transform\",\n schedule_interval=None,\n start_date=START_DATE,\n catchup=False,\n) as dag:\n\n imdb_movies = aql.load_file(\n path=\"https://raw.githubusercontent.com/astro-projects/astro/readme/tests/data/imdb.csv\",\n task_id=\"load_csv\",\n output_table=Table(\n table_name=\"imdb_movies\", database=\"sqlite\", conn_id=\"sqlite_default\"\n ),\n )\n\n top_five_animations(\n input_table=imdb_movies,\n output_table=Table(\n table_name=\"top_animation\", database=\"sqlite\", conn_id=\"sqlite_default\"\n ),\n )\n", "path": "example_dags/example_sqlite_load_transform.py"}]} | 1,796 | 305 |
gh_patches_debug_36729 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1170 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DroppedSpan causing AttributeError
**Describe the bug**: celery tasks are now failing with attribute errors caused by APM
**Environment (please complete the following information)**
- OS: Linux
- Python version: 3.8.5
- Framework and version [e.g. Django 2.1]: Django 3.2.3, Celery 5.0.5
- APM Server version: 7.13
- Agent version: 6.2.2
Could be related to elastic/apm-agent-python#468
Traceback:
```
Traceback (most recent call last):
......
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/task.py", line 421, in delay
return self.apply_async(args, kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/task.py", line 561, in apply_async
return app.send_task(
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/base.py", line 749, in send_task
amqp.send_task_message(P, name, message, **options)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/amqp.py", line 523, in send_task_message
ret = producer.publish(
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/messaging.py", line 175, in publish
return _publish(
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/connection.py", line 525, in _ensured
return fun(*args, **kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/messaging.py", line 197, in _publish
return channel.basic_publish(
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
return self._put(routing_key, message, **kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/transport/SQS.py", line 294, in _put
c.send_message(**kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/elasticapm/instrumentation/packages/base.py", line 210, in call_if_sampling
return self.call(module, method, wrapped, instance, args, kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/elasticapm/instrumentation/packages/botocore.py", line 91, in call
span_modifiers[service](span, args, kwargs)
File "/aimful/shared/virtualenv/lib/python3.8/site-packages/elasticapm/instrumentation/packages/botocore.py", line 174, in modify_span_sqs
trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)
AttributeError: 'DroppedSpan' object has no attribute 'transaction'
```
</issue>
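Background on the traceback, stated as an assumption about agent behaviour rather than something verified against this exact version: `capture_span` does not always yield a full `Span` — when a span is unsampled or the transaction has already hit its span limit (`transaction_max_spans`), it yields a lightweight `DroppedSpan`, which has no `transaction` attribute for `modify_span_sqs` to dereference. A rough sketch of how that can surface:

```python
import elasticapm
from elasticapm.traces import DroppedSpan

# Tiny span budget so the second span gets dropped (illustrative config only).
client = elasticapm.Client(service_name="sketch", transaction_max_spans=1)
client.begin_transaction("task")

with elasticapm.capture_span("first"):
    pass

with elasticapm.capture_span("second") as span:
    # Past the limit the agent hands back a DroppedSpan; it has no
    # `transaction` attribute, which is exactly what the traceback shows.
    print(isinstance(span, DroppedSpan), hasattr(span, "transaction"))

client.end_transaction("task", "success")
```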
<code>
[start of elasticapm/instrumentation/packages/botocore.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from collections import namedtuple
32
33 from elasticapm.conf import constants
34 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
35 from elasticapm.traces import capture_span
36 from elasticapm.utils.compat import urlparse
37 from elasticapm.utils.logging import get_logger
38
39 logger = get_logger("elasticapm.instrument")
40
41
42 HandlerInfo = namedtuple("HandlerInfo", ("signature", "span_type", "span_subtype", "span_action", "context"))
43
44 # Used for boto3 < 1.7
45 endpoint_to_service_id = {"SNS": "SNS", "S3": "S3", "DYNAMODB": "DynamoDB", "SQS": "SQS"}
46
47
48 class BotocoreInstrumentation(AbstractInstrumentedModule):
49 name = "botocore"
50
51 instrument_list = [("botocore.client", "BaseClient._make_api_call")]
52
53 def call(self, module, method, wrapped, instance, args, kwargs):
54 if "operation_name" in kwargs:
55 operation_name = kwargs["operation_name"]
56 else:
57 operation_name = args[0]
58
59 service_model = instance.meta.service_model
60 if hasattr(service_model, "service_id"): # added in boto3 1.7
61 service = service_model.service_id
62 else:
63 service = service_model.service_name.upper()
64 service = endpoint_to_service_id.get(service, service)
65
66 parsed_url = urlparse.urlparse(instance.meta.endpoint_url)
67 context = {
68 "destination": {
69 "address": parsed_url.hostname,
70 "port": parsed_url.port,
71 "cloud": {"region": instance.meta.region_name},
72 }
73 }
74
75 handler_info = None
76 handler = handlers.get(service, False)
77 if handler:
78 handler_info = handler(operation_name, service, instance, args, kwargs, context)
79 if not handler_info:
80 handler_info = handle_default(operation_name, service, instance, args, kwargs, context)
81
82 with capture_span(
83 handler_info.signature,
84 span_type=handler_info.span_type,
85 leaf=True,
86 span_subtype=handler_info.span_subtype,
87 span_action=handler_info.span_action,
88 extra=handler_info.context,
89 ) as span:
90 if service in span_modifiers:
91 span_modifiers[service](span, args, kwargs)
92 return wrapped(*args, **kwargs)
93
94
95 def handle_s3(operation_name, service, instance, args, kwargs, context):
96 span_type = "storage"
97 span_subtype = "s3"
98 span_action = operation_name
99 if len(args) > 1 and "Bucket" in args[1]:
100 bucket = args[1]["Bucket"]
101 else:
102 # TODO handle Access Points
103 bucket = ""
104 signature = f"S3 {operation_name} {bucket}"
105
106 context["destination"]["service"] = {"name": span_subtype, "resource": bucket, "type": span_type}
107
108 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
109
110
111 def handle_dynamodb(operation_name, service, instance, args, kwargs, context):
112 span_type = "db"
113 span_subtype = "dynamodb"
114 span_action = "query"
115 if len(args) > 1 and "TableName" in args[1]:
116 table = args[1]["TableName"]
117 else:
118 table = ""
119 signature = f"DynamoDB {operation_name} {table}".rstrip()
120
121 context["db"] = {"type": "dynamodb", "instance": instance.meta.region_name}
122 if operation_name == "Query" and len(args) > 1 and "KeyConditionExpression" in args[1]:
123 context["db"]["statement"] = args[1]["KeyConditionExpression"]
124
125 context["destination"]["service"] = {"name": span_subtype, "resource": table, "type": span_type}
126 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
127
128
129 def handle_sns(operation_name, service, instance, args, kwargs, context):
130 if operation_name != "Publish":
131 # only "publish" is handled specifically, other endpoints get the default treatment
132 return False
133 span_type = "messaging"
134 span_subtype = "sns"
135 span_action = "send"
136 topic_name = ""
137 if len(args) > 1:
138 if "Name" in args[1]:
139 topic_name = args[1]["Name"]
140 if "TopicArn" in args[1]:
141 topic_name = args[1]["TopicArn"].rsplit(":", maxsplit=1)[-1]
142 signature = f"SNS {operation_name} {topic_name}".rstrip()
143 context["destination"]["service"] = {
144 "name": span_subtype,
145 "resource": f"{span_subtype}/{topic_name}" if topic_name else span_subtype,
146 "type": span_type,
147 }
148 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
149
150
151 def handle_sqs(operation_name, service, instance, args, kwargs, context):
152 if operation_name not in ("SendMessage", "SendMessageBatch", "ReceiveMessage"):
153 # only "publish" is handled specifically, other endpoints get the default treatment
154 return False
155 span_type = "messaging"
156 span_subtype = "sqs"
157 span_action = "send" if operation_name in ("SendMessage", "SendMessageBatch") else "receive"
158 topic_name = ""
159 batch = "_BATCH" if operation_name == "SendMessageBatch" else ""
160 signature_type = "RECEIVE from" if span_action == "receive" else f"SEND{batch} to"
161
162 if len(args) > 1:
163 topic_name = args[1]["QueueUrl"].rsplit("/", maxsplit=1)[-1]
164 signature = f"SQS {signature_type} {topic_name}".rstrip() if topic_name else f"SQS {signature_type}"
165 context["destination"]["service"] = {
166 "name": span_subtype,
167 "resource": f"{span_subtype}/{topic_name}" if topic_name else span_subtype,
168 "type": span_type,
169 }
170 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
171
172
173 def modify_span_sqs(span, args, kwargs):
174 trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)
175 attributes = {constants.TRACEPARENT_HEADER_NAME: {"DataType": "String", "StringValue": trace_parent.to_string()}}
176 if trace_parent.tracestate:
177 attributes[constants.TRACESTATE_HEADER_NAME] = {"DataType": "String", "StringValue": trace_parent.tracestate}
178 if len(args) > 1:
179 attributes_count = len(attributes)
180 if "MessageAttributes" in args[1]:
181 messages = [args[1]]
182 # elif "Entries" in args[1]:
183 # messages = args[1]["Entries"]
184 else:
185 messages = []
186 for message in messages:
187 if len(message["MessageAttributes"]) + attributes_count <= 10:
188 message["MessageAttributes"].update(attributes)
189 else:
190 logger.info("Not adding disttracing headers to message due to attribute limit reached")
191
192
193 def handle_default(operation_name, service, instance, args, kwargs, destination):
194 span_type = "aws"
195 span_subtype = service.lower()
196 span_action = operation_name
197
198 destination["service"] = {"name": span_subtype, "resource": span_subtype, "type": span_type}
199
200 signature = f"{service}:{operation_name}"
201 return HandlerInfo(signature, span_type, span_subtype, span_action, destination)
202
203
204 handlers = {
205 "S3": handle_s3,
206 "DynamoDB": handle_dynamodb,
207 "SNS": handle_sns,
208 "SQS": handle_sqs,
209 "default": handle_default,
210 }
211
212 span_modifiers = {
213 "SQS": modify_span_sqs,
214 }
215
[end of elasticapm/instrumentation/packages/botocore.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/botocore.py b/elasticapm/instrumentation/packages/botocore.py
--- a/elasticapm/instrumentation/packages/botocore.py
+++ b/elasticapm/instrumentation/packages/botocore.py
@@ -32,12 +32,14 @@
from elasticapm.conf import constants
from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
-from elasticapm.traces import capture_span
+from elasticapm.traces import capture_span, execution_context
from elasticapm.utils.compat import urlparse
from elasticapm.utils.logging import get_logger
logger = get_logger("elasticapm.instrument")
+SQS_MAX_ATTRIBUTES = 10
+
HandlerInfo = namedtuple("HandlerInfo", ("signature", "span_type", "span_subtype", "span_action", "context"))
@@ -171,7 +173,12 @@
def modify_span_sqs(span, args, kwargs):
- trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)
+ if span.id:
+ trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)
+ else:
+ # this is a dropped span, use transaction id instead
+ transaction = execution_context.get_transaction()
+ trace_parent = transaction.trace_parent.copy_from(span_id=transaction.id)
attributes = {constants.TRACEPARENT_HEADER_NAME: {"DataType": "String", "StringValue": trace_parent.to_string()}}
if trace_parent.tracestate:
attributes[constants.TRACESTATE_HEADER_NAME] = {"DataType": "String", "StringValue": trace_parent.tracestate}
@@ -179,12 +186,12 @@
attributes_count = len(attributes)
if "MessageAttributes" in args[1]:
messages = [args[1]]
- # elif "Entries" in args[1]:
- # messages = args[1]["Entries"]
+ elif "Entries" in args[1]:
+ messages = args[1]["Entries"]
else:
messages = []
for message in messages:
- if len(message["MessageAttributes"]) + attributes_count <= 10:
+ if len(message["MessageAttributes"]) + attributes_count <= SQS_MAX_ATTRIBUTES:
message["MessageAttributes"].update(attributes)
else:
logger.info("Not adding disttracing headers to message due to attribute limit reached")
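
Besides the `DroppedSpan` guard, the patch also starts stamping each entry of `SendMessageBatch` calls (the previously commented-out `Entries` branch) and names the cap `SQS_MAX_ATTRIBUTES`, since SQS allows at most 10 message attributes per message. A hedged sketch of the payload shape involved — the queue URL and bodies are made up, and the attribute keys are whatever trace headers the agent injects:

```python
# Illustrative SendMessageBatch arguments as they reach modify_span_sqs.
args = (
    "SendMessageBatch",
    {
        "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",
        "Entries": [
            {"Id": "1", "MessageBody": "hello", "MessageAttributes": {}},
            {"Id": "2", "MessageBody": "world", "MessageAttributes": {}},
        ],
    },
)

# After the patch, modify_span_sqs(span, args, {}) walks args[1]["Entries"] and
# adds the traceparent (and, when set, tracestate) attributes to every entry,
# skipping any entry whose attribute count would exceed the 10-attribute cap.
```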
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/botocore.py b/elasticapm/instrumentation/packages/botocore.py\n--- a/elasticapm/instrumentation/packages/botocore.py\n+++ b/elasticapm/instrumentation/packages/botocore.py\n@@ -32,12 +32,14 @@\n \n from elasticapm.conf import constants\n from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\n-from elasticapm.traces import capture_span\n+from elasticapm.traces import capture_span, execution_context\n from elasticapm.utils.compat import urlparse\n from elasticapm.utils.logging import get_logger\n \n logger = get_logger(\"elasticapm.instrument\")\n \n+SQS_MAX_ATTRIBUTES = 10\n+\n \n HandlerInfo = namedtuple(\"HandlerInfo\", (\"signature\", \"span_type\", \"span_subtype\", \"span_action\", \"context\"))\n \n@@ -171,7 +173,12 @@\n \n \n def modify_span_sqs(span, args, kwargs):\n- trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)\n+ if span.id:\n+ trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)\n+ else:\n+ # this is a dropped span, use transaction id instead\n+ transaction = execution_context.get_transaction()\n+ trace_parent = transaction.trace_parent.copy_from(span_id=transaction.id)\n attributes = {constants.TRACEPARENT_HEADER_NAME: {\"DataType\": \"String\", \"StringValue\": trace_parent.to_string()}}\n if trace_parent.tracestate:\n attributes[constants.TRACESTATE_HEADER_NAME] = {\"DataType\": \"String\", \"StringValue\": trace_parent.tracestate}\n@@ -179,12 +186,12 @@\n attributes_count = len(attributes)\n if \"MessageAttributes\" in args[1]:\n messages = [args[1]]\n- # elif \"Entries\" in args[1]:\n- # messages = args[1][\"Entries\"]\n+ elif \"Entries\" in args[1]:\n+ messages = args[1][\"Entries\"]\n else:\n messages = []\n for message in messages:\n- if len(message[\"MessageAttributes\"]) + attributes_count <= 10:\n+ if len(message[\"MessageAttributes\"]) + attributes_count <= SQS_MAX_ATTRIBUTES:\n message[\"MessageAttributes\"].update(attributes)\n else:\n logger.info(\"Not adding disttracing headers to message due to attribute limit reached\")\n", "issue": "DroppedSpan causing AttributeError\n**Describe the bug**: celery tasks are now failing with attribute errors caused by APM\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux\r\n- Python version: 3.8.5\r\n- Framework and version [e.g. 
Django 2.1]: Django 3.2.3, Celery 5.0.5\r\n- APM Server version: 7.13\r\n- Agent version: 6.2.2\r\n\r\nCould be related to elastic/apm-agent-python#468\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last):\r\n......\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/task.py\", line 421, in delay\r\nreturn self.apply_async(args, kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/task.py\", line 561, in apply_async\r\nreturn app.send_task(\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/base.py\", line 749, in send_task\r\namqp.send_task_message(P, name, message, **options)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/celery/app/amqp.py\", line 523, in send_task_message\r\nret = producer.publish(\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/messaging.py\", line 175, in publish\r\nreturn _publish(\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/connection.py\", line 525, in _ensured\r\nreturn fun(*args, **kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/messaging.py\", line 197, in _publish\r\nreturn channel.basic_publish(\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/transport/virtual/base.py\", line 605, in basic_publish\r\nreturn self._put(routing_key, message, **kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/kombu/transport/SQS.py\", line 294, in _put\r\nc.send_message(**kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/botocore/client.py\", line 357, in _api_call\r\nreturn self._make_api_call(operation_name, kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/elasticapm/instrumentation/packages/base.py\", line 210, in call_if_sampling\r\nreturn self.call(module, method, wrapped, instance, args, kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/elasticapm/instrumentation/packages/botocore.py\", line 91, in call\r\nspan_modifiers[service](span, args, kwargs)\r\nFile \"/aimful/shared/virtualenv/lib/python3.8/site-packages/elasticapm/instrumentation/packages/botocore.py\", line 174, in modify_span_sqs\r\ntrace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)\r\nAttributeError: 'DroppedSpan' object has no attribute 'transaction'\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom collections import namedtuple\n\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.compat import urlparse\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\n\nHandlerInfo = namedtuple(\"HandlerInfo\", (\"signature\", \"span_type\", \"span_subtype\", \"span_action\", \"context\"))\n\n# Used for boto3 < 1.7\nendpoint_to_service_id = {\"SNS\": \"SNS\", \"S3\": \"S3\", \"DYNAMODB\": \"DynamoDB\", \"SQS\": \"SQS\"}\n\n\nclass BotocoreInstrumentation(AbstractInstrumentedModule):\n name = \"botocore\"\n\n instrument_list = [(\"botocore.client\", \"BaseClient._make_api_call\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"operation_name\" in kwargs:\n operation_name = kwargs[\"operation_name\"]\n else:\n operation_name = args[0]\n\n service_model = instance.meta.service_model\n if hasattr(service_model, \"service_id\"): # added in boto3 1.7\n service = service_model.service_id\n else:\n service = service_model.service_name.upper()\n service = endpoint_to_service_id.get(service, service)\n\n parsed_url = urlparse.urlparse(instance.meta.endpoint_url)\n context = {\n \"destination\": {\n \"address\": parsed_url.hostname,\n \"port\": parsed_url.port,\n \"cloud\": {\"region\": instance.meta.region_name},\n }\n }\n\n handler_info = None\n handler = handlers.get(service, False)\n if handler:\n handler_info = handler(operation_name, service, instance, args, kwargs, context)\n if not handler_info:\n handler_info = handle_default(operation_name, service, instance, args, kwargs, context)\n\n with capture_span(\n handler_info.signature,\n span_type=handler_info.span_type,\n leaf=True,\n span_subtype=handler_info.span_subtype,\n span_action=handler_info.span_action,\n extra=handler_info.context,\n ) as span:\n if service in span_modifiers:\n span_modifiers[service](span, args, kwargs)\n return wrapped(*args, **kwargs)\n\n\ndef handle_s3(operation_name, service, instance, args, kwargs, context):\n span_type = \"storage\"\n span_subtype = \"s3\"\n span_action = operation_name\n if len(args) > 1 and \"Bucket\" in args[1]:\n bucket = args[1][\"Bucket\"]\n else:\n # TODO handle Access Points\n bucket = \"\"\n signature = f\"S3 {operation_name} {bucket}\"\n\n context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": bucket, \"type\": span_type}\n\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_dynamodb(operation_name, service, instance, args, kwargs, context):\n span_type = \"db\"\n span_subtype = \"dynamodb\"\n span_action = \"query\"\n if len(args) > 1 and \"TableName\" in args[1]:\n table = args[1][\"TableName\"]\n else:\n table = \"\"\n signature = f\"DynamoDB {operation_name} {table}\".rstrip()\n\n context[\"db\"] = {\"type\": \"dynamodb\", \"instance\": instance.meta.region_name}\n if operation_name 
== \"Query\" and len(args) > 1 and \"KeyConditionExpression\" in args[1]:\n context[\"db\"][\"statement\"] = args[1][\"KeyConditionExpression\"]\n\n context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": table, \"type\": span_type}\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sns(operation_name, service, instance, args, kwargs, context):\n if operation_name != \"Publish\":\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sns\"\n span_action = \"send\"\n topic_name = \"\"\n if len(args) > 1:\n if \"Name\" in args[1]:\n topic_name = args[1][\"Name\"]\n if \"TopicArn\" in args[1]:\n topic_name = args[1][\"TopicArn\"].rsplit(\":\", maxsplit=1)[-1]\n signature = f\"SNS {operation_name} {topic_name}\".rstrip()\n context[\"destination\"][\"service\"] = {\n \"name\": span_subtype,\n \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n \"type\": span_type,\n }\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sqs(operation_name, service, instance, args, kwargs, context):\n if operation_name not in (\"SendMessage\", \"SendMessageBatch\", \"ReceiveMessage\"):\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sqs\"\n span_action = \"send\" if operation_name in (\"SendMessage\", \"SendMessageBatch\") else \"receive\"\n topic_name = \"\"\n batch = \"_BATCH\" if operation_name == \"SendMessageBatch\" else \"\"\n signature_type = \"RECEIVE from\" if span_action == \"receive\" else f\"SEND{batch} to\"\n\n if len(args) > 1:\n topic_name = args[1][\"QueueUrl\"].rsplit(\"/\", maxsplit=1)[-1]\n signature = f\"SQS {signature_type} {topic_name}\".rstrip() if topic_name else f\"SQS {signature_type}\"\n context[\"destination\"][\"service\"] = {\n \"name\": span_subtype,\n \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n \"type\": span_type,\n }\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef modify_span_sqs(span, args, kwargs):\n trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)\n attributes = {constants.TRACEPARENT_HEADER_NAME: {\"DataType\": \"String\", \"StringValue\": trace_parent.to_string()}}\n if trace_parent.tracestate:\n attributes[constants.TRACESTATE_HEADER_NAME] = {\"DataType\": \"String\", \"StringValue\": trace_parent.tracestate}\n if len(args) > 1:\n attributes_count = len(attributes)\n if \"MessageAttributes\" in args[1]:\n messages = [args[1]]\n # elif \"Entries\" in args[1]:\n # messages = args[1][\"Entries\"]\n else:\n messages = []\n for message in messages:\n if len(message[\"MessageAttributes\"]) + attributes_count <= 10:\n message[\"MessageAttributes\"].update(attributes)\n else:\n logger.info(\"Not adding disttracing headers to message due to attribute limit reached\")\n\n\ndef handle_default(operation_name, service, instance, args, kwargs, destination):\n span_type = \"aws\"\n span_subtype = service.lower()\n span_action = operation_name\n\n destination[\"service\"] = {\"name\": span_subtype, \"resource\": span_subtype, \"type\": span_type}\n\n signature = f\"{service}:{operation_name}\"\n return HandlerInfo(signature, span_type, span_subtype, span_action, destination)\n\n\nhandlers = {\n \"S3\": handle_s3,\n \"DynamoDB\": handle_dynamodb,\n \"SNS\": 
handle_sns,\n \"SQS\": handle_sqs,\n \"default\": handle_default,\n}\n\nspan_modifiers = {\n \"SQS\": modify_span_sqs,\n}\n", "path": "elasticapm/instrumentation/packages/botocore.py"}]} | 3,812 | 529 |
gh_patches_debug_7805 | rasdani/github-patches | git_diff | rasterio__rasterio-2657 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`dtypes.get_minimum_dtype` doesn't support `int8`
<!--
WELCOME ABOARD!
Hi and welcome to the Rasterio project. We appreciate bug reports, questions
about documentation, and suggestions for new features. This issue template
isn't intended to ward you off; only to intercept and redirect some particular
categories of reports, and to collect a few important facts that issue reporters
often omit.
The primary forum for questions about installation and usage of Rasterio is
https://rasterio.groups.io/g/main. The authors and other users will answer
questions when they have expertise to share and time to explain. Please take the
time to craft a clear question and be patient about responses. Please do not
bring these questions to Rasterio's issue tracker, which we want to reserve for
bug reports and other actionable issues.
Questions about development of Rasterio, brainstorming, requests for comment,
and not-yet-actionable proposals are welcome in the project's developers
discussion group https://rasterio.groups.io/g/dev. Issues opened in Rasterio's
GitHub repo which haven't been socialized there may be perfunctorily closed.
Please note: Rasterio contains extension modules and is thus susceptible to
C library compatibility issues. If you are reporting an installation or module
import issue, please note that this project only accepts reports about problems
with packages downloaded from the Python Package Index. Conda users should take
issues to one of the following trackers:
- https://github.com/ContinuumIO/anaconda-issues/issues
- https://github.com/conda-forge/rasterio-feedstock
You think you've found a bug? We believe you!
-->
## Expected behavior and actual behavior.
`rasterio.dtypes.get_minimum_dtype` doesn't support `int8` even though it's a supported type in `rasterio` (added by #1595). Looking at [the source](https://github.com/rasterio/rasterio/blob/5cf71dc806adc299108543def00647845ab4fc42/rasterio/dtypes.py#L162), there's no check for signed ranges smaller than `int16`.
Maybe there's a good reason for this, but if it's as simple as just adding a check for the `[-128, 127]` range, I'm happy to make a PR to add that.
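Something along these lines is what I have in mind: a standalone sketch of just the signed-integer branch (illustrative only, not a drop-in patch for `get_minimum_dtype`):

```python
def minimum_signed_dtype(values):
    # Sketch: mirrors the existing rungs, with the missing int8 rung added.
    lo, hi = min(values), max(values)
    if lo >= -128 and hi <= 127:
        return "int8"          # today this falls straight through to int16
    if lo >= -32768 and hi <= 32767:
        return "int16"
    if lo >= -2147483648 and hi <= 2147483647:
        return "int32"
    return "int64"

print(minimum_signed_dtype([-128, 127]))  # int8 rather than int16
```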
## Steps to reproduce the problem.
```python
import rasterio
rasterio.dtypes.get_minimum_dtype([-128, 127]) # int16
```
#### Environment Information
```
rasterio info:
rasterio: 1.3.2
GDAL: 3.5.1
PROJ: 9.0.1
GEOS: 3.10.2
PROJ DATA: /home/az/miniconda3/envs/wxee/share/proj
GDAL DATA: /home/az/miniconda3/envs/wxee/share/gdal
System:
python: 3.9.6 | packaged by conda-forge | (default, Jul 11 2021, 03:39:48) [GCC 9.3.0]
executable: /home/az/miniconda3/envs/wxee/bin/python
machine: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.33
Python deps:
affine: 2.3.1
attrs: None
certifi: 2022.09.24
click: 8.0.1
cligj: 0.7.2
cython: None
numpy: 1.21.1
snuggs: 1.4.7
click-plugins: None
setuptools: 49.6.0.post20210108
```
## Installation Method
Installed from PyPI with `pip==21.2.2`.
Thanks!
</issue>
<code>
[start of rasterio/dtypes.py]
1 """Mapping of GDAL to Numpy data types.
2
3 Since 0.13 we are not importing numpy here and data types are strings.
4 Happily strings can be used throughout Numpy and so existing code will
5 not break.
6
7 """
8 import numpy
9
10 from rasterio.env import GDALVersion
11
12 _GDAL_AT_LEAST_35 = GDALVersion.runtime().at_least("3.5")
13
14 bool_ = 'bool'
15 ubyte = uint8 = 'uint8'
16 sbyte = int8 = 'int8'
17 uint16 = 'uint16'
18 int16 = 'int16'
19 uint32 = 'uint32'
20 int32 = 'int32'
21 uint64 = 'uint64'
22 int64 = 'int64'
23 float32 = 'float32'
24 float64 = 'float64'
25 complex_ = 'complex'
26 complex64 = 'complex64'
27 complex128 = 'complex128'
28
29 complex_int16 = "complex_int16"
30
31 dtype_fwd = {
32 0: None, # GDT_Unknown
33 1: ubyte, # GDT_Byte
34 2: uint16, # GDT_UInt16
35 3: int16, # GDT_Int16
36 4: uint32, # GDT_UInt32
37 5: int32, # GDT_Int32
38 6: float32, # GDT_Float32
39 7: float64, # GDT_Float64
40 8: complex_int16, # GDT_CInt16
41 9: complex64, # GDT_CInt32
42 10: complex64, # GDT_CFloat32
43 11: complex128, # GDT_CFloat64
44 }
45
46 if _GDAL_AT_LEAST_35:
47 dtype_fwd[12] = int64 # GDT_Int64
48 dtype_fwd[13] = uint64 # GDT_UInt64
49
50 dtype_rev = dict((v, k) for k, v in dtype_fwd.items())
51
52 dtype_rev["uint8"] = 1
53 dtype_rev["int8"] = 1
54 dtype_rev["complex"] = 11
55 dtype_rev["complex_int16"] = 8
56
57
58 def _get_gdal_dtype(type_name):
59 try:
60 return dtype_rev[type_name]
61 except KeyError:
62 raise TypeError(
63 f"Unsupported data type {type_name}. "
64 f"Allowed data types: {list(dtype_rev)}."
65 )
66
67 typename_fwd = {
68 0: 'Unknown',
69 1: 'Byte',
70 2: 'UInt16',
71 3: 'Int16',
72 4: 'UInt32',
73 5: 'Int32',
74 6: 'Float32',
75 7: 'Float64',
76 8: 'CInt16',
77 9: 'CInt32',
78 10: 'CFloat32',
79 11: 'CFloat64'}
80
81 if _GDAL_AT_LEAST_35:
82 typename_fwd[12] = 'Int64'
83 typename_fwd[13] = 'UInt64'
84
85 typename_rev = dict((v, k) for k, v in typename_fwd.items())
86
87 dtype_ranges = {
88 'int8': (-128, 127),
89 'uint8': (0, 255),
90 'uint16': (0, 65535),
91 'int16': (-32768, 32767),
92 'uint32': (0, 4294967295),
93 'int32': (-2147483648, 2147483647),
94 'float32': (-3.4028235e+38, 3.4028235e+38),
95 'float64': (-1.7976931348623157e+308, 1.7976931348623157e+308)}
96
97 if _GDAL_AT_LEAST_35:
98 dtype_ranges['int64'] = (-9223372036854775808, 9223372036854775807)
99 dtype_ranges['uint64'] = (0, 18446744073709551615)
100
101
102 def in_dtype_range(value, dtype):
103 """
104 Check if the value is within the dtype range
105 """
106 if numpy.dtype(dtype).kind == "f" and (numpy.isinf(value) or numpy.isnan(value)):
107 return True
108 range_min, range_max = dtype_ranges[dtype]
109 return range_min <= value <= range_max
110
111
112 def _gdal_typename(dt):
113 try:
114 return typename_fwd[dtype_rev[dt]]
115 except KeyError:
116 return typename_fwd[dtype_rev[dt().dtype.name]]
117
118
119 def check_dtype(dt):
120 """Check if dtype is a known dtype."""
121 if str(dt) in dtype_rev:
122 return True
123 elif callable(dt) and str(dt().dtype) in dtype_rev:
124 return True
125 return False
126
127
128 def get_minimum_dtype(values):
129 """Determine minimum type to represent values.
130
131 Uses range checking to determine the minimum integer or floating point
132 data type required to represent values.
133
134 Parameters
135 ----------
136 values: list-like
137
138
139 Returns
140 -------
141 rasterio dtype string
142 """
143 import numpy as np
144
145 if not is_ndarray(values):
146 values = np.array(values)
147
148 min_value = values.min()
149 max_value = values.max()
150
151 if values.dtype.kind in ('i', 'u'):
152 if min_value >= 0:
153 if max_value <= 255:
154 return uint8
155 elif max_value <= 65535:
156 return uint16
157 elif max_value <= 4294967295:
158 return uint32
159 if not _GDAL_AT_LEAST_35:
160 raise ValueError("Values out of range for supported dtypes")
161 return uint64
162 elif min_value >= -32768 and max_value <= 32767:
163 return int16
164 elif min_value >= -2147483648 and max_value <= 2147483647:
165 return int32
166 if not _GDAL_AT_LEAST_35:
167 raise ValueError("Values out of range for supported dtypes")
168 return int64
169
170 else:
171 if min_value >= -3.4028235e+38 and max_value <= 3.4028235e+38:
172 return float32
173 return float64
174
175
176 def is_ndarray(array):
177 """Check if array is a ndarray."""
178 import numpy as np
179
180 return isinstance(array, np.ndarray) or hasattr(array, '__array__')
181
182
183 def can_cast_dtype(values, dtype):
184 """Test if values can be cast to dtype without loss of information.
185
186 Parameters
187 ----------
188 values: list-like
189 dtype: numpy.dtype or string
190
191 Returns
192 -------
193 boolean
194 True if values can be cast to data type.
195 """
196 import numpy as np
197
198 if not is_ndarray(values):
199 values = np.array(values)
200
201 if values.dtype.name == _getnpdtype(dtype).name:
202 return True
203
204 elif values.dtype.kind == 'f':
205 return np.allclose(values, values.astype(dtype), equal_nan=True)
206
207 else:
208 return np.array_equal(values, values.astype(dtype))
209
210
211 def validate_dtype(values, valid_dtypes):
212 """Test if dtype of values is one of valid_dtypes.
213
214 Parameters
215 ----------
216 values: list-like
217 valid_dtypes: list-like
218 list of valid dtype strings, e.g., ('int16', 'int32')
219
220 Returns
221 -------
222 boolean:
223 True if dtype of values is one of valid_dtypes
224 """
225 import numpy as np
226
227 if not is_ndarray(values):
228 values = np.array(values)
229
230 return (values.dtype.name in valid_dtypes or
231 get_minimum_dtype(values) in valid_dtypes)
232
233
234 def _is_complex_int(dtype):
235 return isinstance(dtype, str) and dtype.startswith("complex_int")
236
237
238 def _getnpdtype(dtype):
239 import numpy as np
240 if _is_complex_int(dtype):
241 return np.dtype("complex64")
242 else:
243 return np.dtype(dtype)
244
[end of rasterio/dtypes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py
--- a/rasterio/dtypes.py
+++ b/rasterio/dtypes.py
@@ -159,6 +159,8 @@
if not _GDAL_AT_LEAST_35:
raise ValueError("Values out of range for supported dtypes")
return uint64
+ elif min_value >= -128 and max_value <= 127:
+ return int8
elif min_value >= -32768 and max_value <= 32767:
return int16
elif min_value >= -2147483648 and max_value <= 2147483647:
| {"golden_diff": "diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py\n--- a/rasterio/dtypes.py\n+++ b/rasterio/dtypes.py\n@@ -159,6 +159,8 @@\n if not _GDAL_AT_LEAST_35:\n raise ValueError(\"Values out of range for supported dtypes\")\n return uint64\n+ elif min_value >= -128 and max_value <= 127:\n+ return int8\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n", "issue": "`dtypes.get_minimum_dtype` doesn't support `int8`\n<!--\r\n\r\nWELCOME ABOARD!\r\n\r\nHi and welcome to the Rasterio project. We appreciate bug reports, questions\r\nabout documentation, and suggestions for new features. This issue template\r\nisn't intended to ward you off; only to intercept and redirect some particular\r\ncategories of reports, and to collect a few important facts that issue reporters\r\noften omit.\r\n\r\nThe primary forum for questions about installation and usage of Rasterio is \r\nhttps://rasterio.groups.io/g/main. The authors and other users will answer \r\nquestions when they have expertise to share and time to explain. Please take the\r\ntime to craft a clear question and be patient about responses. Please do not\r\nbring these questions to Rasterio's issue tracker, which we want to reserve for\r\nbug reports and other actionable issues.\r\n\r\nQuestions about development of Rasterio, brainstorming, requests for comment,\r\nand not-yet-actionable proposals are welcome in the project's developers \r\ndiscussion group https://rasterio.groups.io/g/dev. Issues opened in Rasterio's\r\nGitHub repo which haven't been socialized there may be perfunctorily closed.\r\n\r\nPlease note: Rasterio contains extension modules and is thus susceptible to\r\nC library compatibility issues. If you are reporting an installation or module\r\nimport issue, please note that this project only accepts reports about problems\r\nwith packages downloaded from the Python Package Index. Conda users should take\r\nissues to one of the following trackers:\r\n\r\n- https://github.com/ContinuumIO/anaconda-issues/issues\r\n- https://github.com/conda-forge/rasterio-feedstock\r\n\r\nYou think you've found a bug? We believe you!\r\n-->\r\n\r\n## Expected behavior and actual behavior.\r\n\r\n`rasterio.dtypes.get_minimum_dtype` doesn't support `int8` even though it's a supported type in `rasterio` (added by #1595). Looking at [the source](https://github.com/rasterio/rasterio/blob/5cf71dc806adc299108543def00647845ab4fc42/rasterio/dtypes.py#L162), there's no check for signed ranges smaller than `int16`. 
\r\n\r\nMaybe there's a good reason for this, but if it's as simple as just adding a check for the `[-128, 127]` range, I'm happy to make a PR to add that.\r\n\r\n## Steps to reproduce the problem.\r\n\r\n```python\r\nimport rasterio\r\n\r\nrasterio.dtypes.get_minimum_dtype([-128, 127]) # int16\r\n```\r\n\r\n#### Environment Information\r\n```\r\nrasterio info:\r\n rasterio: 1.3.2\r\n GDAL: 3.5.1\r\n PROJ: 9.0.1\r\n GEOS: 3.10.2\r\n PROJ DATA: /home/az/miniconda3/envs/wxee/share/proj\r\n GDAL DATA: /home/az/miniconda3/envs/wxee/share/gdal\r\n\r\nSystem:\r\n python: 3.9.6 | packaged by conda-forge | (default, Jul 11 2021, 03:39:48) [GCC 9.3.0]\r\nexecutable: /home/az/miniconda3/envs/wxee/bin/python\r\n machine: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.33\r\n\r\nPython deps:\r\n affine: 2.3.1\r\n attrs: None\r\n certifi: 2022.09.24\r\n click: 8.0.1\r\n cligj: 0.7.2\r\n cython: None\r\n numpy: 1.21.1\r\n snuggs: 1.4.7\r\nclick-plugins: None\r\nsetuptools: 49.6.0.post20210108\r\n```\r\n## Installation Method\r\n\r\nInstalled from PyPI with `pip==21.2.2`.\r\n\r\nThanks!\n", "before_files": [{"content": "\"\"\"Mapping of GDAL to Numpy data types.\n\nSince 0.13 we are not importing numpy here and data types are strings.\nHappily strings can be used throughout Numpy and so existing code will\nnot break.\n\n\"\"\"\nimport numpy\n\nfrom rasterio.env import GDALVersion\n\n_GDAL_AT_LEAST_35 = GDALVersion.runtime().at_least(\"3.5\")\n\nbool_ = 'bool'\nubyte = uint8 = 'uint8'\nsbyte = int8 = 'int8'\nuint16 = 'uint16'\nint16 = 'int16'\nuint32 = 'uint32'\nint32 = 'int32'\nuint64 = 'uint64'\nint64 = 'int64'\nfloat32 = 'float32'\nfloat64 = 'float64'\ncomplex_ = 'complex'\ncomplex64 = 'complex64'\ncomplex128 = 'complex128'\n\ncomplex_int16 = \"complex_int16\"\n\ndtype_fwd = {\n 0: None, # GDT_Unknown\n 1: ubyte, # GDT_Byte\n 2: uint16, # GDT_UInt16\n 3: int16, # GDT_Int16\n 4: uint32, # GDT_UInt32\n 5: int32, # GDT_Int32\n 6: float32, # GDT_Float32\n 7: float64, # GDT_Float64\n 8: complex_int16, # GDT_CInt16\n 9: complex64, # GDT_CInt32\n 10: complex64, # GDT_CFloat32\n 11: complex128, # GDT_CFloat64\n}\n\nif _GDAL_AT_LEAST_35:\n dtype_fwd[12] = int64 # GDT_Int64\n dtype_fwd[13] = uint64 # GDT_UInt64\n\ndtype_rev = dict((v, k) for k, v in dtype_fwd.items())\n\ndtype_rev[\"uint8\"] = 1\ndtype_rev[\"int8\"] = 1\ndtype_rev[\"complex\"] = 11\ndtype_rev[\"complex_int16\"] = 8\n\n\ndef _get_gdal_dtype(type_name):\n try:\n return dtype_rev[type_name]\n except KeyError:\n raise TypeError(\n f\"Unsupported data type {type_name}. 
\"\n f\"Allowed data types: {list(dtype_rev)}.\"\n )\n\ntypename_fwd = {\n 0: 'Unknown',\n 1: 'Byte',\n 2: 'UInt16',\n 3: 'Int16',\n 4: 'UInt32',\n 5: 'Int32',\n 6: 'Float32',\n 7: 'Float64',\n 8: 'CInt16',\n 9: 'CInt32',\n 10: 'CFloat32',\n 11: 'CFloat64'}\n\nif _GDAL_AT_LEAST_35:\n typename_fwd[12] = 'Int64'\n typename_fwd[13] = 'UInt64'\n\ntypename_rev = dict((v, k) for k, v in typename_fwd.items())\n\ndtype_ranges = {\n 'int8': (-128, 127),\n 'uint8': (0, 255),\n 'uint16': (0, 65535),\n 'int16': (-32768, 32767),\n 'uint32': (0, 4294967295),\n 'int32': (-2147483648, 2147483647),\n 'float32': (-3.4028235e+38, 3.4028235e+38),\n 'float64': (-1.7976931348623157e+308, 1.7976931348623157e+308)}\n\nif _GDAL_AT_LEAST_35:\n dtype_ranges['int64'] = (-9223372036854775808, 9223372036854775807)\n dtype_ranges['uint64'] = (0, 18446744073709551615)\n\n\ndef in_dtype_range(value, dtype):\n \"\"\"\n Check if the value is within the dtype range\n \"\"\"\n if numpy.dtype(dtype).kind == \"f\" and (numpy.isinf(value) or numpy.isnan(value)):\n return True\n range_min, range_max = dtype_ranges[dtype]\n return range_min <= value <= range_max\n\n\ndef _gdal_typename(dt):\n try:\n return typename_fwd[dtype_rev[dt]]\n except KeyError:\n return typename_fwd[dtype_rev[dt().dtype.name]]\n\n\ndef check_dtype(dt):\n \"\"\"Check if dtype is a known dtype.\"\"\"\n if str(dt) in dtype_rev:\n return True\n elif callable(dt) and str(dt().dtype) in dtype_rev:\n return True\n return False\n\n\ndef get_minimum_dtype(values):\n \"\"\"Determine minimum type to represent values.\n\n Uses range checking to determine the minimum integer or floating point\n data type required to represent values.\n\n Parameters\n ----------\n values: list-like\n\n\n Returns\n -------\n rasterio dtype string\n \"\"\"\n import numpy as np\n\n if not is_ndarray(values):\n values = np.array(values)\n\n min_value = values.min()\n max_value = values.max()\n\n if values.dtype.kind in ('i', 'u'):\n if min_value >= 0:\n if max_value <= 255:\n return uint8\n elif max_value <= 65535:\n return uint16\n elif max_value <= 4294967295:\n return uint32\n if not _GDAL_AT_LEAST_35:\n raise ValueError(\"Values out of range for supported dtypes\")\n return uint64\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n if not _GDAL_AT_LEAST_35:\n raise ValueError(\"Values out of range for supported dtypes\")\n return int64\n\n else:\n if min_value >= -3.4028235e+38 and max_value <= 3.4028235e+38:\n return float32\n return float64\n\n\ndef is_ndarray(array):\n \"\"\"Check if array is a ndarray.\"\"\"\n import numpy as np\n\n return isinstance(array, np.ndarray) or hasattr(array, '__array__')\n\n\ndef can_cast_dtype(values, dtype):\n \"\"\"Test if values can be cast to dtype without loss of information.\n\n Parameters\n ----------\n values: list-like\n dtype: numpy.dtype or string\n\n Returns\n -------\n boolean\n True if values can be cast to data type.\n \"\"\"\n import numpy as np\n\n if not is_ndarray(values):\n values = np.array(values)\n\n if values.dtype.name == _getnpdtype(dtype).name:\n return True\n\n elif values.dtype.kind == 'f':\n return np.allclose(values, values.astype(dtype), equal_nan=True)\n\n else:\n return np.array_equal(values, values.astype(dtype))\n\n\ndef validate_dtype(values, valid_dtypes):\n \"\"\"Test if dtype of values is one of valid_dtypes.\n\n Parameters\n ----------\n values: list-like\n valid_dtypes: list-like\n list of valid dtype strings, e.g., 
('int16', 'int32')\n\n Returns\n -------\n boolean:\n True if dtype of values is one of valid_dtypes\n \"\"\"\n import numpy as np\n\n if not is_ndarray(values):\n values = np.array(values)\n\n return (values.dtype.name in valid_dtypes or\n get_minimum_dtype(values) in valid_dtypes)\n\n\ndef _is_complex_int(dtype):\n return isinstance(dtype, str) and dtype.startswith(\"complex_int\")\n\n\ndef _getnpdtype(dtype):\n import numpy as np\n if _is_complex_int(dtype):\n return np.dtype(\"complex64\")\n else:\n return np.dtype(dtype)\n", "path": "rasterio/dtypes.py"}]} | 4,050 | 167 |
gh_patches_debug_23891 | rasdani/github-patches | git_diff | DDMAL__CantusDB-945 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Users should only be shown the View-Edit toggle if they have edit access for the source/chant in question
@annamorphism made a comment on #441 that really deserves its own issue
> also is there a way to not have the "Edit" tab show up for unauthorized people? it's annoying to try to edit something and then be sent to a 403.
Currently, the view-edit toggle is being displayed whenever the user is logged in. Instead, we need to properly check that the user is actually allowed to edit the chant before displaying the Edit link.
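A minimal sketch of what I mean, assuming the permission helper the chant views already rely on (`user_can_edit_chants_in_source(user, source)`) can be reused here; everything else in the snippet is illustrative:

```python
from django.views.generic import DetailView

from main_app.views.chant import user_can_edit_chants_in_source  # existing chant-side helper


class SequenceDetailView(DetailView):
    ...  # model, template_name, etc. as they are today

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        # Expose the permission check so the template can hide the Edit tab
        # instead of sending unauthorized users to a 403.
        context["user_can_edit_sequence"] = user_can_edit_chants_in_source(
            self.request.user, self.get_object().source
        )
        return context
```

The detail templates would then wrap the Edit link in `{% if user_can_edit_sequence %}` rather than only checking that the user is logged in.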
</issue>
<code>
[start of django/cantusdb_project/main_app/views/sequence.py]
1 from django.views.generic import DetailView, ListView, UpdateView
2 from main_app.models import Sequence
3 from django.db.models import Q
4 from main_app.forms import SequenceEditForm
5 from django.contrib.auth.mixins import LoginRequiredMixin
6 from django.contrib import messages
7 from django.contrib.auth.mixins import UserPassesTestMixin
8 from django.core.exceptions import PermissionDenied
9
10
11 class SequenceDetailView(DetailView):
12 """
13 Displays a single Sequence object. Accessed with ``sequences/<int:pk>``
14 """
15
16 model = Sequence
17 context_object_name = "sequence"
18 template_name = "sequence_detail.html"
19
20 def get_context_data(self, **kwargs):
21 sequence = self.get_object()
22 source = sequence.source
23 # if the sequence's source isn't published,
24 # only logged-in users should be able to view the sequence's detail page
25 if (
26 (source is not None)
27 and (source.published is False)
28 and (not self.request.user.is_authenticated)
29 ):
30 raise PermissionDenied()
31
32 context = super().get_context_data(**kwargs)
33 context["concordances"] = (
34 Sequence.objects.filter(cantus_id=sequence.cantus_id)
35 .select_related("source")
36 .order_by("siglum")
37 )
38 return context
39
40
41 class SequenceListView(ListView):
42 """
43 Displays a list of Sequence objects. Accessed with ``sequences/``
44 """
45
46 paginate_by = 100
47 context_object_name = "sequences"
48 template_name = "sequence_list.html"
49
50 def get_queryset(self):
51 queryset = Sequence.objects.select_related("source")
52 display_unpublished = self.request.user.is_authenticated
53 if display_unpublished:
54 q_obj_filter = Q()
55 else:
56 q_obj_filter = Q(source__published=True)
57
58 if self.request.GET.get("incipit"):
59 incipit = self.request.GET.get("incipit")
60 q_obj_filter &= Q(incipit__icontains=incipit)
61 if self.request.GET.get("siglum"):
62 siglum = self.request.GET.get("siglum")
63 q_obj_filter &= Q(siglum__icontains=siglum)
64 if self.request.GET.get("cantus_id"):
65 cantus_id = self.request.GET.get("cantus_id")
66 q_obj_filter &= Q(cantus_id__icontains=cantus_id)
67
68 return queryset.filter(q_obj_filter).order_by("siglum", "s_sequence")
69
70
71 class SequenceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
72 template_name = "sequence_edit.html"
73 model = Sequence
74 form_class = SequenceEditForm
75 pk_url_kwarg = "sequence_id"
76
77 def form_valid(self, form):
78 form.instance.last_updated_by = self.request.user
79 messages.success(
80 self.request,
81 "Sequence updated successfully!",
82 )
83 return super().form_valid(form)
84
85 def test_func(self):
86 user = self.request.user
87 # checks if the user is a project manager (they should have the privilege to edit any sequence)
88 is_project_manager = user.groups.filter(name="project manager").exists()
89
90 if is_project_manager:
91 return True
92 else:
93 return False
94
[end of django/cantusdb_project/main_app/views/sequence.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/views/sequence.py b/django/cantusdb_project/main_app/views/sequence.py
--- a/django/cantusdb_project/main_app/views/sequence.py
+++ b/django/cantusdb_project/main_app/views/sequence.py
@@ -6,6 +6,7 @@
from django.contrib import messages
from django.contrib.auth.mixins import UserPassesTestMixin
from django.core.exceptions import PermissionDenied
+from main_app.views.chant import user_can_edit_chants_in_source
class SequenceDetailView(DetailView):
@@ -20,6 +21,8 @@
def get_context_data(self, **kwargs):
sequence = self.get_object()
source = sequence.source
+ user = self.request.user
+
# if the sequence's source isn't published,
# only logged-in users should be able to view the sequence's detail page
if (
@@ -35,6 +38,7 @@
.select_related("source")
.order_by("siglum")
)
+ context["user_can_edit_sequence"] = user_can_edit_chants_in_source(user, source)
return context
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/sequence.py b/django/cantusdb_project/main_app/views/sequence.py\n--- a/django/cantusdb_project/main_app/views/sequence.py\n+++ b/django/cantusdb_project/main_app/views/sequence.py\n@@ -6,6 +6,7 @@\n from django.contrib import messages\n from django.contrib.auth.mixins import UserPassesTestMixin\n from django.core.exceptions import PermissionDenied\n+from main_app.views.chant import user_can_edit_chants_in_source\n \n \n class SequenceDetailView(DetailView):\n@@ -20,6 +21,8 @@\n def get_context_data(self, **kwargs):\n sequence = self.get_object()\n source = sequence.source\n+ user = self.request.user\n+\n # if the sequence's source isn't published,\n # only logged-in users should be able to view the sequence's detail page\n if (\n@@ -35,6 +38,7 @@\n .select_related(\"source\")\n .order_by(\"siglum\")\n )\n+ context[\"user_can_edit_sequence\"] = user_can_edit_chants_in_source(user, source)\n return context\n", "issue": "Users should only be shown the View-Edit toggle if they have edit access for the source/chant in question\n@annamorphism made a comment on #441 that really deserves its own issue\r\n\r\n> also is there a way to not have the \"Edit\" tab show up for unauthorized people? it's annoying to try to edit something and then be sent to a 403.\r\n\r\nCurrently, the view-edit toggle is being displayed whenever the user is logged in. Instead, we need to properly check that the user is actually allowed to edit the chant before displaying the Edit link.\n", "before_files": [{"content": "from django.views.generic import DetailView, ListView, UpdateView\nfrom main_app.models import Sequence\nfrom django.db.models import Q\nfrom main_app.forms import SequenceEditForm\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib import messages\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.core.exceptions import PermissionDenied\n\n\nclass SequenceDetailView(DetailView):\n \"\"\"\n Displays a single Sequence object. Accessed with ``sequences/<int:pk>``\n \"\"\"\n\n model = Sequence\n context_object_name = \"sequence\"\n template_name = \"sequence_detail.html\"\n\n def get_context_data(self, **kwargs):\n sequence = self.get_object()\n source = sequence.source\n # if the sequence's source isn't published,\n # only logged-in users should be able to view the sequence's detail page\n if (\n (source is not None)\n and (source.published is False)\n and (not self.request.user.is_authenticated)\n ):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n context[\"concordances\"] = (\n Sequence.objects.filter(cantus_id=sequence.cantus_id)\n .select_related(\"source\")\n .order_by(\"siglum\")\n )\n return context\n\n\nclass SequenceListView(ListView):\n \"\"\"\n Displays a list of Sequence objects. 
Accessed with ``sequences/``\n \"\"\"\n\n paginate_by = 100\n context_object_name = \"sequences\"\n template_name = \"sequence_list.html\"\n\n def get_queryset(self):\n queryset = Sequence.objects.select_related(\"source\")\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n q_obj_filter = Q()\n else:\n q_obj_filter = Q(source__published=True)\n\n if self.request.GET.get(\"incipit\"):\n incipit = self.request.GET.get(\"incipit\")\n q_obj_filter &= Q(incipit__icontains=incipit)\n if self.request.GET.get(\"siglum\"):\n siglum = self.request.GET.get(\"siglum\")\n q_obj_filter &= Q(siglum__icontains=siglum)\n if self.request.GET.get(\"cantus_id\"):\n cantus_id = self.request.GET.get(\"cantus_id\")\n q_obj_filter &= Q(cantus_id__icontains=cantus_id)\n\n return queryset.filter(q_obj_filter).order_by(\"siglum\", \"s_sequence\")\n\n\nclass SequenceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):\n template_name = \"sequence_edit.html\"\n model = Sequence\n form_class = SequenceEditForm\n pk_url_kwarg = \"sequence_id\"\n\n def form_valid(self, form):\n form.instance.last_updated_by = self.request.user\n messages.success(\n self.request,\n \"Sequence updated successfully!\",\n )\n return super().form_valid(form)\n\n def test_func(self):\n user = self.request.user\n # checks if the user is a project manager (they should have the privilege to edit any sequence)\n is_project_manager = user.groups.filter(name=\"project manager\").exists()\n\n if is_project_manager:\n return True\n else:\n return False\n", "path": "django/cantusdb_project/main_app/views/sequence.py"}]} | 1,530 | 255 |
gh_patches_debug_20558 | rasdani/github-patches | git_diff | svthalia__concrexit-2546 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot add events on staging/master
### Describe the bug
`KeyError at /admin/events/event/add/`
### How to reproduce
Steps to reproduce the behaviour:
1. Go to admin
2. Try to add event
3. See error mentioned above
### Expected behaviour
Get the actual admin page
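### Probable cause (untested guess)
The add view resolves without an `object_id` URL kwarg, so the unguarded `kwargs["object_id"]` lookup in `EventAdmin.get_field_queryset` is presumably what raises the `KeyError`. A defensive sketch of that method (a drop-in for the existing one in `website/events/admin/event.py`, reusing that module's imports):

```python
def get_field_queryset(self, db, db_field, request):
    # resolve() yields no "object_id" kwarg on /admin/events/event/add/
    pk = resolve(request.path_info).kwargs.get("object_id")
    if db_field.name == "organisers" and not request.user.has_perm(
        "events.override_organiser"
    ):
        groups = request.member.get_member_groups()
        if pk is None:
            return groups
        return groups.union(MemberGroup.objects.filter(event_organiser__pk=pk))
    return super().get_field_queryset(db, db_field, request)
```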
</issue>
<code>
[start of website/events/admin/event.py]
1 """Registers admin interfaces for the event model."""
2
3 from django.contrib import admin
4 from django.db.models import Count, Q
5 from django.template.defaultfilters import date as _date
6 from django.urls import reverse, path, resolve
7 from django.utils import timezone
8 from django.utils.html import format_html
9 from django.utils.translation import gettext_lazy as _
10
11 from activemembers.models import MemberGroup
12 from events import services
13 from events import models
14 from events.admin.filters import LectureYearFilter
15 from events.admin.forms import RegistrationInformationFieldForm, EventAdminForm
16 from events.admin.inlines import (
17 RegistrationInformationFieldInline,
18 PizzaEventInline,
19 PromotionRequestInline,
20 )
21 from events.admin.views import (
22 EventAdminDetails,
23 EventRegistrationsExport,
24 EventMessage,
25 EventMarkPresentQR,
26 )
27 from utils.admin import DoNextModelAdmin
28
29
30 @admin.register(models.Event)
31 class EventAdmin(DoNextModelAdmin):
32 """Manage the events."""
33
34 form = EventAdminForm
35
36 inlines = (
37 RegistrationInformationFieldInline,
38 PizzaEventInline,
39 PromotionRequestInline,
40 )
41 list_display = (
42 "overview_link",
43 "event_date",
44 "registration_date",
45 "num_participants",
46 "category",
47 "published",
48 "edit_link",
49 )
50 list_display_links = ("edit_link",)
51 list_filter = (LectureYearFilter, "start", "published", "category")
52 actions = ("make_published", "make_unpublished")
53 date_hierarchy = "start"
54 search_fields = ("title", "description")
55 prepopulated_fields = {"map_location": ("location",)}
56 filter_horizontal = ("documents", "organisers")
57
58 fieldsets = (
59 (
60 _("General"),
61 {
62 "fields": (
63 "title",
64 "published",
65 "organisers",
66 )
67 },
68 ),
69 (
70 _("Detail"),
71 {
72 "fields": (
73 "category",
74 "start",
75 "end",
76 "description",
77 "caption",
78 "location",
79 "map_location",
80 ),
81 "classes": ("collapse", "start-open"),
82 },
83 ),
84 (
85 _("Registrations"),
86 {
87 "fields": (
88 "price",
89 "fine",
90 "tpay_allowed",
91 "max_participants",
92 "registration_start",
93 "registration_end",
94 "cancel_deadline",
95 "send_cancel_email",
96 "optional_registrations",
97 "no_registration_message",
98 ),
99 "classes": ("collapse",),
100 },
101 ),
102 (
103 _("Extra"),
104 {"fields": ("slide", "documents", "shift"), "classes": ("collapse",)},
105 ),
106 )
107
108 def get_queryset(self, request):
109 return (
110 super()
111 .get_queryset(request)
112 .annotate(
113 participant_count=Count(
114 "eventregistration",
115 filter=~Q(eventregistration__date_cancelled__lt=timezone.now()),
116 )
117 )
118 )
119
120 def get_form(self, request, obj=None, change=False, **kwargs):
121 form = super().get_form(request, obj, change, **kwargs)
122 form.clean = lambda form: form.instance.clean_changes(form.changed_data)
123 return form
124
125 def overview_link(self, obj):
126 return format_html(
127 '<a href="{link}">{title}</a>',
128 link=reverse("admin:events_event_details", kwargs={"pk": obj.pk}),
129 title=obj.title,
130 )
131
132 def has_change_permission(self, request, obj=None):
133 """Only allow access to the change form if the user is an organiser."""
134 if obj is not None and not services.is_organiser(request.member, obj):
135 return False
136 return super().has_change_permission(request, obj)
137
138 def event_date(self, obj):
139 event_date = timezone.make_naive(obj.start)
140 return _date(event_date, "l d b Y, G:i")
141
142 event_date.short_description = _("Event Date")
143 event_date.admin_order_field = "start"
144
145 def registration_date(self, obj):
146 if obj.registration_start is not None:
147 start_date = timezone.make_naive(obj.registration_start)
148 else:
149 start_date = obj.registration_start
150
151 return _date(start_date, "l d b Y, G:i")
152
153 registration_date.short_description = _("Registration Start")
154 registration_date.admin_order_field = "registration_start"
155
156 def edit_link(self, obj):
157 return _("Edit")
158
159 edit_link.short_description = ""
160
161 def num_participants(self, obj):
162 """Pretty-print the number of participants."""
163 num = obj.participant_count # from annotation
164 if not obj.max_participants:
165 return f"{num}/∞"
166 return f"{num}/{obj.max_participants}"
167
168 num_participants.short_description = _("Number of participants")
169
170 def make_published(self, request, queryset):
171 """Change the status of the event to published."""
172 self._change_published(request, queryset, True)
173
174 make_published.short_description = _("Publish selected events")
175
176 def make_unpublished(self, request, queryset):
177 """Change the status of the event to unpublished."""
178 self._change_published(request, queryset, False)
179
180 make_unpublished.short_description = _("Unpublish selected events")
181
182 @staticmethod
183 def _change_published(request, queryset, published):
184 if not request.user.is_superuser:
185 queryset = queryset.filter(
186 organisers__in=request.member.get_member_groups()
187 )
188 queryset.update(published=published)
189
190 def save_formset(self, request, form, formset, change):
191 """Save formsets with their order."""
192 formset.save()
193
194 informationfield_forms = (
195 x
196 for x in formset.forms
197 if isinstance(x, RegistrationInformationFieldForm)
198 and "DELETE" not in x.changed_data
199 )
200 form.instance.set_registrationinformationfield_order(
201 [
202 f.instance.pk
203 for f in sorted(
204 informationfield_forms,
205 key=lambda x: (x.cleaned_data["order"], x.instance.pk),
206 )
207 ]
208 )
209 form.instance.save()
210
211 def get_actions(self, request):
212 actions = super().get_actions(request)
213 if "delete_selected" in actions:
214 del actions["delete_selected"]
215 return actions
216
217 def get_formsets_with_inlines(self, request, obj=None):
218 for inline in self.get_inline_instances(request, obj):
219 if self.has_change_permission(request, obj) or obj is None:
220 yield inline.get_formset(request, obj), inline
221
222 def get_urls(self):
223 urls = super().get_urls()
224 custom_urls = [
225 path(
226 "<int:pk>/details/",
227 self.admin_site.admin_view(EventAdminDetails.as_view()),
228 name="events_event_details",
229 ),
230 path(
231 "<int:pk>/export/",
232 self.admin_site.admin_view(EventRegistrationsExport.as_view()),
233 name="events_event_export",
234 ),
235 path(
236 "<int:pk>/message/",
237 self.admin_site.admin_view(EventMessage.as_view(admin=self)),
238 name="events_event_message",
239 ),
240 path(
241 "<int:pk>/mark-present-qr/",
242 self.admin_site.admin_view(EventMarkPresentQR.as_view()),
243 name="events_event_mark_present_qr",
244 ),
245 ]
246 return custom_urls + urls
247
248 def get_field_queryset(self, db, db_field, request):
249 """Members without the can view as organiser permission can only assign their own groups as organiser."""
250 pk = resolve(request.path_info).kwargs["object_id"]
251 if db_field.name == "organisers" and not request.user.has_perm(
252 "events.override_organiser"
253 ):
254 return request.member.get_member_groups().union(
255 MemberGroup.objects.filter(event_organiser__pk=pk)
256 )
257 return super().get_field_queryset(db, db_field, request)
258
[end of website/events/admin/event.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/events/admin/event.py b/website/events/admin/event.py
--- a/website/events/admin/event.py
+++ b/website/events/admin/event.py
@@ -246,12 +246,15 @@
return custom_urls + urls
def get_field_queryset(self, db, db_field, request):
- """Members without the can view as organiser permission can only assign their own groups as organiser."""
- pk = resolve(request.path_info).kwargs["object_id"]
+ """Members without the 'can view as organiser' permission can only assign their own groups as organiser."""
+ pk = resolve(request.path_info).kwargs.get("object_id")
if db_field.name == "organisers" and not request.user.has_perm(
"events.override_organiser"
):
- return request.member.get_member_groups().union(
- MemberGroup.objects.filter(event_organiser__pk=pk)
- )
+ if pk is None:
+ return request.member.get_member_groups()
+ else:
+ return request.member.get_member_groups().union(
+ MemberGroup.objects.filter(event_organiser__pk=pk)
+ )
return super().get_field_queryset(db, db_field, request)
| {"golden_diff": "diff --git a/website/events/admin/event.py b/website/events/admin/event.py\n--- a/website/events/admin/event.py\n+++ b/website/events/admin/event.py\n@@ -246,12 +246,15 @@\n return custom_urls + urls\n \n def get_field_queryset(self, db, db_field, request):\n- \"\"\"Members without the can view as organiser permission can only assign their own groups as organiser.\"\"\"\n- pk = resolve(request.path_info).kwargs[\"object_id\"]\n+ \"\"\"Members without the 'can view as organiser' permission can only assign their own groups as organiser.\"\"\"\n+ pk = resolve(request.path_info).kwargs.get(\"object_id\")\n if db_field.name == \"organisers\" and not request.user.has_perm(\n \"events.override_organiser\"\n ):\n- return request.member.get_member_groups().union(\n- MemberGroup.objects.filter(event_organiser__pk=pk)\n- )\n+ if pk is None:\n+ return request.member.get_member_groups()\n+ else:\n+ return request.member.get_member_groups().union(\n+ MemberGroup.objects.filter(event_organiser__pk=pk)\n+ )\n return super().get_field_queryset(db, db_field, request)\n", "issue": "Cannot add events on staging/master\n### Describe the bug\r\n`KeyError at /admin/events/event/add/`\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to admin\r\n2. Try to add event\r\n3. See error mentioned above\r\n\r\n### Expected behaviour\r\nGet the actual admin page\n", "before_files": [{"content": "\"\"\"Registers admin interfaces for the event model.\"\"\"\n\nfrom django.contrib import admin\nfrom django.db.models import Count, Q\nfrom django.template.defaultfilters import date as _date\nfrom django.urls import reverse, path, resolve\nfrom django.utils import timezone\nfrom django.utils.html import format_html\nfrom django.utils.translation import gettext_lazy as _\n\nfrom activemembers.models import MemberGroup\nfrom events import services\nfrom events import models\nfrom events.admin.filters import LectureYearFilter\nfrom events.admin.forms import RegistrationInformationFieldForm, EventAdminForm\nfrom events.admin.inlines import (\n RegistrationInformationFieldInline,\n PizzaEventInline,\n PromotionRequestInline,\n)\nfrom events.admin.views import (\n EventAdminDetails,\n EventRegistrationsExport,\n EventMessage,\n EventMarkPresentQR,\n)\nfrom utils.admin import DoNextModelAdmin\n\n\[email protected](models.Event)\nclass EventAdmin(DoNextModelAdmin):\n \"\"\"Manage the events.\"\"\"\n\n form = EventAdminForm\n\n inlines = (\n RegistrationInformationFieldInline,\n PizzaEventInline,\n PromotionRequestInline,\n )\n list_display = (\n \"overview_link\",\n \"event_date\",\n \"registration_date\",\n \"num_participants\",\n \"category\",\n \"published\",\n \"edit_link\",\n )\n list_display_links = (\"edit_link\",)\n list_filter = (LectureYearFilter, \"start\", \"published\", \"category\")\n actions = (\"make_published\", \"make_unpublished\")\n date_hierarchy = \"start\"\n search_fields = (\"title\", \"description\")\n prepopulated_fields = {\"map_location\": (\"location\",)}\n filter_horizontal = (\"documents\", \"organisers\")\n\n fieldsets = (\n (\n _(\"General\"),\n {\n \"fields\": (\n \"title\",\n \"published\",\n \"organisers\",\n )\n },\n ),\n (\n _(\"Detail\"),\n {\n \"fields\": (\n \"category\",\n \"start\",\n \"end\",\n \"description\",\n \"caption\",\n \"location\",\n \"map_location\",\n ),\n \"classes\": (\"collapse\", \"start-open\"),\n },\n ),\n (\n _(\"Registrations\"),\n {\n \"fields\": (\n \"price\",\n \"fine\",\n \"tpay_allowed\",\n \"max_participants\",\n 
\"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"send_cancel_email\",\n \"optional_registrations\",\n \"no_registration_message\",\n ),\n \"classes\": (\"collapse\",),\n },\n ),\n (\n _(\"Extra\"),\n {\"fields\": (\"slide\", \"documents\", \"shift\"), \"classes\": (\"collapse\",)},\n ),\n )\n\n def get_queryset(self, request):\n return (\n super()\n .get_queryset(request)\n .annotate(\n participant_count=Count(\n \"eventregistration\",\n filter=~Q(eventregistration__date_cancelled__lt=timezone.now()),\n )\n )\n )\n\n def get_form(self, request, obj=None, change=False, **kwargs):\n form = super().get_form(request, obj, change, **kwargs)\n form.clean = lambda form: form.instance.clean_changes(form.changed_data)\n return form\n\n def overview_link(self, obj):\n return format_html(\n '<a href=\"{link}\">{title}</a>',\n link=reverse(\"admin:events_event_details\", kwargs={\"pk\": obj.pk}),\n title=obj.title,\n )\n\n def has_change_permission(self, request, obj=None):\n \"\"\"Only allow access to the change form if the user is an organiser.\"\"\"\n if obj is not None and not services.is_organiser(request.member, obj):\n return False\n return super().has_change_permission(request, obj)\n\n def event_date(self, obj):\n event_date = timezone.make_naive(obj.start)\n return _date(event_date, \"l d b Y, G:i\")\n\n event_date.short_description = _(\"Event Date\")\n event_date.admin_order_field = \"start\"\n\n def registration_date(self, obj):\n if obj.registration_start is not None:\n start_date = timezone.make_naive(obj.registration_start)\n else:\n start_date = obj.registration_start\n\n return _date(start_date, \"l d b Y, G:i\")\n\n registration_date.short_description = _(\"Registration Start\")\n registration_date.admin_order_field = \"registration_start\"\n\n def edit_link(self, obj):\n return _(\"Edit\")\n\n edit_link.short_description = \"\"\n\n def num_participants(self, obj):\n \"\"\"Pretty-print the number of participants.\"\"\"\n num = obj.participant_count # from annotation\n if not obj.max_participants:\n return f\"{num}/\u221e\"\n return f\"{num}/{obj.max_participants}\"\n\n num_participants.short_description = _(\"Number of participants\")\n\n def make_published(self, request, queryset):\n \"\"\"Change the status of the event to published.\"\"\"\n self._change_published(request, queryset, True)\n\n make_published.short_description = _(\"Publish selected events\")\n\n def make_unpublished(self, request, queryset):\n \"\"\"Change the status of the event to unpublished.\"\"\"\n self._change_published(request, queryset, False)\n\n make_unpublished.short_description = _(\"Unpublish selected events\")\n\n @staticmethod\n def _change_published(request, queryset, published):\n if not request.user.is_superuser:\n queryset = queryset.filter(\n organisers__in=request.member.get_member_groups()\n )\n queryset.update(published=published)\n\n def save_formset(self, request, form, formset, change):\n \"\"\"Save formsets with their order.\"\"\"\n formset.save()\n\n informationfield_forms = (\n x\n for x in formset.forms\n if isinstance(x, RegistrationInformationFieldForm)\n and \"DELETE\" not in x.changed_data\n )\n form.instance.set_registrationinformationfield_order(\n [\n f.instance.pk\n for f in sorted(\n informationfield_forms,\n key=lambda x: (x.cleaned_data[\"order\"], x.instance.pk),\n )\n ]\n )\n form.instance.save()\n\n def get_actions(self, request):\n actions = super().get_actions(request)\n if \"delete_selected\" in actions:\n del actions[\"delete_selected\"]\n 
return actions\n\n def get_formsets_with_inlines(self, request, obj=None):\n for inline in self.get_inline_instances(request, obj):\n if self.has_change_permission(request, obj) or obj is None:\n yield inline.get_formset(request, obj), inline\n\n def get_urls(self):\n urls = super().get_urls()\n custom_urls = [\n path(\n \"<int:pk>/details/\",\n self.admin_site.admin_view(EventAdminDetails.as_view()),\n name=\"events_event_details\",\n ),\n path(\n \"<int:pk>/export/\",\n self.admin_site.admin_view(EventRegistrationsExport.as_view()),\n name=\"events_event_export\",\n ),\n path(\n \"<int:pk>/message/\",\n self.admin_site.admin_view(EventMessage.as_view(admin=self)),\n name=\"events_event_message\",\n ),\n path(\n \"<int:pk>/mark-present-qr/\",\n self.admin_site.admin_view(EventMarkPresentQR.as_view()),\n name=\"events_event_mark_present_qr\",\n ),\n ]\n return custom_urls + urls\n\n def get_field_queryset(self, db, db_field, request):\n \"\"\"Members without the can view as organiser permission can only assign their own groups as organiser.\"\"\"\n pk = resolve(request.path_info).kwargs[\"object_id\"]\n if db_field.name == \"organisers\" and not request.user.has_perm(\n \"events.override_organiser\"\n ):\n return request.member.get_member_groups().union(\n MemberGroup.objects.filter(event_organiser__pk=pk)\n )\n return super().get_field_queryset(db, db_field, request)\n", "path": "website/events/admin/event.py"}]} | 2,922 | 265 |
gh_patches_debug_38416 | rasdani/github-patches | git_diff | vyperlang__vyper-1042 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Convert function takes string instead of type for conversion to
### What's your issue about?
If you forget to add quotes to the type you want to convert to, Vyper will give an unhelpful error:
```python
convert(some_int128, uint256)
# raises:
# AttributeError: 'Name' object has no attribute 's'
```
### How can it be fixed?
So, catching this error might be enough to fix the immediate problem, but I think the underlying issue is that the second argument (`convertTo`) is a string rather than a typename. That also makes conversions a little less intuitive to write, since you can misspell the name and get no visual feedback from your IDE (assuming you have syntax highlighting set up).
I would suggest turning this into a VIP to modify the syntax of `convert` so that a valid typename is supplied as the second argument.
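Concretely, this is the kind of call I would expect to be able to write; the second line is what the compiler requires today (`some_int128` is just a placeholder variable):

```python
foo: uint256 = convert(some_int128, uint256)    # proposed: pass the typename itself
bar: uint256 = convert(some_int128, 'uint256')  # today: the target type is a string literal
```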
#### Cute Animal Picture

</issue>
<code>
[start of vyper/types/convert.py]
1 from vyper.functions.signature import (
2 signature
3 )
4 from vyper.parser.parser_utils import (
5 LLLnode,
6 getpos,
7 byte_array_to_num
8 )
9 from vyper.exceptions import (
10 InvalidLiteralException,
11 TypeMismatchException,
12 )
13 from vyper.types import (
14 BaseType,
15 )
16 from vyper.types import (
17 get_type,
18 )
19 from vyper.utils import (
20 DECIMAL_DIVISOR,
21 MemoryPositions,
22 SizeLimits
23 )
24
25
26 @signature(('uint256', 'bytes32', 'bytes'), 'str_literal')
27 def to_int128(expr, args, kwargs, context):
28 in_node = args[0]
29 typ, len = get_type(in_node)
30 if typ in ('uint256', 'bytes32'):
31 if in_node.typ.is_literal and not SizeLimits.in_bounds('int128', in_node.value):
32 raise InvalidLiteralException("Number out of range: {}".format(in_node.value), expr)
33 return LLLnode.from_list(
34 ['clamp', ['mload', MemoryPositions.MINNUM], in_node,
35 ['mload', MemoryPositions.MAXNUM]], typ=BaseType('int128', in_node.typ.unit), pos=getpos(expr)
36 )
37 else:
38 return byte_array_to_num(in_node, expr, 'int128')
39
40
41 @signature(('num_literal', 'int128', 'bytes32', 'address'), 'str_literal')
42 def to_uint256(expr, args, kwargs, context):
43 in_node = args[0]
44 input_type, len = get_type(in_node)
45
46 if isinstance(in_node, int):
47 if not SizeLimits.in_bounds('uint256', in_node):
48 raise InvalidLiteralException("Number out of range: {}".format(in_node))
49 _unit = in_node.typ.unit if input_type == 'int128' else None
50 return LLLnode.from_list(in_node, typ=BaseType('uint256', _unit), pos=getpos(expr))
51
52 elif isinstance(in_node, LLLnode) and input_type in ('int128', 'num_literal'):
53 _unit = in_node.typ.unit if input_type == 'int128' else None
54 return LLLnode.from_list(['clampge', in_node, 0], typ=BaseType('uint256', _unit), pos=getpos(expr))
55
56 elif isinstance(in_node, LLLnode) and input_type in ('bytes32', 'address'):
57 return LLLnode(value=in_node.value, args=in_node.args, typ=BaseType('uint256'), pos=getpos(expr))
58
59 else:
60 raise InvalidLiteralException("Invalid input for uint256: %r" % in_node, expr)
61
62
63 @signature(('int128', 'uint256'), 'str_literal')
64 def to_decimal(expr, args, kwargs, context):
65 input = args[0]
66 if input.typ.typ == 'uint256':
67 return LLLnode.from_list(
68 ['uclample', ['mul', input, DECIMAL_DIVISOR], ['mload', MemoryPositions.MAXDECIMAL]],
69 typ=BaseType('decimal', input.typ.unit, input.typ.positional), pos=getpos(expr)
70 )
71 else:
72 return LLLnode.from_list(
73 ['mul', input, DECIMAL_DIVISOR],
74 typ=BaseType('decimal', input.typ.unit, input.typ.positional),
75 pos=getpos(expr)
76 )
77
78
79 @signature(('int128', 'uint256', 'address', 'bytes'), 'str_literal')
80 def to_bytes32(expr, args, kwargs, context):
81 input = args[0]
82 typ, len = get_type(input)
83 if typ == 'bytes':
84 if len != 32:
85 raise TypeMismatchException("Unable to convert bytes[{}] to bytes32".format(len))
86 if input.location == "memory":
87 return LLLnode.from_list(
88 ['mload', ['add', input, 32]], typ=BaseType('bytes32')
89 )
90 elif input.location == "storage":
91 return LLLnode.from_list(
92 ['sload', ['add', ['sha3_32', input], 1]], typ=BaseType('bytes32')
93 )
94 else:
95 return LLLnode(value=input.value, args=input.args, typ=BaseType('bytes32'), pos=getpos(expr))
96
97
98 def convert(expr, context):
99 output_type = expr.args[1].s
100 if output_type in conversion_table:
101 return conversion_table[output_type](expr, context)
102 else:
103 raise Exception("Conversion to {} is invalid.".format(output_type))
104
105
106 conversion_table = {
107 'int128': to_int128,
108 'uint256': to_uint256,
109 'decimal': to_decimal,
110 'bytes32': to_bytes32,
111 }
112
[end of vyper/types/convert.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vyper/types/convert.py b/vyper/types/convert.py
--- a/vyper/types/convert.py
+++ b/vyper/types/convert.py
@@ -1,3 +1,6 @@
+import ast
+import warnings
+
from vyper.functions.signature import (
signature
)
@@ -9,6 +12,7 @@
from vyper.exceptions import (
InvalidLiteralException,
TypeMismatchException,
+ ParserException,
)
from vyper.types import (
BaseType,
@@ -23,7 +27,7 @@
)
-@signature(('uint256', 'bytes32', 'bytes'), 'str_literal')
+@signature(('uint256', 'bytes32', 'bytes'), '*')
def to_int128(expr, args, kwargs, context):
in_node = args[0]
typ, len = get_type(in_node)
@@ -38,7 +42,7 @@
return byte_array_to_num(in_node, expr, 'int128')
-@signature(('num_literal', 'int128', 'bytes32', 'address'), 'str_literal')
+@signature(('num_literal', 'int128', 'bytes32', 'address'), '*')
def to_uint256(expr, args, kwargs, context):
in_node = args[0]
input_type, len = get_type(in_node)
@@ -60,7 +64,7 @@
raise InvalidLiteralException("Invalid input for uint256: %r" % in_node, expr)
-@signature(('int128', 'uint256'), 'str_literal')
+@signature(('int128', 'uint256'), '*')
def to_decimal(expr, args, kwargs, context):
input = args[0]
if input.typ.typ == 'uint256':
@@ -76,7 +80,7 @@
)
-@signature(('int128', 'uint256', 'address', 'bytes'), 'str_literal')
+@signature(('int128', 'uint256', 'address', 'bytes'), '*')
def to_bytes32(expr, args, kwargs, context):
input = args[0]
typ, len = get_type(input)
@@ -96,11 +100,23 @@
def convert(expr, context):
- output_type = expr.args[1].s
+
+ if isinstance(expr.args[1], ast.Str):
+ warnings.warn(
+ "String parameter has been removed, see VIP1026). "
+ "Use a vyper type instead.",
+ DeprecationWarning
+ )
+
+ if isinstance(expr.args[1], ast.Name):
+ output_type = expr.args[1].id
+ else:
+ raise ParserException("Invalid conversion type, use valid vyper type.", expr)
+
if output_type in conversion_table:
return conversion_table[output_type](expr, context)
else:
- raise Exception("Conversion to {} is invalid.".format(output_type))
+ raise ParserException("Conversion to {} is invalid.".format(output_type), expr)
conversion_table = {
| {"golden_diff": "diff --git a/vyper/types/convert.py b/vyper/types/convert.py\n--- a/vyper/types/convert.py\n+++ b/vyper/types/convert.py\n@@ -1,3 +1,6 @@\n+import ast\n+import warnings\n+\n from vyper.functions.signature import (\n signature\n )\n@@ -9,6 +12,7 @@\n from vyper.exceptions import (\n InvalidLiteralException,\n TypeMismatchException,\n+ ParserException,\n )\n from vyper.types import (\n BaseType,\n@@ -23,7 +27,7 @@\n )\n \n \n-@signature(('uint256', 'bytes32', 'bytes'), 'str_literal')\n+@signature(('uint256', 'bytes32', 'bytes'), '*')\n def to_int128(expr, args, kwargs, context):\n in_node = args[0]\n typ, len = get_type(in_node)\n@@ -38,7 +42,7 @@\n return byte_array_to_num(in_node, expr, 'int128')\n \n \n-@signature(('num_literal', 'int128', 'bytes32', 'address'), 'str_literal')\n+@signature(('num_literal', 'int128', 'bytes32', 'address'), '*')\n def to_uint256(expr, args, kwargs, context):\n in_node = args[0]\n input_type, len = get_type(in_node)\n@@ -60,7 +64,7 @@\n raise InvalidLiteralException(\"Invalid input for uint256: %r\" % in_node, expr)\n \n \n-@signature(('int128', 'uint256'), 'str_literal')\n+@signature(('int128', 'uint256'), '*')\n def to_decimal(expr, args, kwargs, context):\n input = args[0]\n if input.typ.typ == 'uint256':\n@@ -76,7 +80,7 @@\n )\n \n \n-@signature(('int128', 'uint256', 'address', 'bytes'), 'str_literal')\n+@signature(('int128', 'uint256', 'address', 'bytes'), '*')\n def to_bytes32(expr, args, kwargs, context):\n input = args[0]\n typ, len = get_type(input)\n@@ -96,11 +100,23 @@\n \n \n def convert(expr, context):\n- output_type = expr.args[1].s\n+\n+ if isinstance(expr.args[1], ast.Str):\n+ warnings.warn(\n+ \"String parameter has been removed, see VIP1026). \"\n+ \"Use a vyper type instead.\",\n+ DeprecationWarning\n+ )\n+\n+ if isinstance(expr.args[1], ast.Name):\n+ output_type = expr.args[1].id\n+ else:\n+ raise ParserException(\"Invalid conversion type, use valid vyper type.\", expr)\n+\n if output_type in conversion_table:\n return conversion_table[output_type](expr, context)\n else:\n- raise Exception(\"Conversion to {} is invalid.\".format(output_type))\n+ raise ParserException(\"Conversion to {} is invalid.\".format(output_type), expr)\n \n \n conversion_table = {\n", "issue": "Convert function takes string instead of type for conversion to\n### What's your issue about?\r\nIf you forget to add quotes to the type you want to convert to, Vyper will give an unhelpful error:\r\n```python\r\nconvert(some_int128, uint256)\r\n# raises:\r\n# AttributeError: 'Name' object has no attribute 's'\r\n```\r\n\r\n### How can it be fixed?\r\nSo, catching this error might work out to fix this issue, but I think the underlying issue is that the second argument (`convertTo`) is a string instead of a typename. 
This actually makes it a little more unintuitive to write conversions as you could misspell the name and not get visual feedback from your IDE (assuming you have syntax highlighting up)\r\n\r\nI would suggest turning this into a VIP to modify the syntax of convert such that a valid typename is supplied as the second argument\r\n\r\n#### Cute Animal Picture\r\n\r\n\n", "before_files": [{"content": "from vyper.functions.signature import (\n signature\n)\nfrom vyper.parser.parser_utils import (\n LLLnode,\n getpos,\n byte_array_to_num\n)\nfrom vyper.exceptions import (\n InvalidLiteralException,\n TypeMismatchException,\n)\nfrom vyper.types import (\n BaseType,\n)\nfrom vyper.types import (\n get_type,\n)\nfrom vyper.utils import (\n DECIMAL_DIVISOR,\n MemoryPositions,\n SizeLimits\n)\n\n\n@signature(('uint256', 'bytes32', 'bytes'), 'str_literal')\ndef to_int128(expr, args, kwargs, context):\n in_node = args[0]\n typ, len = get_type(in_node)\n if typ in ('uint256', 'bytes32'):\n if in_node.typ.is_literal and not SizeLimits.in_bounds('int128', in_node.value):\n raise InvalidLiteralException(\"Number out of range: {}\".format(in_node.value), expr)\n return LLLnode.from_list(\n ['clamp', ['mload', MemoryPositions.MINNUM], in_node,\n ['mload', MemoryPositions.MAXNUM]], typ=BaseType('int128', in_node.typ.unit), pos=getpos(expr)\n )\n else:\n return byte_array_to_num(in_node, expr, 'int128')\n\n\n@signature(('num_literal', 'int128', 'bytes32', 'address'), 'str_literal')\ndef to_uint256(expr, args, kwargs, context):\n in_node = args[0]\n input_type, len = get_type(in_node)\n\n if isinstance(in_node, int):\n if not SizeLimits.in_bounds('uint256', in_node):\n raise InvalidLiteralException(\"Number out of range: {}\".format(in_node))\n _unit = in_node.typ.unit if input_type == 'int128' else None\n return LLLnode.from_list(in_node, typ=BaseType('uint256', _unit), pos=getpos(expr))\n\n elif isinstance(in_node, LLLnode) and input_type in ('int128', 'num_literal'):\n _unit = in_node.typ.unit if input_type == 'int128' else None\n return LLLnode.from_list(['clampge', in_node, 0], typ=BaseType('uint256', _unit), pos=getpos(expr))\n\n elif isinstance(in_node, LLLnode) and input_type in ('bytes32', 'address'):\n return LLLnode(value=in_node.value, args=in_node.args, typ=BaseType('uint256'), pos=getpos(expr))\n\n else:\n raise InvalidLiteralException(\"Invalid input for uint256: %r\" % in_node, expr)\n\n\n@signature(('int128', 'uint256'), 'str_literal')\ndef to_decimal(expr, args, kwargs, context):\n input = args[0]\n if input.typ.typ == 'uint256':\n return LLLnode.from_list(\n ['uclample', ['mul', input, DECIMAL_DIVISOR], ['mload', MemoryPositions.MAXDECIMAL]],\n typ=BaseType('decimal', input.typ.unit, input.typ.positional), pos=getpos(expr)\n )\n else:\n return LLLnode.from_list(\n ['mul', input, DECIMAL_DIVISOR],\n typ=BaseType('decimal', input.typ.unit, input.typ.positional),\n pos=getpos(expr)\n )\n\n\n@signature(('int128', 'uint256', 'address', 'bytes'), 'str_literal')\ndef to_bytes32(expr, args, kwargs, context):\n input = args[0]\n typ, len = get_type(input)\n if typ == 'bytes':\n if len != 32:\n raise TypeMismatchException(\"Unable to convert bytes[{}] to bytes32\".format(len))\n if input.location == \"memory\":\n return LLLnode.from_list(\n ['mload', ['add', input, 32]], typ=BaseType('bytes32')\n )\n elif input.location == \"storage\":\n return LLLnode.from_list(\n ['sload', ['add', ['sha3_32', input], 1]], typ=BaseType('bytes32')\n )\n else:\n return LLLnode(value=input.value, args=input.args, 
typ=BaseType('bytes32'), pos=getpos(expr))\n\n\ndef convert(expr, context):\n output_type = expr.args[1].s\n if output_type in conversion_table:\n return conversion_table[output_type](expr, context)\n else:\n raise Exception(\"Conversion to {} is invalid.\".format(output_type))\n\n\nconversion_table = {\n 'int128': to_int128,\n 'uint256': to_uint256,\n 'decimal': to_decimal,\n 'bytes32': to_bytes32,\n}\n", "path": "vyper/types/convert.py"}]} | 2,064 | 697 |
gh_patches_debug_9246 | rasdani/github-patches | git_diff | pymedusa__Medusa-7541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[APP SUBMITTED]: qBittorrent: Unable to set priority for Torrent
### INFO
**Python Version**: `3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)]`
**Operating System**: `Windows-10-10.0.18362-SP0`
**Locale**: `cp1252`
**Branch**: [master](../tree/master)
**Database**: `44.14`
**Commit**: pymedusa/Medusa@d0c136d7a528a471b51676140bd35d24d97f65c6
**Link to Log**: **No Log available**
### ERROR
<pre>
2019-12-29 19:19:49 ERROR SNATCHQUEUE-SNATCH-161511 :: [d0c136d] qBittorrent: Unable to set priority for Torrent
</pre>
---
_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators
</issue>
<code>
[start of medusa/clients/torrent/qbittorrent.py]
1 # coding=utf-8
2
3 """qBittorrent Client."""
4
5 from __future__ import unicode_literals
6
7 import logging
8
9 from medusa import app
10 from medusa.clients.torrent.generic import GenericClient
11 from medusa.logger.adapters.style import BraceAdapter
12
13 from requests.auth import HTTPDigestAuth
14 from requests.compat import urljoin
15
16 log = BraceAdapter(logging.getLogger(__name__))
17 log.logger.addHandler(logging.NullHandler())
18
19
20 class APIUnavailableError(Exception):
21 """Raised when the API version is not available."""
22
23
24 class QBittorrentAPI(GenericClient):
25 """qBittorrent API class."""
26
27 def __init__(self, host=None, username=None, password=None):
28 """Constructor.
29
30 :param host:
31 :type host: string
32 :param username:
33 :type username: string
34 :param password:
35 :type password: string
36 """
37 super(QBittorrentAPI, self).__init__('qBittorrent', host, username, password)
38 self.url = self.host
39 # Auth for API v1.0.0 (qBittorrent v3.1.x and older)
40 self.session.auth = HTTPDigestAuth(self.username, self.password)
41
42 self.api = None
43
44 def _get_auth(self):
45 """Authenticate with the client using the most recent API version available for use."""
46 try:
47 auth = self._get_auth_v2()
48 version = 2
49 except APIUnavailableError:
50 auth = self._get_auth_legacy()
51 version = 1
52
53 if not auth:
54 # Authentication failed
55 return auth
56
57 # Get API version
58 if version == 2:
59 self.url = urljoin(self.host, 'api/v2/app/webapiVersion')
60 try:
61 response = self.session.get(self.url, verify=app.TORRENT_VERIFY_CERT)
62 if not response.text:
63 raise ValueError('Response from client is empty. [Status: {0}]'.format(response.status_code))
64 # Make sure version is using the (major, minor, release) format
65 version = tuple(map(int, response.text.split('.')))
66 # Fill up with zeros to get the correct format. e.g: (2, 3) => (2, 3, 0)
67 self.api = version + (0,) * (3 - len(version))
68 except (AttributeError, ValueError) as error:
69 log.error('{name}: Unable to get API version. Error: {error!r}',
70 {'name': self.name, 'error': error})
71 elif version == 1:
72 try:
73 self.url = urljoin(self.host, 'version/api')
74 version = int(self.session.get(self.url, verify=app.TORRENT_VERIFY_CERT).text)
75 # Convert old API versioning to new versioning (major, minor, release)
76 self.api = (1, version % 100, 0)
77 except Exception:
78 self.api = (1, 0, 0)
79
80 return auth
81
82 def _get_auth_v2(self):
83 """Authenticate using API v2."""
84 self.url = urljoin(self.host, 'api/v2/auth/login')
85 data = {
86 'username': self.username,
87 'password': self.password,
88 }
89 try:
90 self.response = self.session.post(self.url, data=data, verify=app.TORRENT_VERIFY_CERT)
91 except Exception as error:
92 log.warning('{name}: Exception while trying to authenticate: {error}',
93 {'name': self.name, 'error': error}, exc_info=1)
94 return None
95
96 if self.response.status_code == 200:
97 if self.response.text == 'Fails.':
98 log.warning('{name}: Invalid Username or Password, check your config',
99 {'name': self.name})
100 return None
101
102 # Successful log in
103 self.session.cookies = self.response.cookies
104 self.auth = self.response.text
105
106 return self.auth
107
108 if self.response.status_code == 404:
109 # API v2 is not available
110 raise APIUnavailableError()
111
112 if self.response.status_code == 403:
113 log.warning('{name}: Your IP address has been banned after too many failed authentication attempts.'
114 ' Restart {name} to unban.',
115 {'name': self.name})
116 return None
117
118 def _get_auth_legacy(self):
119 """Authenticate using legacy API."""
120 self.url = urljoin(self.host, 'login')
121 data = {
122 'username': self.username,
123 'password': self.password,
124 }
125 try:
126 self.response = self.session.post(self.url, data=data, verify=app.TORRENT_VERIFY_CERT)
127 except Exception as error:
128 log.warning('{name}: Exception while trying to authenticate: {error}',
129 {'name': self.name, 'error': error}, exc_info=1)
130 return None
131
132 # API v1.0.0 (qBittorrent v3.1.x and older)
133 if self.response.status_code == 404:
134 try:
135 self.response = self.session.get(self.host, verify=app.TORRENT_VERIFY_CERT)
136 except Exception as error:
137 log.warning('{name}: Exception while trying to authenticate: {error}',
138 {'name': self.name, 'error': error}, exc_info=1)
139 return None
140
141 self.session.cookies = self.response.cookies
142 self.auth = (self.response.status_code != 404) or None
143
144 return self.auth
145
146 def _add_torrent_uri(self, result):
147
148 command = 'api/v2/torrents/add' if self.api >= (2, 0, 0) else 'command/download'
149 self.url = urljoin(self.host, command)
150 data = {
151 'urls': result.url,
152 }
153 return self._request(method='post', data=data, cookies=self.session.cookies)
154
155 def _add_torrent_file(self, result):
156
157 command = 'api/v2/torrents/add' if self.api >= (2, 0, 0) else 'command/upload'
158 self.url = urljoin(self.host, command)
159 files = {
160 'torrents': result.content
161 }
162 return self._request(method='post', files=files, cookies=self.session.cookies)
163
164 def _set_torrent_label(self, result):
165
166 label = app.TORRENT_LABEL_ANIME if result.series.is_anime else app.TORRENT_LABEL
167 if not label:
168 return True
169
170 api = self.api
171 if api >= (2, 0, 0):
172 self.url = urljoin(self.host, 'api/v2/torrents/setCategory')
173 label_key = 'category'
174 elif api > (1, 6, 0):
175 label_key = 'Category' if api >= (1, 10, 0) else 'Label'
176 self.url = urljoin(self.host, 'command/set' + label_key)
177
178 data = {
179 'hashes': result.hash.lower(),
180 label_key.lower(): label.replace(' ', '_'),
181 }
182 ok = self._request(method='post', data=data, cookies=self.session.cookies)
183
184 if self.response.status_code == 409:
185 log.warning('{name}: Unable to set torrent label. You need to create the label '
186 ' in {name} first.', {'name': self.name})
187 ok = True
188
189 return ok
190
191 def _set_torrent_priority(self, result):
192
193 command = 'api/v2/torrents' if self.api >= (2, 0, 0) else 'command'
194 method = 'increase' if result.priority == 1 else 'decrease'
195 self.url = urljoin(self.host, '{command}/{method}Prio'.format(command=command, method=method))
196 data = {
197 'hashes': result.hash.lower(),
198 }
199 ok = self._request(method='post', data=data, cookies=self.session.cookies)
200
201 if self.response.status_code == 403:
202 log.info('{name}: Unable to set torrent priority because torrent queueing'
203 ' is disabled in {name} settings.', {'name': self.name})
204 ok = True
205
206 return ok
207
208 def _set_torrent_pause(self, result):
209 api = self.api
210 state = 'pause' if app.TORRENT_PAUSED else 'resume'
211 command = 'api/v2/torrents' if api >= (2, 0, 0) else 'command'
212 hashes_key = 'hashes' if self.api >= (1, 18, 0) else 'hash'
213 self.url = urljoin(self.host, '{command}/{state}'.format(command=command, state=state))
214 data = {
215 hashes_key: result.hash.lower(),
216 }
217 return self._request(method='post', data=data, cookies=self.session.cookies)
218
219 def remove_torrent(self, info_hash):
220 """Remove torrent from client using given info_hash.
221
222 :param info_hash:
223 :type info_hash: string
224 :return
225 :rtype: bool
226 """
227 data = {
228 'hashes': info_hash.lower(),
229 }
230 if self.api >= (2, 0, 0):
231 self.url = urljoin(self.host, 'api/v2/torrents/delete')
232 data['deleteFiles'] = True
233 else:
234 self.url = urljoin(self.host, 'command/deletePerm')
235
236 return self._request(method='post', data=data, cookies=self.session.cookies)
237
238
239 api = QBittorrentAPI
240
[end of medusa/clients/torrent/qbittorrent.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/medusa/clients/torrent/qbittorrent.py b/medusa/clients/torrent/qbittorrent.py
--- a/medusa/clients/torrent/qbittorrent.py
+++ b/medusa/clients/torrent/qbittorrent.py
@@ -198,7 +198,7 @@
}
ok = self._request(method='post', data=data, cookies=self.session.cookies)
- if self.response.status_code == 403:
+ if self.response.status_code == (409 if self.api >= (2, 0, 0) else 403):
log.info('{name}: Unable to set torrent priority because torrent queueing'
' is disabled in {name} settings.', {'name': self.name})
ok = True
| {"golden_diff": "diff --git a/medusa/clients/torrent/qbittorrent.py b/medusa/clients/torrent/qbittorrent.py\n--- a/medusa/clients/torrent/qbittorrent.py\n+++ b/medusa/clients/torrent/qbittorrent.py\n@@ -198,7 +198,7 @@\n }\n ok = self._request(method='post', data=data, cookies=self.session.cookies)\n \n- if self.response.status_code == 403:\n+ if self.response.status_code == (409 if self.api >= (2, 0, 0) else 403):\n log.info('{name}: Unable to set torrent priority because torrent queueing'\n ' is disabled in {name} settings.', {'name': self.name})\n ok = True\n", "issue": "[APP SUBMITTED]: qBittorrent: Unable to set priority for Torrent\n\n### INFO\n**Python Version**: `3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)]`\n**Operating System**: `Windows-10-10.0.18362-SP0`\n**Locale**: `cp1252`\n**Branch**: [master](../tree/master)\n**Database**: `44.14`\n**Commit**: pymedusa/Medusa@d0c136d7a528a471b51676140bd35d24d97f65c6\n**Link to Log**: **No Log available**\n### ERROR\n<pre>\n2019-12-29 19:19:49 ERROR SNATCHQUEUE-SNATCH-161511 :: [d0c136d] qBittorrent: Unable to set priority for Torrent\n</pre>\n---\n_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"qBittorrent Client.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\n\nfrom medusa import app\nfrom medusa.clients.torrent.generic import GenericClient\nfrom medusa.logger.adapters.style import BraceAdapter\n\nfrom requests.auth import HTTPDigestAuth\nfrom requests.compat import urljoin\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass APIUnavailableError(Exception):\n \"\"\"Raised when the API version is not available.\"\"\"\n\n\nclass QBittorrentAPI(GenericClient):\n \"\"\"qBittorrent API class.\"\"\"\n\n def __init__(self, host=None, username=None, password=None):\n \"\"\"Constructor.\n\n :param host:\n :type host: string\n :param username:\n :type username: string\n :param password:\n :type password: string\n \"\"\"\n super(QBittorrentAPI, self).__init__('qBittorrent', host, username, password)\n self.url = self.host\n # Auth for API v1.0.0 (qBittorrent v3.1.x and older)\n self.session.auth = HTTPDigestAuth(self.username, self.password)\n\n self.api = None\n\n def _get_auth(self):\n \"\"\"Authenticate with the client using the most recent API version available for use.\"\"\"\n try:\n auth = self._get_auth_v2()\n version = 2\n except APIUnavailableError:\n auth = self._get_auth_legacy()\n version = 1\n\n if not auth:\n # Authentication failed\n return auth\n\n # Get API version\n if version == 2:\n self.url = urljoin(self.host, 'api/v2/app/webapiVersion')\n try:\n response = self.session.get(self.url, verify=app.TORRENT_VERIFY_CERT)\n if not response.text:\n raise ValueError('Response from client is empty. [Status: {0}]'.format(response.status_code))\n # Make sure version is using the (major, minor, release) format\n version = tuple(map(int, response.text.split('.')))\n # Fill up with zeros to get the correct format. e.g: (2, 3) => (2, 3, 0)\n self.api = version + (0,) * (3 - len(version))\n except (AttributeError, ValueError) as error:\n log.error('{name}: Unable to get API version. 
Error: {error!r}',\n {'name': self.name, 'error': error})\n elif version == 1:\n try:\n self.url = urljoin(self.host, 'version/api')\n version = int(self.session.get(self.url, verify=app.TORRENT_VERIFY_CERT).text)\n # Convert old API versioning to new versioning (major, minor, release)\n self.api = (1, version % 100, 0)\n except Exception:\n self.api = (1, 0, 0)\n\n return auth\n\n def _get_auth_v2(self):\n \"\"\"Authenticate using API v2.\"\"\"\n self.url = urljoin(self.host, 'api/v2/auth/login')\n data = {\n 'username': self.username,\n 'password': self.password,\n }\n try:\n self.response = self.session.post(self.url, data=data, verify=app.TORRENT_VERIFY_CERT)\n except Exception as error:\n log.warning('{name}: Exception while trying to authenticate: {error}',\n {'name': self.name, 'error': error}, exc_info=1)\n return None\n\n if self.response.status_code == 200:\n if self.response.text == 'Fails.':\n log.warning('{name}: Invalid Username or Password, check your config',\n {'name': self.name})\n return None\n\n # Successful log in\n self.session.cookies = self.response.cookies\n self.auth = self.response.text\n\n return self.auth\n\n if self.response.status_code == 404:\n # API v2 is not available\n raise APIUnavailableError()\n\n if self.response.status_code == 403:\n log.warning('{name}: Your IP address has been banned after too many failed authentication attempts.'\n ' Restart {name} to unban.',\n {'name': self.name})\n return None\n\n def _get_auth_legacy(self):\n \"\"\"Authenticate using legacy API.\"\"\"\n self.url = urljoin(self.host, 'login')\n data = {\n 'username': self.username,\n 'password': self.password,\n }\n try:\n self.response = self.session.post(self.url, data=data, verify=app.TORRENT_VERIFY_CERT)\n except Exception as error:\n log.warning('{name}: Exception while trying to authenticate: {error}',\n {'name': self.name, 'error': error}, exc_info=1)\n return None\n\n # API v1.0.0 (qBittorrent v3.1.x and older)\n if self.response.status_code == 404:\n try:\n self.response = self.session.get(self.host, verify=app.TORRENT_VERIFY_CERT)\n except Exception as error:\n log.warning('{name}: Exception while trying to authenticate: {error}',\n {'name': self.name, 'error': error}, exc_info=1)\n return None\n\n self.session.cookies = self.response.cookies\n self.auth = (self.response.status_code != 404) or None\n\n return self.auth\n\n def _add_torrent_uri(self, result):\n\n command = 'api/v2/torrents/add' if self.api >= (2, 0, 0) else 'command/download'\n self.url = urljoin(self.host, command)\n data = {\n 'urls': result.url,\n }\n return self._request(method='post', data=data, cookies=self.session.cookies)\n\n def _add_torrent_file(self, result):\n\n command = 'api/v2/torrents/add' if self.api >= (2, 0, 0) else 'command/upload'\n self.url = urljoin(self.host, command)\n files = {\n 'torrents': result.content\n }\n return self._request(method='post', files=files, cookies=self.session.cookies)\n\n def _set_torrent_label(self, result):\n\n label = app.TORRENT_LABEL_ANIME if result.series.is_anime else app.TORRENT_LABEL\n if not label:\n return True\n\n api = self.api\n if api >= (2, 0, 0):\n self.url = urljoin(self.host, 'api/v2/torrents/setCategory')\n label_key = 'category'\n elif api > (1, 6, 0):\n label_key = 'Category' if api >= (1, 10, 0) else 'Label'\n self.url = urljoin(self.host, 'command/set' + label_key)\n\n data = {\n 'hashes': result.hash.lower(),\n label_key.lower(): label.replace(' ', '_'),\n }\n ok = self._request(method='post', data=data, 
cookies=self.session.cookies)\n\n if self.response.status_code == 409:\n log.warning('{name}: Unable to set torrent label. You need to create the label '\n ' in {name} first.', {'name': self.name})\n ok = True\n\n return ok\n\n def _set_torrent_priority(self, result):\n\n command = 'api/v2/torrents' if self.api >= (2, 0, 0) else 'command'\n method = 'increase' if result.priority == 1 else 'decrease'\n self.url = urljoin(self.host, '{command}/{method}Prio'.format(command=command, method=method))\n data = {\n 'hashes': result.hash.lower(),\n }\n ok = self._request(method='post', data=data, cookies=self.session.cookies)\n\n if self.response.status_code == 403:\n log.info('{name}: Unable to set torrent priority because torrent queueing'\n ' is disabled in {name} settings.', {'name': self.name})\n ok = True\n\n return ok\n\n def _set_torrent_pause(self, result):\n api = self.api\n state = 'pause' if app.TORRENT_PAUSED else 'resume'\n command = 'api/v2/torrents' if api >= (2, 0, 0) else 'command'\n hashes_key = 'hashes' if self.api >= (1, 18, 0) else 'hash'\n self.url = urljoin(self.host, '{command}/{state}'.format(command=command, state=state))\n data = {\n hashes_key: result.hash.lower(),\n }\n return self._request(method='post', data=data, cookies=self.session.cookies)\n\n def remove_torrent(self, info_hash):\n \"\"\"Remove torrent from client using given info_hash.\n\n :param info_hash:\n :type info_hash: string\n :return\n :rtype: bool\n \"\"\"\n data = {\n 'hashes': info_hash.lower(),\n }\n if self.api >= (2, 0, 0):\n self.url = urljoin(self.host, 'api/v2/torrents/delete')\n data['deleteFiles'] = True\n else:\n self.url = urljoin(self.host, 'command/deletePerm')\n\n return self._request(method='post', data=data, cookies=self.session.cookies)\n\n\napi = QBittorrentAPI\n", "path": "medusa/clients/torrent/qbittorrent.py"}]} | 3,497 | 176 |
gh_patches_debug_42031 | rasdani/github-patches | git_diff | google__jax-2034 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
stax neural network initializers default to 32bit floats even in 64 bit mode
I need to try out my code with 64-bit floats. My code uses neural networks set up with stax, whose initializers default to 32-bit floats (and there is no usable API to change this from user-facing code).
Using the resulting (`float32`) parameters with `float64` data batches in an update `fori_loop` (taking gradients with `value_and_grad` and updating parameters with any optimizer) results in one of two faulty behaviors, depending on the exact computation (a small sketch of the dtype defaulting follows the list below):
1. the gradients come out as `float32`, thus thwarting my intention of computing with `float64` values
2. the gradients come out as `float64`, which results in a type-mismatch exception thrown by `fori_loop` since the initial value argument passed to it was of type `float32`
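Below is a minimal sketch of the dtype defaulting described above, based only on the initializer code included in this report. The per-call `dtype` argument is a workaround at the initializer level and is not exposed by the stax layer constructors; exact behaviour may vary with the JAX version.

```python
import jax.numpy as jnp
from jax import random
from jax.config import config
from jax.nn.initializers import glorot_normal

config.update("jax_enable_x64", True)  # enable 64-bit mode

key = random.PRNGKey(0)
shape = (3, 3)

w_default = glorot_normal()(key, shape)              # dtype silently defaults to float32
w_forced = glorot_normal()(key, shape, jnp.float64)  # float64 only when forced per call

print(w_default.dtype, w_forced.dtype)  # float32 float64
```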
</issue>
<code>
[start of jax/nn/initializers.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Common neural network layer initializers, consistent with definitions
17 used in Keras and Sonnet.
18 """
19
20 from __future__ import absolute_import
21 from __future__ import division
22
23 from functools import partial
24
25 import numpy as onp
26
27 import jax.numpy as np
28 from jax import lax
29 from jax import ops
30 from jax import random
31
32 def zeros(key, shape, dtype=np.float32): return np.zeros(shape, dtype)
33 def ones(key, shape, dtype=np.float32): return np.ones(shape, dtype)
34
35 def uniform(scale=1e-2):
36 def init(key, shape, dtype=np.float32):
37 return random.uniform(key, shape, dtype) * scale
38 return init
39
40 def normal(stddev=1e-2):
41 def init(key, shape, dtype=np.float32):
42 return random.normal(key, shape, dtype) * stddev
43 return init
44
45 def _compute_fans(shape, in_axis=-2, out_axis=-1):
46 receptive_field_size = onp.prod(shape) / shape[in_axis] / shape[out_axis]
47 fan_in = shape[in_axis] * receptive_field_size
48 fan_out = shape[out_axis] * receptive_field_size
49 return fan_in, fan_out
50
51 def variance_scaling(scale, mode, distribution, in_axis=-2, out_axis=-1):
52 def init(key, shape, dtype=np.float32):
53 fan_in, fan_out = _compute_fans(shape, in_axis, out_axis)
54 if mode == "fan_in": denominator = fan_in
55 elif mode == "fan_out": denominator = fan_out
56 elif mode == "fan_avg": denominator = (fan_in + fan_out) / 2
57 else:
58 raise ValueError(
59 "invalid mode for variance scaling initializer: {}".format(mode))
60 variance = np.array(scale / denominator, dtype=dtype)
61 if distribution == "truncated_normal":
62 # constant is stddev of standard normal truncated to (-2, 2)
63 stddev = np.sqrt(variance) / np.array(.87962566103423978, dtype)
64 return random.truncated_normal(key, -2, 2, shape, dtype) * stddev
65 elif distribution == "normal":
66 return random.normal(key, shape, dtype) * np.sqrt(variance)
67 elif distribution == "uniform":
68 return random.uniform(key, shape, dtype, -1) * onp.sqrt(3 * variance)
69 else:
70 raise ValueError("invalid distribution for variance scaling initializer")
71 return init
72
73 xavier_uniform = glorot_uniform = partial(variance_scaling, 1.0, "fan_avg", "uniform")
74 xavier_normal = glorot_normal = partial(variance_scaling, 1.0, "fan_avg", "truncated_normal")
75 lecun_uniform = partial(variance_scaling, 1.0, "fan_in", "uniform")
76 lecun_normal = partial(variance_scaling, 1.0, "fan_in", "truncated_normal")
77 kaiming_uniform = he_uniform = partial(variance_scaling, 2.0, "fan_in", "uniform")
78 kaiming_normal = he_normal = partial(variance_scaling, 2.0, "fan_in", "truncated_normal")
79
80 def orthogonal(scale=1.0, column_axis=-1):
81 """
82 Construct an initializer for uniformly distributed orthogonal matrices.
83
84 If the shape is not square, the matrices will have orthonormal rows or columns
85 depending on which side is smaller.
86 """
87 def init(key, shape, dtype=np.float32):
88 if len(shape) < 2:
89 raise ValueError("orthogonal initializer requires at least a 2D shape")
90 n_rows, n_cols = onp.prod(shape) // shape[column_axis], shape[column_axis]
91 matrix_shape = (n_cols, n_rows) if n_rows < n_cols else (n_rows, n_cols)
92 A = random.normal(key, matrix_shape, dtype)
93 Q, R = np.linalg.qr(A)
94 Q *= np.sign(np.diag(R)) # needed for a uniform distribution
95 if n_rows < n_cols: Q = Q.T
96 Q = np.reshape(Q, tuple(onp.delete(shape, column_axis)) + (shape[column_axis],))
97 Q = np.moveaxis(Q, -1, column_axis)
98 return scale * Q
99 return init
100
101
102 def delta_orthogonal(scale=1.0, column_axis=-1):
103 """
104 Construct an initializer for delta orthogonal kernels; see arXiv:1806.05393.
105
106 The shape must be 3D, 4D or 5D.
107 """
108 def init(key, shape, dtype=np.float32):
109 if len(shape) not in [3, 4, 5]:
110 raise ValueError("Delta orthogonal initializer requires a 3D, 4D or 5D "
111 "shape.")
112 if shape[-1] < shape[-2]:
113 raise ValueError("`fan_in` must be less or equal than `fan_out`. ")
114 ortho_init = orthogonal(scale=scale, column_axis=column_axis)
115 ortho_matrix = ortho_init(key, shape[-2:])
116 W = np.zeros(shape)
117 if len(shape) == 3:
118 k = shape[0]
119 return ops.index_update(W, ops.index[(k-1)//2, ...], ortho_matrix)
120 elif len(shape) == 4:
121 k1, k2 = shape[:2]
122 return ops.index_update(W, ops.index[(k1-1)//2, (k2-1)//2, ...], ortho_matrix)
123 else:
124 k1, k2, k3 = shape[:3]
125 return ops.index_update(W, ops.index[(k1-1)//2, (k2-1)//2, (k3-1)//2, ...],
126 ortho_matrix)
127 return init
128
[end of jax/nn/initializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jax/nn/initializers.py b/jax/nn/initializers.py
--- a/jax/nn/initializers.py
+++ b/jax/nn/initializers.py
@@ -32,13 +32,13 @@
def zeros(key, shape, dtype=np.float32): return np.zeros(shape, dtype)
def ones(key, shape, dtype=np.float32): return np.ones(shape, dtype)
-def uniform(scale=1e-2):
- def init(key, shape, dtype=np.float32):
+def uniform(scale=1e-2, dtype=np.float32):
+ def init(key, shape, dtype=dtype):
return random.uniform(key, shape, dtype) * scale
return init
-def normal(stddev=1e-2):
- def init(key, shape, dtype=np.float32):
+def normal(stddev=1e-2, dtype=np.float32):
+ def init(key, shape, dtype=dtype):
return random.normal(key, shape, dtype) * stddev
return init
@@ -48,8 +48,8 @@
fan_out = shape[out_axis] * receptive_field_size
return fan_in, fan_out
-def variance_scaling(scale, mode, distribution, in_axis=-2, out_axis=-1):
- def init(key, shape, dtype=np.float32):
+def variance_scaling(scale, mode, distribution, in_axis=-2, out_axis=-1, dtype=np.float32):
+ def init(key, shape, dtype=dtype):
fan_in, fan_out = _compute_fans(shape, in_axis, out_axis)
if mode == "fan_in": denominator = fan_in
elif mode == "fan_out": denominator = fan_out
@@ -77,14 +77,14 @@
kaiming_uniform = he_uniform = partial(variance_scaling, 2.0, "fan_in", "uniform")
kaiming_normal = he_normal = partial(variance_scaling, 2.0, "fan_in", "truncated_normal")
-def orthogonal(scale=1.0, column_axis=-1):
+def orthogonal(scale=1.0, column_axis=-1, dtype=np.float32):
"""
Construct an initializer for uniformly distributed orthogonal matrices.
If the shape is not square, the matrices will have orthonormal rows or columns
depending on which side is smaller.
"""
- def init(key, shape, dtype=np.float32):
+ def init(key, shape, dtype=dtype):
if len(shape) < 2:
raise ValueError("orthogonal initializer requires at least a 2D shape")
n_rows, n_cols = onp.prod(shape) // shape[column_axis], shape[column_axis]
@@ -99,21 +99,21 @@
return init
-def delta_orthogonal(scale=1.0, column_axis=-1):
+def delta_orthogonal(scale=1.0, column_axis=-1, dtype=np.float32):
"""
Construct an initializer for delta orthogonal kernels; see arXiv:1806.05393.
The shape must be 3D, 4D or 5D.
"""
- def init(key, shape, dtype=np.float32):
+ def init(key, shape, dtype=dtype):
if len(shape) not in [3, 4, 5]:
raise ValueError("Delta orthogonal initializer requires a 3D, 4D or 5D "
"shape.")
if shape[-1] < shape[-2]:
raise ValueError("`fan_in` must be less or equal than `fan_out`. ")
- ortho_init = orthogonal(scale=scale, column_axis=column_axis)
+ ortho_init = orthogonal(scale=scale, column_axis=column_axis, dtype=dtype)
ortho_matrix = ortho_init(key, shape[-2:])
- W = np.zeros(shape)
+ W = np.zeros(shape, dtype=dtype)
if len(shape) == 3:
k = shape[0]
return ops.index_update(W, ops.index[(k-1)//2, ...], ortho_matrix)
| {"golden_diff": "diff --git a/jax/nn/initializers.py b/jax/nn/initializers.py\n--- a/jax/nn/initializers.py\n+++ b/jax/nn/initializers.py\n@@ -32,13 +32,13 @@\n def zeros(key, shape, dtype=np.float32): return np.zeros(shape, dtype)\n def ones(key, shape, dtype=np.float32): return np.ones(shape, dtype)\n \n-def uniform(scale=1e-2):\n- def init(key, shape, dtype=np.float32):\n+def uniform(scale=1e-2, dtype=np.float32):\n+ def init(key, shape, dtype=dtype):\n return random.uniform(key, shape, dtype) * scale\n return init\n \n-def normal(stddev=1e-2):\n- def init(key, shape, dtype=np.float32):\n+def normal(stddev=1e-2, dtype=np.float32):\n+ def init(key, shape, dtype=dtype):\n return random.normal(key, shape, dtype) * stddev\n return init\n \n@@ -48,8 +48,8 @@\n fan_out = shape[out_axis] * receptive_field_size\n return fan_in, fan_out\n \n-def variance_scaling(scale, mode, distribution, in_axis=-2, out_axis=-1):\n- def init(key, shape, dtype=np.float32):\n+def variance_scaling(scale, mode, distribution, in_axis=-2, out_axis=-1, dtype=np.float32):\n+ def init(key, shape, dtype=dtype):\n fan_in, fan_out = _compute_fans(shape, in_axis, out_axis)\n if mode == \"fan_in\": denominator = fan_in\n elif mode == \"fan_out\": denominator = fan_out\n@@ -77,14 +77,14 @@\n kaiming_uniform = he_uniform = partial(variance_scaling, 2.0, \"fan_in\", \"uniform\")\n kaiming_normal = he_normal = partial(variance_scaling, 2.0, \"fan_in\", \"truncated_normal\")\n \n-def orthogonal(scale=1.0, column_axis=-1):\n+def orthogonal(scale=1.0, column_axis=-1, dtype=np.float32):\n \"\"\"\n Construct an initializer for uniformly distributed orthogonal matrices.\n \n If the shape is not square, the matrices will have orthonormal rows or columns\n depending on which side is smaller.\n \"\"\"\n- def init(key, shape, dtype=np.float32):\n+ def init(key, shape, dtype=dtype):\n if len(shape) < 2:\n raise ValueError(\"orthogonal initializer requires at least a 2D shape\")\n n_rows, n_cols = onp.prod(shape) // shape[column_axis], shape[column_axis]\n@@ -99,21 +99,21 @@\n return init\n \n \n-def delta_orthogonal(scale=1.0, column_axis=-1):\n+def delta_orthogonal(scale=1.0, column_axis=-1, dtype=np.float32):\n \"\"\"\n Construct an initializer for delta orthogonal kernels; see arXiv:1806.05393. \n \n The shape must be 3D, 4D or 5D.\n \"\"\"\n- def init(key, shape, dtype=np.float32):\n+ def init(key, shape, dtype=dtype):\n if len(shape) not in [3, 4, 5]:\n raise ValueError(\"Delta orthogonal initializer requires a 3D, 4D or 5D \"\n \"shape.\")\n if shape[-1] < shape[-2]:\n raise ValueError(\"`fan_in` must be less or equal than `fan_out`. \")\n- ortho_init = orthogonal(scale=scale, column_axis=column_axis)\n+ ortho_init = orthogonal(scale=scale, column_axis=column_axis, dtype=dtype)\n ortho_matrix = ortho_init(key, shape[-2:])\n- W = np.zeros(shape)\n+ W = np.zeros(shape, dtype=dtype)\n if len(shape) == 3:\n k = shape[0]\n return ops.index_update(W, ops.index[(k-1)//2, ...], ortho_matrix)\n", "issue": "stax neural network initializers default to 32bit floats even in 64 bit mode\nI need/want to try out my code with 64 bit wide floats. 
My code uses neural networks setup with stax, the initializers for which default to 32 bit floats (and there is no useable API to change this in user facing code).\r\n\r\nUsing the thus initialized (`float32`) parameters (and `float64` data batches) in an update `fori_loop` (taking the gradients with `value_and_grad` and updating parameters with any optimizer) results in either one of two faulty behaviors (depending on the exact nature of computation):\r\n\r\n1. the gradients come out as `float32`, thus thwarting my intention of computing with `float64` values\r\n2. the gradients come out as `float64`, which results in a type-mismatch exception thrown by `fori_loop` since the initial value argument passed to it was of type `float32`\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nCommon neural network layer initializers, consistent with definitions\nused in Keras and Sonnet.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\n\nfrom functools import partial\n\nimport numpy as onp\n\nimport jax.numpy as np\nfrom jax import lax\nfrom jax import ops\nfrom jax import random\n\ndef zeros(key, shape, dtype=np.float32): return np.zeros(shape, dtype)\ndef ones(key, shape, dtype=np.float32): return np.ones(shape, dtype)\n\ndef uniform(scale=1e-2):\n def init(key, shape, dtype=np.float32):\n return random.uniform(key, shape, dtype) * scale\n return init\n\ndef normal(stddev=1e-2):\n def init(key, shape, dtype=np.float32):\n return random.normal(key, shape, dtype) * stddev\n return init\n\ndef _compute_fans(shape, in_axis=-2, out_axis=-1):\n receptive_field_size = onp.prod(shape) / shape[in_axis] / shape[out_axis]\n fan_in = shape[in_axis] * receptive_field_size\n fan_out = shape[out_axis] * receptive_field_size\n return fan_in, fan_out\n\ndef variance_scaling(scale, mode, distribution, in_axis=-2, out_axis=-1):\n def init(key, shape, dtype=np.float32):\n fan_in, fan_out = _compute_fans(shape, in_axis, out_axis)\n if mode == \"fan_in\": denominator = fan_in\n elif mode == \"fan_out\": denominator = fan_out\n elif mode == \"fan_avg\": denominator = (fan_in + fan_out) / 2\n else:\n raise ValueError(\n \"invalid mode for variance scaling initializer: {}\".format(mode))\n variance = np.array(scale / denominator, dtype=dtype)\n if distribution == \"truncated_normal\":\n # constant is stddev of standard normal truncated to (-2, 2)\n stddev = np.sqrt(variance) / np.array(.87962566103423978, dtype)\n return random.truncated_normal(key, -2, 2, shape, dtype) * stddev\n elif distribution == \"normal\":\n return random.normal(key, shape, dtype) * np.sqrt(variance)\n elif distribution == \"uniform\":\n return random.uniform(key, shape, dtype, -1) * onp.sqrt(3 * variance)\n else:\n raise ValueError(\"invalid distribution for variance scaling initializer\")\n return init\n\nxavier_uniform = glorot_uniform = partial(variance_scaling, 1.0, \"fan_avg\", \"uniform\")\nxavier_normal = glorot_normal = 
partial(variance_scaling, 1.0, \"fan_avg\", \"truncated_normal\")\nlecun_uniform = partial(variance_scaling, 1.0, \"fan_in\", \"uniform\")\nlecun_normal = partial(variance_scaling, 1.0, \"fan_in\", \"truncated_normal\")\nkaiming_uniform = he_uniform = partial(variance_scaling, 2.0, \"fan_in\", \"uniform\")\nkaiming_normal = he_normal = partial(variance_scaling, 2.0, \"fan_in\", \"truncated_normal\")\n\ndef orthogonal(scale=1.0, column_axis=-1):\n \"\"\"\n Construct an initializer for uniformly distributed orthogonal matrices.\n \n If the shape is not square, the matrices will have orthonormal rows or columns\n depending on which side is smaller.\n \"\"\"\n def init(key, shape, dtype=np.float32):\n if len(shape) < 2:\n raise ValueError(\"orthogonal initializer requires at least a 2D shape\")\n n_rows, n_cols = onp.prod(shape) // shape[column_axis], shape[column_axis]\n matrix_shape = (n_cols, n_rows) if n_rows < n_cols else (n_rows, n_cols)\n A = random.normal(key, matrix_shape, dtype)\n Q, R = np.linalg.qr(A)\n Q *= np.sign(np.diag(R)) # needed for a uniform distribution\n if n_rows < n_cols: Q = Q.T\n Q = np.reshape(Q, tuple(onp.delete(shape, column_axis)) + (shape[column_axis],))\n Q = np.moveaxis(Q, -1, column_axis)\n return scale * Q\n return init\n\n\ndef delta_orthogonal(scale=1.0, column_axis=-1):\n \"\"\"\n Construct an initializer for delta orthogonal kernels; see arXiv:1806.05393. \n\n The shape must be 3D, 4D or 5D.\n \"\"\"\n def init(key, shape, dtype=np.float32):\n if len(shape) not in [3, 4, 5]:\n raise ValueError(\"Delta orthogonal initializer requires a 3D, 4D or 5D \"\n \"shape.\")\n if shape[-1] < shape[-2]:\n raise ValueError(\"`fan_in` must be less or equal than `fan_out`. \")\n ortho_init = orthogonal(scale=scale, column_axis=column_axis)\n ortho_matrix = ortho_init(key, shape[-2:])\n W = np.zeros(shape)\n if len(shape) == 3:\n k = shape[0]\n return ops.index_update(W, ops.index[(k-1)//2, ...], ortho_matrix)\n elif len(shape) == 4:\n k1, k2 = shape[:2]\n return ops.index_update(W, ops.index[(k1-1)//2, (k2-1)//2, ...], ortho_matrix)\n else:\n k1, k2, k3 = shape[:3]\n return ops.index_update(W, ops.index[(k1-1)//2, (k2-1)//2, (k3-1)//2, ...],\n ortho_matrix)\n return init\n", "path": "jax/nn/initializers.py"}]} | 2,433 | 909 |
gh_patches_debug_43118 | rasdani/github-patches | git_diff | tournesol-app__tournesol-1854 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[back, front] refactor: Harmonize all API and Serializers (then:feat: Rate later list display information about the number of comparisons)
As a user, I want to see how many times I have compared the videos when I look at my rate-later list, so that I can easily choose which videos to remove from it.
Display the information in the same way as on the card used on the comparison page.
This will probably be best implemented by updating the backend endpoint to include the extra information.
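As a purely illustrative sketch (the serializer and field names here are assumptions, not the actual change), the rate-later endpoint could expose the contributor's comparison count the same way `IndividualRatingSerializer` below does:

```python
from rest_framework import serializers


class RateLaterIndividualRatingSerializer(serializers.Serializer):
    # Mirrors IndividualRatingSerializer: number of comparisons made by the
    # current user for this entity, defaulting to 0 when none exist.
    n_comparisons = serializers.IntegerField(read_only=True, default=0)


class RateLaterItemSerializer(serializers.Serializer):
    # entity = RelatedEntitySerializer(source="*", read_only=True)  # as elsewhere
    individual_rating = RateLaterIndividualRatingSerializer(
        source="contributor_rating", read_only=True, allow_null=True
    )
```

The frontend could then read `individual_rating.n_comparisons` to render the same count badge as on the comparison page.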
</issue>
<code>
[start of backend/tournesol/serializers/contributor_recommendations.py]
1 from drf_spectacular.utils import extend_schema_field, extend_schema_serializer
2 from rest_framework.serializers import SerializerMethodField
3
4 from tournesol.models.ratings import ContributorRating
5 from tournesol.serializers.criteria_score import ContributorCriteriaScoreSerializer
6 from tournesol.serializers.poll import IndividualRatingSerializer, RecommendationSerializer
7
8
9 class IndividualRatingWithScoresSerializer(IndividualRatingSerializer):
10 criteria_scores = ContributorCriteriaScoreSerializer(many=True, read_only=True)
11
12 class Meta:
13 model = ContributorRating
14 fields = IndividualRatingSerializer.Meta.fields + ["criteria_scores"]
15 read_only_fields = fields
16
17
18 @extend_schema_serializer(
19 exclude_fields=[
20 # legacy fields have been moved to "entity", "invidual_rating", "collective_rating", etc.
21 "uid",
22 "type",
23 "n_comparisons",
24 "n_contributors",
25 "metadata",
26 "total_score",
27 "tournesol_score",
28 "criteria_scores",
29 "unsafe",
30 "is_public",
31 ]
32 )
33 class ContributorRecommendationsSerializer(RecommendationSerializer):
34 """
35 An entity recommended by a user.
36 """
37
38 is_public = SerializerMethodField()
39 criteria_scores = SerializerMethodField()
40 individual_rating = IndividualRatingWithScoresSerializer(
41 source="single_contributor_rating",
42 read_only=True,
43 )
44
45 class Meta(RecommendationSerializer.Meta):
46 fields = RecommendationSerializer.Meta.fields + ["is_public", "individual_rating"]
47
48 @extend_schema_field(ContributorCriteriaScoreSerializer(many=True))
49 def get_criteria_scores(self, obj):
50 return ContributorCriteriaScoreSerializer(
51 obj.single_contributor_rating.criteria_scores, many=True
52 ).data
53
54 def get_is_public(self, obj) -> bool:
55 return obj.single_contributor_rating.is_public
56
[end of backend/tournesol/serializers/contributor_recommendations.py]
[start of backend/tournesol/serializers/poll.py]
1 from drf_spectacular.utils import extend_schema_serializer
2 from rest_framework import serializers
3 from rest_framework.serializers import IntegerField, ModelSerializer
4
5 from tournesol.models import ContributorRating, CriteriaRank, Entity, EntityPollRating, Poll
6 from tournesol.models.entity_poll_rating import UNSAFE_REASONS
7 from tournesol.serializers.entity import EntityCriteriaScoreSerializer, RelatedEntitySerializer
8 from tournesol.serializers.entity_context import EntityContextSerializer
9
10
11 class PollCriteriaSerializer(ModelSerializer):
12 name = serializers.CharField(source="criteria.name")
13 label = serializers.CharField(source="criteria.get_label")
14
15 class Meta:
16 model = CriteriaRank
17 fields = ["name", "label", "optional"]
18
19
20 class PollSerializer(ModelSerializer):
21 criterias = PollCriteriaSerializer(source="criteriarank_set", many=True)
22
23 class Meta:
24 model = Poll
25 fields = ["name", "criterias", "entity_type", "active"]
26
27
28 class UnsafeStatusSerializer(ModelSerializer):
29 status = serializers.BooleanField(source="is_recommendation_unsafe")
30 reasons = serializers.ListField(
31 child=serializers.ChoiceField(choices=UNSAFE_REASONS),
32 source="unsafe_recommendation_reasons",
33 )
34
35 class Meta:
36 model = EntityPollRating
37 fields = [
38 "status",
39 "reasons",
40 ]
41
42
43 class CollectiveRatingSerializer(ModelSerializer):
44 unsafe = UnsafeStatusSerializer(source="*", read_only=True)
45
46 class Meta:
47 model = EntityPollRating
48 fields = [
49 "n_comparisons",
50 "n_contributors",
51 "tournesol_score",
52 "unsafe",
53 ]
54 read_only_fields = fields
55
56
57 class ExtendedCollectiveRatingSerializer(CollectiveRatingSerializer):
58 criteria_scores = EntityCriteriaScoreSerializer(source="entity.criteria_scores", many=True)
59
60 class Meta:
61 model = CollectiveRatingSerializer.Meta.model
62 fields = CollectiveRatingSerializer.Meta.fields + ["criteria_scores"]
63 read_only_fields = fields
64
65
66 class IndividualRatingSerializer(ModelSerializer):
67 n_comparisons = IntegerField(read_only=True, default=0)
68
69 class Meta:
70 model = ContributorRating
71 fields = [
72 "is_public",
73 "n_comparisons",
74 ]
75 read_only_fields = fields
76
77
78 class RecommendationMetadataSerializer(serializers.Serializer):
79 total_score = serializers.FloatField(read_only=True, allow_null=True)
80
81
82 @extend_schema_serializer(
83 exclude_fields=[
84 # legacy fields have been moved to "entity", "collective_rating", etc.
85 "uid",
86 "type",
87 "n_comparisons",
88 "n_contributors",
89 "metadata",
90 "total_score",
91 "tournesol_score",
92 "criteria_scores",
93 "unsafe",
94 ]
95 )
96 class RecommendationSerializer(ModelSerializer):
97 # pylint: disable=duplicate-code
98 n_comparisons = serializers.IntegerField(source="rating_n_ratings")
99 n_contributors = serializers.IntegerField(source="rating_n_contributors")
100 criteria_scores = EntityCriteriaScoreSerializer(many=True)
101 # TODO: the field total_score is the only field in this serializer that
102 # on the parameters of an api request. Should it be treated differently?
103 total_score = serializers.FloatField()
104 unsafe = UnsafeStatusSerializer(
105 source="single_poll_rating", allow_null=True, default=None, read_only=True
106 )
107
108 entity = RelatedEntitySerializer(source="*", read_only=True)
109 collective_rating = ExtendedCollectiveRatingSerializer(
110 source="single_poll_rating",
111 read_only=True,
112 allow_null=True,
113 )
114 entity_contexts = EntityContextSerializer(
115 source="single_poll_entity_contexts",
116 read_only=True,
117 many=True
118 )
119 recommendation_metadata = RecommendationMetadataSerializer(source="*", read_only=True)
120
121 class Meta:
122 model = Entity
123 fields = [
124 "uid",
125 "type",
126 "n_comparisons",
127 "n_contributors",
128 "metadata",
129 "total_score",
130 "tournesol_score",
131 "criteria_scores",
132 "unsafe",
133 "entity",
134 "collective_rating",
135 "entity_contexts",
136 "recommendation_metadata",
137 ]
138 read_only_fields = fields
139
140
141 class RecommendationsFilterSerializer(serializers.Serializer):
142 date_lte = serializers.DateTimeField(default=None)
143 date_gte = serializers.DateTimeField(default=None)
144 search = serializers.CharField(default=None, help_text="A search query to filter entities")
145 unsafe = serializers.BooleanField(
146 default=False,
147 help_text="If true, entities considered as unsafe recommendations because of a"
148 " low score or due to too few contributions will be included.",
149 )
150 exclude_compared_entities = serializers.BooleanField(
151 default=False,
152 help_text="If true and a user is authenticated, then entities compared by the"
153 " user will be removed from the response",
154 )
155
[end of backend/tournesol/serializers/poll.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/tournesol/serializers/contributor_recommendations.py b/backend/tournesol/serializers/contributor_recommendations.py
--- a/backend/tournesol/serializers/contributor_recommendations.py
+++ b/backend/tournesol/serializers/contributor_recommendations.py
@@ -1,6 +1,3 @@
-from drf_spectacular.utils import extend_schema_field, extend_schema_serializer
-from rest_framework.serializers import SerializerMethodField
-
from tournesol.models.ratings import ContributorRating
from tournesol.serializers.criteria_score import ContributorCriteriaScoreSerializer
from tournesol.serializers.poll import IndividualRatingSerializer, RecommendationSerializer
@@ -15,41 +12,14 @@
read_only_fields = fields
-@extend_schema_serializer(
- exclude_fields=[
- # legacy fields have been moved to "entity", "invidual_rating", "collective_rating", etc.
- "uid",
- "type",
- "n_comparisons",
- "n_contributors",
- "metadata",
- "total_score",
- "tournesol_score",
- "criteria_scores",
- "unsafe",
- "is_public",
- ]
-)
class ContributorRecommendationsSerializer(RecommendationSerializer):
"""
An entity recommended by a user.
"""
-
- is_public = SerializerMethodField()
- criteria_scores = SerializerMethodField()
individual_rating = IndividualRatingWithScoresSerializer(
source="single_contributor_rating",
read_only=True,
)
class Meta(RecommendationSerializer.Meta):
- fields = RecommendationSerializer.Meta.fields + ["is_public", "individual_rating"]
-
- @extend_schema_field(ContributorCriteriaScoreSerializer(many=True))
- def get_criteria_scores(self, obj):
- return ContributorCriteriaScoreSerializer(
- obj.single_contributor_rating.criteria_scores, many=True
- ).data
-
- def get_is_public(self, obj) -> bool:
- return obj.single_contributor_rating.is_public
+ fields = RecommendationSerializer.Meta.fields + ["individual_rating"]
diff --git a/backend/tournesol/serializers/poll.py b/backend/tournesol/serializers/poll.py
--- a/backend/tournesol/serializers/poll.py
+++ b/backend/tournesol/serializers/poll.py
@@ -1,4 +1,3 @@
-from drf_spectacular.utils import extend_schema_serializer
from rest_framework import serializers
from rest_framework.serializers import IntegerField, ModelSerializer
@@ -79,32 +78,7 @@
total_score = serializers.FloatField(read_only=True, allow_null=True)
-@extend_schema_serializer(
- exclude_fields=[
- # legacy fields have been moved to "entity", "collective_rating", etc.
- "uid",
- "type",
- "n_comparisons",
- "n_contributors",
- "metadata",
- "total_score",
- "tournesol_score",
- "criteria_scores",
- "unsafe",
- ]
-)
class RecommendationSerializer(ModelSerializer):
- # pylint: disable=duplicate-code
- n_comparisons = serializers.IntegerField(source="rating_n_ratings")
- n_contributors = serializers.IntegerField(source="rating_n_contributors")
- criteria_scores = EntityCriteriaScoreSerializer(many=True)
- # TODO: the field total_score is the only field in this serializer that
- # on the parameters of an api request. Should it be treated differently?
- total_score = serializers.FloatField()
- unsafe = UnsafeStatusSerializer(
- source="single_poll_rating", allow_null=True, default=None, read_only=True
- )
-
entity = RelatedEntitySerializer(source="*", read_only=True)
collective_rating = ExtendedCollectiveRatingSerializer(
source="single_poll_rating",
@@ -121,15 +95,6 @@
class Meta:
model = Entity
fields = [
- "uid",
- "type",
- "n_comparisons",
- "n_contributors",
- "metadata",
- "total_score",
- "tournesol_score",
- "criteria_scores",
- "unsafe",
"entity",
"collective_rating",
"entity_contexts",
| {"golden_diff": "diff --git a/backend/tournesol/serializers/contributor_recommendations.py b/backend/tournesol/serializers/contributor_recommendations.py\n--- a/backend/tournesol/serializers/contributor_recommendations.py\n+++ b/backend/tournesol/serializers/contributor_recommendations.py\n@@ -1,6 +1,3 @@\n-from drf_spectacular.utils import extend_schema_field, extend_schema_serializer\n-from rest_framework.serializers import SerializerMethodField\n-\n from tournesol.models.ratings import ContributorRating\n from tournesol.serializers.criteria_score import ContributorCriteriaScoreSerializer\n from tournesol.serializers.poll import IndividualRatingSerializer, RecommendationSerializer\n@@ -15,41 +12,14 @@\n read_only_fields = fields\n \n \n-@extend_schema_serializer(\n- exclude_fields=[\n- # legacy fields have been moved to \"entity\", \"invidual_rating\", \"collective_rating\", etc.\n- \"uid\",\n- \"type\",\n- \"n_comparisons\",\n- \"n_contributors\",\n- \"metadata\",\n- \"total_score\",\n- \"tournesol_score\",\n- \"criteria_scores\",\n- \"unsafe\",\n- \"is_public\",\n- ]\n-)\n class ContributorRecommendationsSerializer(RecommendationSerializer):\n \"\"\"\n An entity recommended by a user.\n \"\"\"\n-\n- is_public = SerializerMethodField()\n- criteria_scores = SerializerMethodField()\n individual_rating = IndividualRatingWithScoresSerializer(\n source=\"single_contributor_rating\",\n read_only=True,\n )\n \n class Meta(RecommendationSerializer.Meta):\n- fields = RecommendationSerializer.Meta.fields + [\"is_public\", \"individual_rating\"]\n-\n- @extend_schema_field(ContributorCriteriaScoreSerializer(many=True))\n- def get_criteria_scores(self, obj):\n- return ContributorCriteriaScoreSerializer(\n- obj.single_contributor_rating.criteria_scores, many=True\n- ).data\n-\n- def get_is_public(self, obj) -> bool:\n- return obj.single_contributor_rating.is_public\n+ fields = RecommendationSerializer.Meta.fields + [\"individual_rating\"]\ndiff --git a/backend/tournesol/serializers/poll.py b/backend/tournesol/serializers/poll.py\n--- a/backend/tournesol/serializers/poll.py\n+++ b/backend/tournesol/serializers/poll.py\n@@ -1,4 +1,3 @@\n-from drf_spectacular.utils import extend_schema_serializer\n from rest_framework import serializers\n from rest_framework.serializers import IntegerField, ModelSerializer\n \n@@ -79,32 +78,7 @@\n total_score = serializers.FloatField(read_only=True, allow_null=True)\n \n \n-@extend_schema_serializer(\n- exclude_fields=[\n- # legacy fields have been moved to \"entity\", \"collective_rating\", etc.\n- \"uid\",\n- \"type\",\n- \"n_comparisons\",\n- \"n_contributors\",\n- \"metadata\",\n- \"total_score\",\n- \"tournesol_score\",\n- \"criteria_scores\",\n- \"unsafe\",\n- ]\n-)\n class RecommendationSerializer(ModelSerializer):\n- # pylint: disable=duplicate-code\n- n_comparisons = serializers.IntegerField(source=\"rating_n_ratings\")\n- n_contributors = serializers.IntegerField(source=\"rating_n_contributors\")\n- criteria_scores = EntityCriteriaScoreSerializer(many=True)\n- # TODO: the field total_score is the only field in this serializer that\n- # on the parameters of an api request. 
Should it be treated differently?\n- total_score = serializers.FloatField()\n- unsafe = UnsafeStatusSerializer(\n- source=\"single_poll_rating\", allow_null=True, default=None, read_only=True\n- )\n-\n entity = RelatedEntitySerializer(source=\"*\", read_only=True)\n collective_rating = ExtendedCollectiveRatingSerializer(\n source=\"single_poll_rating\",\n@@ -121,15 +95,6 @@\n class Meta:\n model = Entity\n fields = [\n- \"uid\",\n- \"type\",\n- \"n_comparisons\",\n- \"n_contributors\",\n- \"metadata\",\n- \"total_score\",\n- \"tournesol_score\",\n- \"criteria_scores\",\n- \"unsafe\",\n \"entity\",\n \"collective_rating\",\n \"entity_contexts\",\n", "issue": "[back, front] refactor: Harmonize all API and Serializers (then:feat: Rate later list display information about the number of comparisons)\nAs a user I want to see how many times I have compared the videos when I look at my rate later list so that I can easily chose which videos to remove from my rate later list.\r\n\r\nDisplay the information similarly as in the card used on the comparison page.\r\n\r\nThis probably will be best implemented by updating the backend endpoint to include the extra info\n", "before_files": [{"content": "from drf_spectacular.utils import extend_schema_field, extend_schema_serializer\nfrom rest_framework.serializers import SerializerMethodField\n\nfrom tournesol.models.ratings import ContributorRating\nfrom tournesol.serializers.criteria_score import ContributorCriteriaScoreSerializer\nfrom tournesol.serializers.poll import IndividualRatingSerializer, RecommendationSerializer\n\n\nclass IndividualRatingWithScoresSerializer(IndividualRatingSerializer):\n criteria_scores = ContributorCriteriaScoreSerializer(many=True, read_only=True)\n\n class Meta:\n model = ContributorRating\n fields = IndividualRatingSerializer.Meta.fields + [\"criteria_scores\"]\n read_only_fields = fields\n\n\n@extend_schema_serializer(\n exclude_fields=[\n # legacy fields have been moved to \"entity\", \"invidual_rating\", \"collective_rating\", etc.\n \"uid\",\n \"type\",\n \"n_comparisons\",\n \"n_contributors\",\n \"metadata\",\n \"total_score\",\n \"tournesol_score\",\n \"criteria_scores\",\n \"unsafe\",\n \"is_public\",\n ]\n)\nclass ContributorRecommendationsSerializer(RecommendationSerializer):\n \"\"\"\n An entity recommended by a user.\n \"\"\"\n\n is_public = SerializerMethodField()\n criteria_scores = SerializerMethodField()\n individual_rating = IndividualRatingWithScoresSerializer(\n source=\"single_contributor_rating\",\n read_only=True,\n )\n\n class Meta(RecommendationSerializer.Meta):\n fields = RecommendationSerializer.Meta.fields + [\"is_public\", \"individual_rating\"]\n\n @extend_schema_field(ContributorCriteriaScoreSerializer(many=True))\n def get_criteria_scores(self, obj):\n return ContributorCriteriaScoreSerializer(\n obj.single_contributor_rating.criteria_scores, many=True\n ).data\n\n def get_is_public(self, obj) -> bool:\n return obj.single_contributor_rating.is_public\n", "path": "backend/tournesol/serializers/contributor_recommendations.py"}, {"content": "from drf_spectacular.utils import extend_schema_serializer\nfrom rest_framework import serializers\nfrom rest_framework.serializers import IntegerField, ModelSerializer\n\nfrom tournesol.models import ContributorRating, CriteriaRank, Entity, EntityPollRating, Poll\nfrom tournesol.models.entity_poll_rating import UNSAFE_REASONS\nfrom tournesol.serializers.entity import EntityCriteriaScoreSerializer, RelatedEntitySerializer\nfrom 
tournesol.serializers.entity_context import EntityContextSerializer\n\n\nclass PollCriteriaSerializer(ModelSerializer):\n name = serializers.CharField(source=\"criteria.name\")\n label = serializers.CharField(source=\"criteria.get_label\")\n\n class Meta:\n model = CriteriaRank\n fields = [\"name\", \"label\", \"optional\"]\n\n\nclass PollSerializer(ModelSerializer):\n criterias = PollCriteriaSerializer(source=\"criteriarank_set\", many=True)\n\n class Meta:\n model = Poll\n fields = [\"name\", \"criterias\", \"entity_type\", \"active\"]\n\n\nclass UnsafeStatusSerializer(ModelSerializer):\n status = serializers.BooleanField(source=\"is_recommendation_unsafe\")\n reasons = serializers.ListField(\n child=serializers.ChoiceField(choices=UNSAFE_REASONS),\n source=\"unsafe_recommendation_reasons\",\n )\n\n class Meta:\n model = EntityPollRating\n fields = [\n \"status\",\n \"reasons\",\n ]\n\n\nclass CollectiveRatingSerializer(ModelSerializer):\n unsafe = UnsafeStatusSerializer(source=\"*\", read_only=True)\n\n class Meta:\n model = EntityPollRating\n fields = [\n \"n_comparisons\",\n \"n_contributors\",\n \"tournesol_score\",\n \"unsafe\",\n ]\n read_only_fields = fields\n\n\nclass ExtendedCollectiveRatingSerializer(CollectiveRatingSerializer):\n criteria_scores = EntityCriteriaScoreSerializer(source=\"entity.criteria_scores\", many=True)\n\n class Meta:\n model = CollectiveRatingSerializer.Meta.model\n fields = CollectiveRatingSerializer.Meta.fields + [\"criteria_scores\"]\n read_only_fields = fields\n\n\nclass IndividualRatingSerializer(ModelSerializer):\n n_comparisons = IntegerField(read_only=True, default=0)\n\n class Meta:\n model = ContributorRating\n fields = [\n \"is_public\",\n \"n_comparisons\",\n ]\n read_only_fields = fields\n\n\nclass RecommendationMetadataSerializer(serializers.Serializer):\n total_score = serializers.FloatField(read_only=True, allow_null=True)\n\n\n@extend_schema_serializer(\n exclude_fields=[\n # legacy fields have been moved to \"entity\", \"collective_rating\", etc.\n \"uid\",\n \"type\",\n \"n_comparisons\",\n \"n_contributors\",\n \"metadata\",\n \"total_score\",\n \"tournesol_score\",\n \"criteria_scores\",\n \"unsafe\",\n ]\n)\nclass RecommendationSerializer(ModelSerializer):\n # pylint: disable=duplicate-code\n n_comparisons = serializers.IntegerField(source=\"rating_n_ratings\")\n n_contributors = serializers.IntegerField(source=\"rating_n_contributors\")\n criteria_scores = EntityCriteriaScoreSerializer(many=True)\n # TODO: the field total_score is the only field in this serializer that\n # on the parameters of an api request. 
Should it be treated differently?\n total_score = serializers.FloatField()\n unsafe = UnsafeStatusSerializer(\n source=\"single_poll_rating\", allow_null=True, default=None, read_only=True\n )\n\n entity = RelatedEntitySerializer(source=\"*\", read_only=True)\n collective_rating = ExtendedCollectiveRatingSerializer(\n source=\"single_poll_rating\",\n read_only=True,\n allow_null=True,\n )\n entity_contexts = EntityContextSerializer(\n source=\"single_poll_entity_contexts\",\n read_only=True,\n many=True\n )\n recommendation_metadata = RecommendationMetadataSerializer(source=\"*\", read_only=True)\n\n class Meta:\n model = Entity\n fields = [\n \"uid\",\n \"type\",\n \"n_comparisons\",\n \"n_contributors\",\n \"metadata\",\n \"total_score\",\n \"tournesol_score\",\n \"criteria_scores\",\n \"unsafe\",\n \"entity\",\n \"collective_rating\",\n \"entity_contexts\",\n \"recommendation_metadata\",\n ]\n read_only_fields = fields\n\n\nclass RecommendationsFilterSerializer(serializers.Serializer):\n date_lte = serializers.DateTimeField(default=None)\n date_gte = serializers.DateTimeField(default=None)\n search = serializers.CharField(default=None, help_text=\"A search query to filter entities\")\n unsafe = serializers.BooleanField(\n default=False,\n help_text=\"If true, entities considered as unsafe recommendations because of a\"\n \" low score or due to too few contributions will be included.\",\n )\n exclude_compared_entities = serializers.BooleanField(\n default=False,\n help_text=\"If true and a user is authenticated, then entities compared by the\"\n \" user will be removed from the response\",\n )\n", "path": "backend/tournesol/serializers/poll.py"}]} | 2,545 | 934 |
gh_patches_debug_35861 | rasdani/github-patches | git_diff | streamlink__streamlink-2048 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Euronews error, unable to open URL
<!--
Thanks for reporting a bug!
USE THE TEMPLATE. Otherwise your bug report may be rejected.
First, see the contribution guidelines:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
Also check the list of open and closed bug reports:
https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22
Please see the text preview to avoid unnecessary formatting errors.
-->
## Bug Report
<!-- Replace [ ] with [x] in order to check the box -->
- [x] This is a bug report and I have read the contribution guidelines.
### Description
<!-- Explain the bug as thoroughly as you can. Don't leave out information which is necessary for us to reproduce and debug this issue. -->
I'm unable to open Euronews live stream.
### Expected / Actual behavior
<!-- What do you expect to happen, and what is actually happening? -->
I expect the stream to open in my media player.
Instead I get this:
```
marco@vbox-ubuntu1804:~$ streamlink http://it.euronews.com/live
[cli][info] Found matching plugin euronews for URL http://it.euronews.com/live
error: Unable to open URL: //euronews-it-p-api.hexaglobe.net/1c903a19de71387485a0f6f74d7923f5/5b8a5583/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls (Invalid URL '//euronews-it-p-api.hexaglobe.net/1c903a19de71387485a0f6f74d7923f5/5b8a5583/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls': No schema supplied. Perhaps you meant http:////euronews-it-p-api.hexaglobe.net/1c903a19de71387485a0f6f74d7923f5/5b8a5583/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls?)
```
### Reproduction steps / Explicit stream URLs to test
<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->
Run this command:
```
streamlink http://it.euronews.com/live
```
### Log output
<!--
TEXT LOG OUTPUT IS REQUIRED for a bug report!
Use the `--loglevel debug` parameter and avoid using parameters which suppress log output.
https://streamlink.github.io/cli.html#cmdoption-l
Make sure to **remove usernames and passwords**
You can copy the output to https://gist.github.com/ or paste it below.
-->
```
marco@vbox-ubuntu1804:~$ streamlink --loglevel debug http://it.euronews.com/live
[cli][debug] OS: Linux-4.15.0-33-generic-x86_64-with-Ubuntu-18.04-bionic
[cli][debug] Python: 3.6.5
[cli][debug] Streamlink: 0.14.2+92.gc7bef14b
[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.51.0)
[cli][info] Found matching plugin euronews for URL http://it.euronews.com/live
error: Unable to open URL: //euronews-it-p-api.hexaglobe.net/688afb391d4325cad6765c6dc61585a4/5b8a7b36/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls (Invalid URL '//euronews-it-p-api.hexaglobe.net/688afb391d4325cad6765c6dc61585a4/5b8a7b36/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls': No schema supplied. Perhaps you meant http:////euronews-it-p-api.hexaglobe.net/688afb391d4325cad6765c6dc61585a4/5b8a7b36/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls?)
```
### Additional comments, screenshots, etc.
Streamlink versions tested: 0.9.0, 0.14.2 from pip, master from git
Same error with all of them.
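
For context on the failure mode: the `watchlive.json` endpoint returns a protocol-relative URL (`//euronews-it-p-api.hexaglobe.net/...`), which `requests` rejects because it carries no scheme. The patch further down in this entry resolves this by prepending the scheme of the page URL via streamlink's `update_scheme` helper. A minimal sketch of that normalization, assuming the helper behaves as it is used in that patch (it copies the scheme of its first argument onto the second when the second has none); the path is shortened for brevity:

```python
from streamlink.utils.url import update_scheme

# Scheme-less URL of the kind returned by the watchlive API.
api_url = "//euronews-it-p-api.hexaglobe.net/stream_info.php?format=hls"

# Prepend the scheme the viewer used for the page (http or https).
print(update_scheme("http:///", api_url))
# expected: http://euronews-it-p-api.hexaglobe.net/stream_info.php?format=hls
```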
[Love Streamlink? Please consider supporting our collective. Thanks!](https://opencollective.com/streamlink/donate)
</issue>
<code>
[start of src/streamlink/plugins/euronews.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import validate
5 from streamlink.stream import HLSStream, HTTPStream
6
7
8 class Euronews(Plugin):
9 _url_re = re.compile(r"http(?:s)?://(\w+)\.?euronews.com/(live|.*)")
10 _re_vod = re.compile(r'<meta\s+property="og:video"\s+content="(http.*?)"\s*/>')
11 _live_api_url = "http://{0}.euronews.com/api/watchlive.json"
12 _live_schema = validate.Schema({
13 u"url": validate.url()
14 })
15 _stream_api_schema = validate.Schema({
16 u'status': u'ok',
17 u'primary': validate.url(),
18 validate.optional(u'backup'): validate.url()
19 })
20
21 @classmethod
22 def can_handle_url(cls, url):
23 return cls._url_re.match(url)
24
25 def _get_vod_stream(self):
26 """
27 Find the VOD video url
28 :return: video url
29 """
30 res = self.session.http.get(self.url)
31 video_urls = self._re_vod.findall(res.text)
32 if len(video_urls):
33 return dict(vod=HTTPStream(self.session, video_urls[0]))
34
35 def _get_live_streams(self, subdomain):
36 """
37 Get the live stream in a particular language
38 :param subdomain:
39 :return:
40 """
41 res = self.session.http.get(self._live_api_url.format(subdomain))
42 live_res = self.session.http.json(res, schema=self._live_schema)
43 api_res = self.session.http.get(live_res[u"url"])
44 stream_data = self.session.http.json(api_res, schema=self._stream_api_schema)
45 return HLSStream.parse_variant_playlist(self.session, stream_data[u'primary'])
46
47 def _get_streams(self):
48 """
49 Find the streams for euronews
50 :return:
51 """
52 match = self._url_re.match(self.url)
53 subdomain, path = match.groups()
54
55 if path == "live":
56 return self._get_live_streams(subdomain)
57 else:
58 return self._get_vod_stream()
59
60
61 __plugin__ = Euronews
62
[end of src/streamlink/plugins/euronews.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/euronews.py b/src/streamlink/plugins/euronews.py
--- a/src/streamlink/plugins/euronews.py
+++ b/src/streamlink/plugins/euronews.py
@@ -3,10 +3,11 @@
from streamlink.plugin import Plugin
from streamlink.plugin.api import validate
from streamlink.stream import HLSStream, HTTPStream
+from streamlink.utils.url import update_scheme
class Euronews(Plugin):
- _url_re = re.compile(r"http(?:s)?://(\w+)\.?euronews.com/(live|.*)")
+ _url_re = re.compile(r'(?P<scheme>https?)://(?P<subdomain>\w+)\.?euronews.com/(?P<path>live|.*)')
_re_vod = re.compile(r'<meta\s+property="og:video"\s+content="(http.*?)"\s*/>')
_live_api_url = "http://{0}.euronews.com/api/watchlive.json"
_live_schema = validate.Schema({
@@ -32,28 +33,29 @@
if len(video_urls):
return dict(vod=HTTPStream(self.session, video_urls[0]))
- def _get_live_streams(self, subdomain):
+ def _get_live_streams(self, match):
"""
Get the live stream in a particular language
- :param subdomain:
+ :param match:
:return:
"""
- res = self.session.http.get(self._live_api_url.format(subdomain))
- live_res = self.session.http.json(res, schema=self._live_schema)
- api_res = self.session.http.get(live_res[u"url"])
- stream_data = self.session.http.json(api_res, schema=self._stream_api_schema)
- return HLSStream.parse_variant_playlist(self.session, stream_data[u'primary'])
+ live_url = self._live_api_url.format(match.get("subdomain"))
+ live_res = self.session.http.json(self.session.http.get(live_url), schema=self._live_schema)
+
+ api_url = update_scheme("{0}:///".format(match.get("scheme")), live_res["url"])
+ api_res = self.session.http.json(self.session.http.get(api_url), schema=self._stream_api_schema)
+
+ return HLSStream.parse_variant_playlist(self.session, api_res["primary"])
def _get_streams(self):
"""
Find the streams for euronews
:return:
"""
- match = self._url_re.match(self.url)
- subdomain, path = match.groups()
+ match = self._url_re.match(self.url).groupdict()
- if path == "live":
- return self._get_live_streams(subdomain)
+ if match.get("path") == "live":
+ return self._get_live_streams(match)
else:
return self._get_vod_stream()
| {"golden_diff": "diff --git a/src/streamlink/plugins/euronews.py b/src/streamlink/plugins/euronews.py\n--- a/src/streamlink/plugins/euronews.py\n+++ b/src/streamlink/plugins/euronews.py\n@@ -3,10 +3,11 @@\n from streamlink.plugin import Plugin\n from streamlink.plugin.api import validate\n from streamlink.stream import HLSStream, HTTPStream\n+from streamlink.utils.url import update_scheme\n \n \n class Euronews(Plugin):\n- _url_re = re.compile(r\"http(?:s)?://(\\w+)\\.?euronews.com/(live|.*)\")\n+ _url_re = re.compile(r'(?P<scheme>https?)://(?P<subdomain>\\w+)\\.?euronews.com/(?P<path>live|.*)')\n _re_vod = re.compile(r'<meta\\s+property=\"og:video\"\\s+content=\"(http.*?)\"\\s*/>')\n _live_api_url = \"http://{0}.euronews.com/api/watchlive.json\"\n _live_schema = validate.Schema({\n@@ -32,28 +33,29 @@\n if len(video_urls):\n return dict(vod=HTTPStream(self.session, video_urls[0]))\n \n- def _get_live_streams(self, subdomain):\n+ def _get_live_streams(self, match):\n \"\"\"\n Get the live stream in a particular language\n- :param subdomain:\n+ :param match:\n :return:\n \"\"\"\n- res = self.session.http.get(self._live_api_url.format(subdomain))\n- live_res = self.session.http.json(res, schema=self._live_schema)\n- api_res = self.session.http.get(live_res[u\"url\"])\n- stream_data = self.session.http.json(api_res, schema=self._stream_api_schema)\n- return HLSStream.parse_variant_playlist(self.session, stream_data[u'primary'])\n+ live_url = self._live_api_url.format(match.get(\"subdomain\"))\n+ live_res = self.session.http.json(self.session.http.get(live_url), schema=self._live_schema)\n+\n+ api_url = update_scheme(\"{0}:///\".format(match.get(\"scheme\")), live_res[\"url\"])\n+ api_res = self.session.http.json(self.session.http.get(api_url), schema=self._stream_api_schema)\n+\n+ return HLSStream.parse_variant_playlist(self.session, api_res[\"primary\"])\n \n def _get_streams(self):\n \"\"\"\n Find the streams for euronews\n :return:\n \"\"\"\n- match = self._url_re.match(self.url)\n- subdomain, path = match.groups()\n+ match = self._url_re.match(self.url).groupdict()\n \n- if path == \"live\":\n- return self._get_live_streams(subdomain)\n+ if match.get(\"path\") == \"live\":\n+ return self._get_live_streams(match)\n else:\n return self._get_vod_stream()\n", "issue": "Euronews error, unable to open URL\n<!--\r\nThanks for reporting a bug!\r\nUSE THE TEMPLATE. Otherwise your bug report may be rejected.\r\n\r\nFirst, see the contribution guidelines:\r\nhttps://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink\r\n\r\nAlso check the list of open and closed bug reports:\r\nhttps://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22\r\n\r\nPlease see the text preview to avoid unnecessary formatting errors.\r\n-->\r\n\r\n\r\n## Bug Report\r\n\r\n<!-- Replace [ ] with [x] in order to check the box -->\r\n- [x] This is a bug report and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\n<!-- Explain the bug as thoroughly as you can. Don't leave out information which is necessary for us to reproduce and debug this issue. -->\r\nI'm unable to open Euronews live stream.\r\n\r\n### Expected / Actual behavior\r\n\r\n<!-- What do you expect to happen, and what is actually happening? 
-->\r\nI expect the stream to open in my media player.\r\nInstead I get this:\r\n```\r\nmarco@vbox-ubuntu1804:~$ streamlink http://it.euronews.com/live\r\n[cli][info] Found matching plugin euronews for URL http://it.euronews.com/live\r\nerror: Unable to open URL: //euronews-it-p-api.hexaglobe.net/1c903a19de71387485a0f6f74d7923f5/5b8a5583/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls (Invalid URL '//euronews-it-p-api.hexaglobe.net/1c903a19de71387485a0f6f74d7923f5/5b8a5583/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls': No schema supplied. Perhaps you meant http:////euronews-it-p-api.hexaglobe.net/1c903a19de71387485a0f6f74d7923f5/5b8a5583/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls?)\r\n```\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n\r\n<!-- How can we reproduce this? Please note the exact steps below using the list format supplied. If you need more steps please add them. -->\r\n\r\nRun this command:\r\n```\r\nstreamlink http://it.euronews.com/live\r\n```\r\n\r\n### Log output\r\n\r\n<!--\r\nTEXT LOG OUTPUT IS REQUIRED for a bug report!\r\nUse the `--loglevel debug` parameter and avoid using parameters which suppress log output.\r\nhttps://streamlink.github.io/cli.html#cmdoption-l\r\n\r\nMake sure to **remove usernames and passwords**\r\nYou can copy the output to https://gist.github.com/ or paste it below.\r\n-->\r\n\r\n```\r\nmarco@vbox-ubuntu1804:~$ streamlink --loglevel debug http://it.euronews.com/live\r\n[cli][debug] OS: Linux-4.15.0-33-generic-x86_64-with-Ubuntu-18.04-bionic\r\n[cli][debug] Python: 3.6.5\r\n[cli][debug] Streamlink: 0.14.2+92.gc7bef14b\r\n[cli][debug] Requests(2.18.4), Socks(1.6.7), Websocket(0.51.0)\r\n[cli][info] Found matching plugin euronews for URL http://it.euronews.com/live\r\nerror: Unable to open URL: //euronews-it-p-api.hexaglobe.net/688afb391d4325cad6765c6dc61585a4/5b8a7b36/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls (Invalid URL '//euronews-it-p-api.hexaglobe.net/688afb391d4325cad6765c6dc61585a4/5b8a7b36/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls': No schema supplied. Perhaps you meant http:////euronews-it-p-api.hexaglobe.net/688afb391d4325cad6765c6dc61585a4/5b8a7b36/euronews/euronews-euronews-website-web-responsive-2/it/stream_info.php?format=hls?)\r\n```\r\n\r\n\r\n### Additional comments, screenshots, etc.\r\nStreamlink versions tested: 0.9.0, 0.14.2 from pip, master from git\r\nSame error with all of them.\r\n\r\n\r\n[Love Streamlink? Please consider supporting our collective. 
Thanks!](https://opencollective.com/streamlink/donate)\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream, HTTPStream\n\n\nclass Euronews(Plugin):\n _url_re = re.compile(r\"http(?:s)?://(\\w+)\\.?euronews.com/(live|.*)\")\n _re_vod = re.compile(r'<meta\\s+property=\"og:video\"\\s+content=\"(http.*?)\"\\s*/>')\n _live_api_url = \"http://{0}.euronews.com/api/watchlive.json\"\n _live_schema = validate.Schema({\n u\"url\": validate.url()\n })\n _stream_api_schema = validate.Schema({\n u'status': u'ok',\n u'primary': validate.url(),\n validate.optional(u'backup'): validate.url()\n })\n\n @classmethod\n def can_handle_url(cls, url):\n return cls._url_re.match(url)\n\n def _get_vod_stream(self):\n \"\"\"\n Find the VOD video url\n :return: video url\n \"\"\"\n res = self.session.http.get(self.url)\n video_urls = self._re_vod.findall(res.text)\n if len(video_urls):\n return dict(vod=HTTPStream(self.session, video_urls[0]))\n\n def _get_live_streams(self, subdomain):\n \"\"\"\n Get the live stream in a particular language\n :param subdomain:\n :return:\n \"\"\"\n res = self.session.http.get(self._live_api_url.format(subdomain))\n live_res = self.session.http.json(res, schema=self._live_schema)\n api_res = self.session.http.get(live_res[u\"url\"])\n stream_data = self.session.http.json(api_res, schema=self._stream_api_schema)\n return HLSStream.parse_variant_playlist(self.session, stream_data[u'primary'])\n\n def _get_streams(self):\n \"\"\"\n Find the streams for euronews\n :return:\n \"\"\"\n match = self._url_re.match(self.url)\n subdomain, path = match.groups()\n\n if path == \"live\":\n return self._get_live_streams(subdomain)\n else:\n return self._get_vod_stream()\n\n\n__plugin__ = Euronews\n", "path": "src/streamlink/plugins/euronews.py"}]} | 2,278 | 625 |
gh_patches_debug_15061 | rasdani/github-patches | git_diff | netbox-community__netbox-2996 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inconsistent /api/virtualization/interfaces/ results
<!--
Before opening a new issue, please search through the existing issues to
see if your topic has already been addressed. Note that you may need to
remove the "is:open" filter from the search bar to include closed issues.
Check the appropriate type for your issue below by placing an x between the
brackets. For assistance with installation issues, or for any other issues
other than those listed below, please raise your topic for discussion on
our mailing list:
https://groups.google.com/forum/#!forum/netbox-discuss
Please note that issues which do not fall under any of the below categories
will be closed. Due to an excessive backlog of feature requests, we are
not currently accepting any proposals which extend NetBox's feature scope.
Do not prepend any sort of tag to your issue's title. An administrator will
review your issue and assign labels as appropriate.
--->
### Issue type
[ ] Feature request <!-- An enhancement of existing functionality -->
[ x ] Bug report <!-- Unexpected or erroneous behavior -->
[ ] Documentation <!-- A modification to the documentation -->
<!--
Please describe the environment in which you are running NetBox. (Be sure
to verify that you are running the latest stable release of NetBox before
submitting a bug report.) If you are submitting a bug report and have made
any changes to the code base, please first validate that your bug can be
recreated while running an official release.
-->
### Environment
* Python version: 3.6.5
* NetBox version: 2.3.4
<!--
BUG REPORTS must include:
* A list of the steps needed for someone else to reproduce the bug
* A description of the expected and observed behavior
* Any relevant error messages (screenshots may also help)
FEATURE REQUESTS must include:
* A detailed description of the proposed functionality
* A use case for the new feature
* A rough description of any necessary changes to the database schema
* Any relevant third-party libraries which would be needed
-->
### Description
Querying all virtualized interfaces returns inconsistent results: the count is OK, but some interfaces are missing and some are duplicated.
The underlying query is ordering by an empty column ("dcim_device"."name"), which seems to fall under the non-predictable results case described at https://www.postgresql.org/docs/current/static/queries-limit.html
```sql
... WHERE "dcim_interface"."virtual_machine_id" IS NOT NULL
ORDER BY "dcim_device"."name" ASC, "dcim_interface"."name" ASC LIMIT 1000 OFFSET 50
```
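
The accepted patch below makes this deterministic by simply appending the primary key as a final tie-breaker in the interface ordering. A small self-contained simulation (toy data, not NetBox code) of why a sort key made up of ties combined with LIMIT/OFFSET-style pagination can repeat or skip rows, and why a unique tie-breaker fixes it:

```python
import random

# Toy rows standing in for VM interfaces: the device name is NULL for all of
# them, so the ORDER BY key (device_name, if_name) consists only of ties.
rows = [{"id": i, "device_name": None, "if_name": "eth0"} for i in range(10)]

def fetch_page(offset, limit, tie_breaker=False):
    random.shuffle(rows)  # stands in for PostgreSQL's plan-dependent row order
    def key(row):
        k = (row["device_name"] or "", row["if_name"])
        return k + (row["id"],) if tie_breaker else k
    ordered = sorted(rows, key=key)
    return [row["id"] for row in ordered[offset:offset + limit]]

# Ties only: consecutive "pages" may overlap and/or miss some ids.
print(fetch_page(0, 5), fetch_page(5, 5))

# Unique tie-breaker (the pk added by the fix): pages are disjoint and complete.
print(fetch_page(0, 5, tie_breaker=True), fetch_page(5, 5, tie_breaker=True))
```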
</issue>
<code>
[start of netbox/dcim/managers.py]
1 from django.db.models import Manager, QuerySet
2 from django.db.models.expressions import RawSQL
3
4 from .constants import NONCONNECTABLE_IFACE_TYPES
5
6 # Regular expressions for parsing Interface names
7 TYPE_RE = r"SUBSTRING({} FROM '^([^0-9\.:]+)')"
8 SLOT_RE = r"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9]+)?(\d{{1,9}})/') AS integer), NULL)"
9 SUBSLOT_RE = r"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9\.:]+)?\d{{1,9}}/(\d{{1,9}})') AS integer), NULL)"
10 POSITION_RE = r"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9]+)?(?:\d{{1,9}}/){{2}}(\d{{1,9}})') AS integer), NULL)"
11 SUBPOSITION_RE = r"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9]+)?(?:\d{{1,9}}/){{3}}(\d{{1,9}})') AS integer), NULL)"
12 ID_RE = r"CAST(SUBSTRING({} FROM '^(?:[^0-9\.:]+)?(\d{{1,9}})([^/]|$)') AS integer)"
13 CHANNEL_RE = r"COALESCE(CAST(SUBSTRING({} FROM '^.*:(\d{{1,9}})(\.\d{{1,9}})?$') AS integer), 0)"
14 VC_RE = r"COALESCE(CAST(SUBSTRING({} FROM '^.*\.(\d{{1,9}})$') AS integer), 0)"
15
16
17 class DeviceComponentManager(Manager):
18
19 def get_queryset(self):
20
21 queryset = super().get_queryset()
22 table_name = self.model._meta.db_table
23 sql = r"CONCAT(REGEXP_REPLACE({}.name, '\d+$', ''), LPAD(SUBSTRING({}.name FROM '\d+$'), 8, '0'))"
24
25 # Pad any trailing digits to effect natural sorting
26 return queryset.extra(
27 select={
28 'name_padded': sql.format(table_name, table_name),
29 }
30 ).order_by('name_padded', 'pk')
31
32
33 class InterfaceQuerySet(QuerySet):
34
35 def connectable(self):
36 """
37 Return only physical interfaces which are capable of being connected to other interfaces (i.e. not virtual or
38 wireless).
39 """
40 return self.exclude(form_factor__in=NONCONNECTABLE_IFACE_TYPES)
41
42
43 class InterfaceManager(Manager):
44
45 def get_queryset(self):
46 """
47 Naturally order interfaces by their type and numeric position. To order interfaces naturally, the `name` field
48 is split into eight distinct components: leading text (type), slot, subslot, position, subposition, ID, channel,
49 and virtual circuit:
50
51 {type}{slot or ID}/{subslot}/{position}/{subposition}:{channel}.{vc}
52
53 Components absent from the interface name are coalesced to zero or null. For example, an interface named
54 GigabitEthernet1/2/3 would be parsed as follows:
55
56 type = 'GigabitEthernet'
57 slot = 1
58 subslot = 2
59 position = 3
60 subposition = None
61 id = None
62 channel = 0
63 vc = 0
64
65 The original `name` field is considered in its entirety to serve as a fallback in the event interfaces do not
66 match any of the prescribed fields.
67 """
68
69 sql_col = '{}.name'.format(self.model._meta.db_table)
70 ordering = [
71 '_slot', '_subslot', '_position', '_subposition', '_type', '_id', '_channel', '_vc', 'name',
72 ]
73
74 fields = {
75 '_type': RawSQL(TYPE_RE.format(sql_col), []),
76 '_id': RawSQL(ID_RE.format(sql_col), []),
77 '_slot': RawSQL(SLOT_RE.format(sql_col), []),
78 '_subslot': RawSQL(SUBSLOT_RE.format(sql_col), []),
79 '_position': RawSQL(POSITION_RE.format(sql_col), []),
80 '_subposition': RawSQL(SUBPOSITION_RE.format(sql_col), []),
81 '_channel': RawSQL(CHANNEL_RE.format(sql_col), []),
82 '_vc': RawSQL(VC_RE.format(sql_col), []),
83 }
84
85 return InterfaceQuerySet(self.model, using=self._db).annotate(**fields).order_by(*ordering)
86
[end of netbox/dcim/managers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/dcim/managers.py b/netbox/dcim/managers.py
--- a/netbox/dcim/managers.py
+++ b/netbox/dcim/managers.py
@@ -64,11 +64,15 @@
The original `name` field is considered in its entirety to serve as a fallback in the event interfaces do not
match any of the prescribed fields.
+
+ The `id` field is included to enforce deterministic ordering of interfaces in similar vein of other device
+ components.
"""
sql_col = '{}.name'.format(self.model._meta.db_table)
ordering = [
- '_slot', '_subslot', '_position', '_subposition', '_type', '_id', '_channel', '_vc', 'name',
+ '_slot', '_subslot', '_position', '_subposition', '_type', '_id', '_channel', '_vc', 'name', 'pk'
+
]
fields = {
| {"golden_diff": "diff --git a/netbox/dcim/managers.py b/netbox/dcim/managers.py\n--- a/netbox/dcim/managers.py\n+++ b/netbox/dcim/managers.py\n@@ -64,11 +64,15 @@\n \n The original `name` field is considered in its entirety to serve as a fallback in the event interfaces do not\n match any of the prescribed fields.\n+\n+ The `id` field is included to enforce deterministic ordering of interfaces in similar vein of other device\n+ components.\n \"\"\"\n \n sql_col = '{}.name'.format(self.model._meta.db_table)\n ordering = [\n- '_slot', '_subslot', '_position', '_subposition', '_type', '_id', '_channel', '_vc', 'name',\n+ '_slot', '_subslot', '_position', '_subposition', '_type', '_id', '_channel', '_vc', 'name', 'pk'\n+\n ]\n \n fields = {\n", "issue": "Inconsistent /api/virtualization/interfaces/ results\n<!--\r\n Before opening a new issue, please search through the existing issues to\r\n see if your topic has already been addressed. Note that you may need to\r\n remove the \"is:open\" filter from the search bar to include closed issues.\r\n\r\n Check the appropriate type for your issue below by placing an x between the\r\n brackets. For assistance with installation issues, or for any other issues\r\n other than those listed below, please raise your topic for discussion on\r\n our mailing list:\r\n\r\n https://groups.google.com/forum/#!forum/netbox-discuss\r\n\r\n Please note that issues which do not fall under any of the below categories\r\n will be closed. Due to an excessive backlog of feature requests, we are\r\n not currently accepting any proposals which extend NetBox's feature scope.\r\n\r\n Do not prepend any sort of tag to your issue's title. An administrator will\r\n review your issue and assign labels as appropriate.\r\n--->\r\n### Issue type\r\n[ ] Feature request <!-- An enhancement of existing functionality -->\r\n[ x ] Bug report <!-- Unexpected or erroneous behavior -->\r\n[ ] Documentation <!-- A modification to the documentation -->\r\n\r\n<!--\r\n Please describe the environment in which you are running NetBox. (Be sure\r\n to verify that you are running the latest stable release of NetBox before\r\n submitting a bug report.) If you are submitting a bug report and have made\r\n any changes to the code base, please first validate that your bug can be\r\n recreated while running an official release.\r\n-->\r\n### Environment\r\n* Python version: 3.6.5\r\n* NetBox version: 2.3.4\r\n\r\n<!--\r\n BUG REPORTS must include:\r\n * A list of the steps needed for someone else to reproduce the bug\r\n * A description of the expected and observed behavior\r\n * Any relevant error messages (screenshots may also help)\r\n\r\n FEATURE REQUESTS must include:\r\n * A detailed description of the proposed functionality\r\n * A use case for the new feature\r\n * A rough description of any necessary changes to the database schema\r\n * Any relevant third-party libraries which would be needed\r\n-->\r\n### Description\r\n\r\nQuerying all virtualized interfaces returns inconsistents results : the count is OK, but some interfaces are missing and some are duplicated.\r\n\r\nThe underlying query is ordering by an empty column (\"dcim_device\".\"name\") which seems to fall under the non predictable results case : https://www.postgresql.org/docs/current/static/queries-limit.html\r\n\r\n```sql\r\n... 
WHERE \"dcim_interface\".\"virtual_machine_id\" IS NOT NULL \r\nORDER BY \"dcim_device\".\"name\" ASC, \"dcim_interface\".\"name\" ASC LIMIT 1000 OFFSET 50\r\n```\n", "before_files": [{"content": "from django.db.models import Manager, QuerySet\nfrom django.db.models.expressions import RawSQL\n\nfrom .constants import NONCONNECTABLE_IFACE_TYPES\n\n# Regular expressions for parsing Interface names\nTYPE_RE = r\"SUBSTRING({} FROM '^([^0-9\\.:]+)')\"\nSLOT_RE = r\"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9]+)?(\\d{{1,9}})/') AS integer), NULL)\"\nSUBSLOT_RE = r\"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9\\.:]+)?\\d{{1,9}}/(\\d{{1,9}})') AS integer), NULL)\"\nPOSITION_RE = r\"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9]+)?(?:\\d{{1,9}}/){{2}}(\\d{{1,9}})') AS integer), NULL)\"\nSUBPOSITION_RE = r\"COALESCE(CAST(SUBSTRING({} FROM '^(?:[^0-9]+)?(?:\\d{{1,9}}/){{3}}(\\d{{1,9}})') AS integer), NULL)\"\nID_RE = r\"CAST(SUBSTRING({} FROM '^(?:[^0-9\\.:]+)?(\\d{{1,9}})([^/]|$)') AS integer)\"\nCHANNEL_RE = r\"COALESCE(CAST(SUBSTRING({} FROM '^.*:(\\d{{1,9}})(\\.\\d{{1,9}})?$') AS integer), 0)\"\nVC_RE = r\"COALESCE(CAST(SUBSTRING({} FROM '^.*\\.(\\d{{1,9}})$') AS integer), 0)\"\n\n\nclass DeviceComponentManager(Manager):\n\n def get_queryset(self):\n\n queryset = super().get_queryset()\n table_name = self.model._meta.db_table\n sql = r\"CONCAT(REGEXP_REPLACE({}.name, '\\d+$', ''), LPAD(SUBSTRING({}.name FROM '\\d+$'), 8, '0'))\"\n\n # Pad any trailing digits to effect natural sorting\n return queryset.extra(\n select={\n 'name_padded': sql.format(table_name, table_name),\n }\n ).order_by('name_padded', 'pk')\n\n\nclass InterfaceQuerySet(QuerySet):\n\n def connectable(self):\n \"\"\"\n Return only physical interfaces which are capable of being connected to other interfaces (i.e. not virtual or\n wireless).\n \"\"\"\n return self.exclude(form_factor__in=NONCONNECTABLE_IFACE_TYPES)\n\n\nclass InterfaceManager(Manager):\n\n def get_queryset(self):\n \"\"\"\n Naturally order interfaces by their type and numeric position. To order interfaces naturally, the `name` field\n is split into eight distinct components: leading text (type), slot, subslot, position, subposition, ID, channel,\n and virtual circuit:\n\n {type}{slot or ID}/{subslot}/{position}/{subposition}:{channel}.{vc}\n\n Components absent from the interface name are coalesced to zero or null. For example, an interface named\n GigabitEthernet1/2/3 would be parsed as follows:\n\n type = 'GigabitEthernet'\n slot = 1\n subslot = 2\n position = 3\n subposition = None\n id = None\n channel = 0\n vc = 0\n\n The original `name` field is considered in its entirety to serve as a fallback in the event interfaces do not\n match any of the prescribed fields.\n \"\"\"\n\n sql_col = '{}.name'.format(self.model._meta.db_table)\n ordering = [\n '_slot', '_subslot', '_position', '_subposition', '_type', '_id', '_channel', '_vc', 'name',\n ]\n\n fields = {\n '_type': RawSQL(TYPE_RE.format(sql_col), []),\n '_id': RawSQL(ID_RE.format(sql_col), []),\n '_slot': RawSQL(SLOT_RE.format(sql_col), []),\n '_subslot': RawSQL(SUBSLOT_RE.format(sql_col), []),\n '_position': RawSQL(POSITION_RE.format(sql_col), []),\n '_subposition': RawSQL(SUBPOSITION_RE.format(sql_col), []),\n '_channel': RawSQL(CHANNEL_RE.format(sql_col), []),\n '_vc': RawSQL(VC_RE.format(sql_col), []),\n }\n\n return InterfaceQuerySet(self.model, using=self._db).annotate(**fields).order_by(*ordering)\n", "path": "netbox/dcim/managers.py"}]} | 2,249 | 210 |
gh_patches_debug_5600 | rasdani/github-patches | git_diff | deepset-ai__haystack-6503 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`TransformersSimilarityRanker` breaks when `documents` list consists of a single document
**Describe the bug**
`TransformersSimilarityRanker` throws an error if its input for `documents` consists of a list containing only a single Document.
**Error message**
```
Traceback (most recent call last):
File "/Users/bogdan/Repositories/haystack/haystack/core/pipeline/pipeline.py", line 634, in _run_component
outputs = instance.run(**inputs)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bogdan/Repositories/haystack/haystack/components/rankers/transformers_similarity.py", line 130, in run
for sorted_index_tensor in sorted_indices:
File "/Users/bogdan/miniconda3/envs/haystack/lib/python3.11/site-packages/torch/_tensor.py", line 990, in __iter__
raise TypeError("iteration over a 0-d tensor")
TypeError: iteration over a 0-d tensor
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/bogdan/Repositories/haystack/bogdan_scripts/advent2.py", line 39, in <module>
result = pipeline.run({
^^^^^^^^^^^^^^
File "/Users/bogdan/Repositories/haystack/haystack/pipeline.py", line 85, in run
return self._run_internal(data=data, debug=debug)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bogdan/Repositories/haystack/haystack/pipeline.py", line 111, in _run_internal
return super().run(data=data, debug=debug)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bogdan/Repositories/haystack/haystack/core/pipeline/pipeline.py", line 491, in run
outputs = self._run_component(name=component_name, inputs=dict(inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/bogdan/Repositories/haystack/haystack/core/pipeline/pipeline.py", line 647, in _run_component
raise PipelineRuntimeError(
haystack.core.errors.PipelineRuntimeError: ranker raised 'TypeError: iteration over a 0-d tensor'
Inputs: {'query': 'What is our favorite animal?', 'documents': [Document(id=763a01ff13b87fd4f8bcb0a2cd0903f3aa6522002ac4e1f2fe1590fb96b48d16, content: 'Try out Haystack 2.0-Beta to discover what’s coming in the next major release
with 10 challenges in ...', meta: {'content_type': 'text/html', 'url': 'https://haystack.deepset.ai/advent-of-haystack/day-1#challenge', 'source_id': '1754c44adcb6ab95cd02b13c6b7c5b265d9de856f381e5717d6abfd4bc1ffc8e'})]}
See the stacktrace above for more information.
```
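
The traceback comes down to plain PyTorch behaviour: `Tensor.squeeze()` with no arguments drops every size-1 dimension, so the `(1, 1)` logits produced for a single query/document pair collapse into a 0-d tensor, and 0-d tensors cannot be iterated. A minimal standalone reproduction (pure PyTorch, not Haystack code), with one possible way of keeping a 1-D tensor shown purely for illustration:

```python
import torch

logits = torch.tensor([[0.7]])      # shape (1, 1): a single query/document pair
scores = logits.squeeze()           # drops all size-1 dims -> 0-d tensor
print(scores.shape)                 # torch.Size([])
_, indices = torch.sort(scores, descending=True)
try:
    list(indices)                   # same failure as in the traceback above
except TypeError as exc:
    print(exc)                      # iteration over a 0-d tensor

scores = logits.squeeze(dim=1)      # drop only the trailing dim -> shape (1,)
_, indices = torch.sort(scores, descending=True)
print([i.item() for i in indices])  # [0]
```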
**Expected behavior**
The pipeline should run fine if the ranker gets only a single document.
**Additional context**
I noticed this error while trying to solve day 2 of the advent of Haystack.
I'll create a PR to fix this :)
**To Reproduce**
```python
from haystack.components.fetchers.link_content import LinkContentFetcher
from haystack.components.converters import HTMLToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.rankers import TransformersSimilarityRanker
from haystack.components.generators import GPTGenerator
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack import Pipeline
openai_api_key = <API_KEY>
prompt_template = """
According to these documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Answer the given question: {{question}}
Answer:
"""
prompt_builder = PromptBuilder(template=prompt_template)
pipeline = Pipeline()
pipeline.add_component("fetcher", instance=LinkContentFetcher())
pipeline.add_component("converter", instance=HTMLToDocument())
pipeline.add_component("splitter", instance=DocumentSplitter())
pipeline.add_component("ranker", instance=TransformersSimilarityRanker())
pipeline.add_component("prompt_builder", instance=prompt_builder)
pipeline.add_component("generator", instance=GPTGenerator(api_key=openai_api_key))
pipeline.connect("fetcher", "converter")
pipeline.connect("converter", "splitter")
pipeline.connect("splitter.documents", "ranker.documents")
pipeline.connect("ranker.documents", "prompt_builder.documents")
pipeline.connect("prompt_builder", "generator")
question = "What is our favorite animal?"
result = pipeline.run({
"prompt_builder": {"question": question},
"ranker": {"query": question},
"fetcher": {"urls": ["https://haystack.deepset.ai/advent-of-haystack/day-1#challenge"]}
})
print(result['generator']['replies'][0])
```
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Google Colab
- Haystack version (commit or version number): 2.0-beta
</issue>
<code>
[start of haystack/components/rankers/transformers_similarity.py]
1 import logging
2 from pathlib import Path
3 from typing import List, Union, Dict, Any, Optional
4
5 from haystack import ComponentError, Document, component, default_to_dict
6 from haystack.lazy_imports import LazyImport
7
8 logger = logging.getLogger(__name__)
9
10
11 with LazyImport(message="Run 'pip install transformers[torch,sentencepiece]'") as torch_and_transformers_import:
12 import torch
13 from transformers import AutoModelForSequenceClassification, AutoTokenizer
14
15
16 @component
17 class TransformersSimilarityRanker:
18 """
19 Ranks Documents based on their similarity to the query.
20 It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.
21
22 Usage example:
23 ```
24 from haystack import Document
25 from haystack.components.rankers import TransformersSimilarityRanker
26
27 ranker = TransformersSimilarityRanker()
28 docs = [Document(content="Paris"), Document(content="Berlin")]
29 query = "City in Germany"
30 output = ranker.run(query=query, documents=docs)
31 docs = output["documents"]
32 assert len(docs) == 2
33 assert docs[0].content == "Berlin"
34 ```
35 """
36
37 def __init__(
38 self,
39 model_name_or_path: Union[str, Path] = "cross-encoder/ms-marco-MiniLM-L-6-v2",
40 device: str = "cpu",
41 token: Union[bool, str, None] = None,
42 top_k: int = 10,
43 ):
44 """
45 Creates an instance of TransformersSimilarityRanker.
46
47 :param model_name_or_path: The name or path of a pre-trained cross-encoder model
48 from the Hugging Face Hub.
49 :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.
50 :param token: The API token used to download private models from Hugging Face.
51 If this parameter is set to `True`, the token generated when running
52 `transformers-cli login` (stored in ~/.huggingface) is used.
53 :param top_k: The maximum number of Documents to return per query.
54 """
55 torch_and_transformers_import.check()
56
57 self.model_name_or_path = model_name_or_path
58 if top_k <= 0:
59 raise ValueError(f"top_k must be > 0, but got {top_k}")
60 self.top_k = top_k
61 self.device = device
62 self.token = token
63 self.model = None
64 self.tokenizer = None
65
66 def _get_telemetry_data(self) -> Dict[str, Any]:
67 """
68 Data that is sent to Posthog for usage analytics.
69 """
70 return {"model": str(self.model_name_or_path)}
71
72 def warm_up(self):
73 """
74 Warm up the model and tokenizer used for scoring the Documents.
75 """
76 if self.model_name_or_path and not self.model:
77 self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)
78 self.model = self.model.to(self.device)
79 self.model.eval()
80 self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)
81
82 def to_dict(self) -> Dict[str, Any]:
83 """
84 Serialize this component to a dictionary.
85 """
86 return default_to_dict(
87 self,
88 device=self.device,
89 model_name_or_path=self.model_name_or_path,
90 token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens
91 top_k=self.top_k,
92 )
93
94 @component.output_types(documents=List[Document])
95 def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):
96 """
97 Returns a list of Documents ranked by their similarity to the given query.
98
99 :param query: Query string.
100 :param documents: List of Documents.
101 :param top_k: The maximum number of Documents you want the Ranker to return.
102 :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.
103 """
104 if not documents:
105 return {"documents": []}
106
107 if top_k is None:
108 top_k = self.top_k
109
110 elif top_k <= 0:
111 raise ValueError(f"top_k must be > 0, but got {top_k}")
112
113 # If a model path is provided but the model isn't loaded
114 if self.model_name_or_path and not self.model:
115 raise ComponentError(
116 f"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'."
117 )
118
119 query_doc_pairs = [[query, doc.content] for doc in documents]
120 features = self.tokenizer(
121 query_doc_pairs, padding=True, truncation=True, return_tensors="pt"
122 ).to( # type: ignore
123 self.device
124 )
125 with torch.inference_mode():
126 similarity_scores = self.model(**features).logits.squeeze() # type: ignore
127
128 _, sorted_indices = torch.sort(similarity_scores, descending=True)
129 ranked_docs = []
130 for sorted_index_tensor in sorted_indices:
131 i = sorted_index_tensor.item()
132 documents[i].score = similarity_scores[i].item()
133 ranked_docs.append(documents[i])
134 return {"documents": ranked_docs[:top_k]}
135
[end of haystack/components/rankers/transformers_similarity.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/haystack/components/rankers/transformers_similarity.py b/haystack/components/rankers/transformers_similarity.py
--- a/haystack/components/rankers/transformers_similarity.py
+++ b/haystack/components/rankers/transformers_similarity.py
@@ -123,7 +123,7 @@
self.device
)
with torch.inference_mode():
- similarity_scores = self.model(**features).logits.squeeze() # type: ignore
+ similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore
_, sorted_indices = torch.sort(similarity_scores, descending=True)
ranked_docs = []
| {"golden_diff": "diff --git a/haystack/components/rankers/transformers_similarity.py b/haystack/components/rankers/transformers_similarity.py\n--- a/haystack/components/rankers/transformers_similarity.py\n+++ b/haystack/components/rankers/transformers_similarity.py\n@@ -123,7 +123,7 @@\n self.device\n )\n with torch.inference_mode():\n- similarity_scores = self.model(**features).logits.squeeze() # type: ignore\n+ similarity_scores = self.model(**features).logits.squeeze(dim=1) # type: ignore\n \n _, sorted_indices = torch.sort(similarity_scores, descending=True)\n ranked_docs = []\n", "issue": "`TransformersSimilarityRanker` breaks when `documents` list consists of a single document\n**Describe the bug**\r\n`TransformersSimilarityRanker` throws an error if its input for `documents` consists of a list containing only a single Document.\r\n\r\n**Error message**\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/bogdan/Repositories/haystack/haystack/core/pipeline/pipeline.py\", line 634, in _run_component\r\n outputs = instance.run(**inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bogdan/Repositories/haystack/haystack/components/rankers/transformers_similarity.py\", line 130, in run\r\n for sorted_index_tensor in sorted_indices:\r\n File \"/Users/bogdan/miniconda3/envs/haystack/lib/python3.11/site-packages/torch/_tensor.py\", line 990, in __iter__\r\n raise TypeError(\"iteration over a 0-d tensor\")\r\nTypeError: iteration over a 0-d tensor\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/bogdan/Repositories/haystack/bogdan_scripts/advent2.py\", line 39, in <module>\r\n result = pipeline.run({\r\n ^^^^^^^^^^^^^^\r\n File \"/Users/bogdan/Repositories/haystack/haystack/pipeline.py\", line 85, in run\r\n return self._run_internal(data=data, debug=debug)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bogdan/Repositories/haystack/haystack/pipeline.py\", line 111, in _run_internal\r\n return super().run(data=data, debug=debug)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bogdan/Repositories/haystack/haystack/core/pipeline/pipeline.py\", line 491, in run\r\n outputs = self._run_component(name=component_name, inputs=dict(inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bogdan/Repositories/haystack/haystack/core/pipeline/pipeline.py\", line 647, in _run_component\r\n raise PipelineRuntimeError(\r\nhaystack.core.errors.PipelineRuntimeError: ranker raised 'TypeError: iteration over a 0-d tensor' \r\nInputs: {'query': 'What is our favorite animal?', 'documents': [Document(id=763a01ff13b87fd4f8bcb0a2cd0903f3aa6522002ac4e1f2fe1590fb96b48d16, content: 'Try out Haystack 2.0-Beta to discover what\u2019s coming in the next major release\r\nwith 10 challenges in ...', meta: {'content_type': 'text/html', 'url': 'https://haystack.deepset.ai/advent-of-haystack/day-1#challenge', 'source_id': '1754c44adcb6ab95cd02b13c6b7c5b265d9de856f381e5717d6abfd4bc1ffc8e'})]}\r\n\r\nSee the stacktrace above for more information.\r\n```\r\n\r\n**Expected behavior**\r\nThe pipeline should run fine if the ranker gets only a single document. \r\n\r\n**Additional context**\r\nI noticed this error while trying to solve day 2 of the advent of Haystack. 
\r\nI'll create a PR to fix this :) \r\n\r\n**To Reproduce**\r\n```python\r\nfrom haystack.components.fetchers.link_content import LinkContentFetcher\r\nfrom haystack.components.converters import HTMLToDocument\r\nfrom haystack.components.preprocessors import DocumentSplitter\r\nfrom haystack.components.rankers import TransformersSimilarityRanker\r\nfrom haystack.components.generators import GPTGenerator\r\nfrom haystack.components.builders.prompt_builder import PromptBuilder\r\nfrom haystack import Pipeline\r\n\r\n\r\nopenai_api_key = <API_KEY>\r\nprompt_template = \"\"\"\r\nAccording to these documents:\r\n\r\n{% for doc in documents %}\r\n {{ doc.content }}\r\n{% endfor %}\r\n\r\nAnswer the given question: {{question}}\r\nAnswer:\r\n\"\"\"\r\nprompt_builder = PromptBuilder(template=prompt_template)\r\n\r\npipeline = Pipeline()\r\npipeline.add_component(\"fetcher\", instance=LinkContentFetcher())\r\npipeline.add_component(\"converter\", instance=HTMLToDocument())\r\npipeline.add_component(\"splitter\", instance=DocumentSplitter())\r\npipeline.add_component(\"ranker\", instance=TransformersSimilarityRanker())\r\npipeline.add_component(\"prompt_builder\", instance=prompt_builder)\r\npipeline.add_component(\"generator\", instance=GPTGenerator(api_key=openai_api_key))\r\n\r\npipeline.connect(\"fetcher\", \"converter\")\r\npipeline.connect(\"converter\", \"splitter\")\r\npipeline.connect(\"splitter.documents\", \"ranker.documents\")\r\npipeline.connect(\"ranker.documents\", \"prompt_builder.documents\")\r\npipeline.connect(\"prompt_builder\", \"generator\")\r\n\r\n\r\nquestion = \"What is our favorite animal?\"\r\nresult = pipeline.run({\r\n \"prompt_builder\": {\"question\": question},\r\n \"ranker\": {\"query\": question},\r\n \"fetcher\": {\"urls\": [\"https://haystack.deepset.ai/advent-of-haystack/day-1#challenge\"]}\r\n})\r\n\r\nprint(result['generator']['replies'][0])\r\n```\r\n\r\n**FAQ Check**\r\n- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?\r\n\r\n**System:**\r\n - OS: Google Colab \r\n - Haystack version (commit or version number): 2.0-beta\r\n\n", "before_files": [{"content": "import logging\nfrom pathlib import Path\nfrom typing import List, Union, Dict, Any, Optional\n\nfrom haystack import ComponentError, Document, component, default_to_dict\nfrom haystack.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\n\nwith LazyImport(message=\"Run 'pip install transformers[torch,sentencepiece]'\") as torch_and_transformers_import:\n import torch\n from transformers import AutoModelForSequenceClassification, AutoTokenizer\n\n\n@component\nclass TransformersSimilarityRanker:\n \"\"\"\n Ranks Documents based on their similarity to the query.\n It uses a pre-trained cross-encoder model (from the Hugging Face Hub) to embed the query and the Documents.\n\n Usage example:\n ```\n from haystack import Document\n from haystack.components.rankers import TransformersSimilarityRanker\n\n ranker = TransformersSimilarityRanker()\n docs = [Document(content=\"Paris\"), Document(content=\"Berlin\")]\n query = \"City in Germany\"\n output = ranker.run(query=query, documents=docs)\n docs = output[\"documents\"]\n assert len(docs) == 2\n assert docs[0].content == \"Berlin\"\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: Union[str, Path] = \"cross-encoder/ms-marco-MiniLM-L-6-v2\",\n device: str = \"cpu\",\n token: Union[bool, str, None] = None,\n top_k: int = 10,\n ):\n \"\"\"\n Creates an instance of 
TransformersSimilarityRanker.\n\n :param model_name_or_path: The name or path of a pre-trained cross-encoder model\n from the Hugging Face Hub.\n :param device: The torch device (for example, cuda:0, cpu, mps) to which you want to limit model inference.\n :param token: The API token used to download private models from Hugging Face.\n If this parameter is set to `True`, the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) is used.\n :param top_k: The maximum number of Documents to return per query.\n \"\"\"\n torch_and_transformers_import.check()\n\n self.model_name_or_path = model_name_or_path\n if top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n self.top_k = top_k\n self.device = device\n self.token = token\n self.model = None\n self.tokenizer = None\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n return {\"model\": str(self.model_name_or_path)}\n\n def warm_up(self):\n \"\"\"\n Warm up the model and tokenizer used for scoring the Documents.\n \"\"\"\n if self.model_name_or_path and not self.model:\n self.model = AutoModelForSequenceClassification.from_pretrained(self.model_name_or_path, token=self.token)\n self.model = self.model.to(self.device)\n self.model.eval()\n self.tokenizer = AutoTokenizer.from_pretrained(self.model_name_or_path, token=self.token)\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n return default_to_dict(\n self,\n device=self.device,\n model_name_or_path=self.model_name_or_path,\n token=self.token if not isinstance(self.token, str) else None, # don't serialize valid tokens\n top_k=self.top_k,\n )\n\n @component.output_types(documents=List[Document])\n def run(self, query: str, documents: List[Document], top_k: Optional[int] = None):\n \"\"\"\n Returns a list of Documents ranked by their similarity to the given query.\n\n :param query: Query string.\n :param documents: List of Documents.\n :param top_k: The maximum number of Documents you want the Ranker to return.\n :return: List of Documents sorted by their similarity to the query with the most similar Documents appearing first.\n \"\"\"\n if not documents:\n return {\"documents\": []}\n\n if top_k is None:\n top_k = self.top_k\n\n elif top_k <= 0:\n raise ValueError(f\"top_k must be > 0, but got {top_k}\")\n\n # If a model path is provided but the model isn't loaded\n if self.model_name_or_path and not self.model:\n raise ComponentError(\n f\"The component {self.__class__.__name__} wasn't warmed up. Run 'warm_up()' before calling 'run()'.\"\n )\n\n query_doc_pairs = [[query, doc.content] for doc in documents]\n features = self.tokenizer(\n query_doc_pairs, padding=True, truncation=True, return_tensors=\"pt\"\n ).to( # type: ignore\n self.device\n )\n with torch.inference_mode():\n similarity_scores = self.model(**features).logits.squeeze() # type: ignore\n\n _, sorted_indices = torch.sort(similarity_scores, descending=True)\n ranked_docs = []\n for sorted_index_tensor in sorted_indices:\n i = sorted_index_tensor.item()\n documents[i].score = similarity_scores[i].item()\n ranked_docs.append(documents[i])\n return {\"documents\": ranked_docs[:top_k]}\n", "path": "haystack/components/rankers/transformers_similarity.py"}]} | 3,194 | 149 |
gh_patches_debug_18703 | rasdani/github-patches | git_diff | mdn__kuma-7102 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError on sendinblue
https://sentry.prod.mozaws.net/operations/mdn-prod/issues/8473154/?referrer=github_plugin
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
File "celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "newrelic/hooks/application_celery.py", line 85, in wrapper
return wrapped(*args, **kwargs)
File "celery/app/trace.py", line 650, in __protected_call__
return self.run(*args, **kwargs)
File "kuma/users/newsletter/tasks.py", line 29, in create_or_update_contact
"listIds": [int(settings.SENDINBLUE_LIST_ID)],
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
</issue>
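The crash boils down to `int(None)`: `SENDINBLUE_LIST_ID` (and, by implication, the rest of the Sendinblue configuration) is unset in this environment. Below is a minimal sketch of a task-level guard; only the setting names and the function name come from the traceback, everything else is assumed for illustration. The patch recorded further down in this entry instead short-circuits `AppConfig.ready()` and pins test values:

```python
# Sketch only: bail out early when the newsletter integration is not configured.
from django.conf import settings


def create_or_update_contact(email):
    # `email` is an assumed parameter; the settings names are taken from the traceback.
    if not settings.SENDINBLUE_API_KEY or settings.SENDINBLUE_LIST_ID is None:
        return  # integration disabled, nothing to sync
    payload = {
        "email": email,
        "listIds": [int(settings.SENDINBLUE_LIST_ID)],
    }
    # ... submit `payload` to the Sendinblue contacts API here ...
```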
<code>
[start of kuma/users/newsletter/apps.py]
1 from django.apps import AppConfig
2 from django.core.checks import register
3 from django.utils.translation import gettext_lazy as _
4
5
6 class UserNewsletterConfig(AppConfig):
7 """
8 The Django App Config class to store information about the users app
9 and do startup time things.
10 """
11
12 name = "kuma.users.newsletter"
13 verbose_name = _("UserNewsletter")
14
15 def ready(self):
16 # Connect signal handlers
17 from . import signal_handlers # noqa
18
19 from .checks import sendinblue_check
20
21 register(sendinblue_check)
22
[end of kuma/users/newsletter/apps.py]
[start of kuma/settings/pytest.py]
1 from .local import *
2
3 DEBUG = False
4 ENABLE_RESTRICTIONS_BY_HOST = False
5 TEMPLATES[0]["OPTIONS"]["debug"] = True # Enable recording of templates
6 CELERY_TASK_ALWAYS_EAGER = True
7 CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
8 ES_LIVE_INDEX = config("ES_LIVE_INDEX", default=False, cast=bool)
9
10 # Disable the Constance database cache
11 CONSTANCE_DATABASE_CACHE_BACKEND = False
12
13 # SHA1 because it is fast, and hard-coded in the test fixture JSON.
14 PASSWORD_HASHERS = ("django.contrib.auth.hashers.SHA1PasswordHasher",)
15
16 INSTALLED_APPS += ("kuma.core.tests.taggit_extras",)
17
18 LOGGING["loggers"].update(
19 {
20 "django.db.backends": {
21 "handlers": ["console"],
22 "propagate": True,
23 "level": "WARNING",
24 },
25 "kuma.search.utils": {"handlers": [], "propagate": False, "level": "CRITICAL"},
26 }
27 )
28
29
30 # Change the cache key prefix for tests, to avoid overwriting runtime.
31 for cache_settings in CACHES.values():
32 current_prefix = cache_settings.get("KEY_PREFIX", "")
33 cache_settings["KEY_PREFIX"] = "test." + current_prefix
34
35 # Use un-versioned file names, like main.css, instead of versioned
36 # filenames requiring hashing, like mdn.1cb62215bf0c.css
37 STATICFILES_STORAGE = "pipeline.storage.PipelineStorage"
38
39 # Switch Pipeline to DEBUG=False / Production values
40
41 # The documents claim True means assets should be compressed, which seems like
42 # more work, but it is 4x slower when False, maybe because it detects the
43 # existence of the file and skips generating a new one.
44 PIPELINE["PIPELINE_ENABLED"] = True
45
46 # The documents suggest this does nothing when PIPELINE_ENABLED=True. But,
47 # testing shows that tests run faster when set to True.
48 PIPELINE["PIPELINE_COLLECTOR_ENABLED"] = True
49
50 # We need the real Sass compiler here instead of the pass-through used for
51 # local dev.
52 PIPELINE["COMPILERS"] = ("pipeline.compilers.sass.SASSCompiler",)
53
54 # Testing with django-pipeline 1.6.8, PipelineStorage
55 # Enabled=T, Collector=T - 482s
56 # Enabled=T, Collector=F - 535s
57 # Enabled=F, Collector=T - 18262s
58 # Enabled=F, Collector=F - 2043s
59
60 # Defer to django-pipeline's finders for testing
61 # This avoids reading the static folder for each test client request, for
62 # a 10x speedup on Docker on MacOS.
63 WHITENOISE_AUTOREFRESH = True
64 WHITENOISE_USE_FINDERS = True
65
66 # Never rely on the .env
67 GOOGLE_ANALYTICS_ACCOUNT = None
68
69 # Silence warnings about defaults that change in django-storages 2.0
70 AWS_BUCKET_ACL = None
71 AWS_DEFAULT_ACL = None
72
73 # Use a dedicated minio bucket for tests
74 ATTACHMENTS_AWS_STORAGE_BUCKET_NAME = "test"
75
76 # Never enabled in tests.
77 SENTRY_DSN = None
78
79 # To make absolutely sure we never accidentally trigger the GA tracking
80 # within tests to the actual (and default) www.google-analytics.com this is
81 # an extra safeguard.
82 GOOGLE_ANALYTICS_TRACKING_URL = "https://thisllneverwork.example.com/collect"
83
84 # Because that's what all the tests presume.
85 SITE_ID = 1
86
87 # Because it's on by default
88 ENABLE_BCD_SIGNAL = True
89
90 # Stripe API KEY settings
91 STRIPE_PUBLIC_KEY = "testing"
92 STRIPE_SECRET_KEY = "testing"
93 STRIPE_PLAN_ID = "testing"
94
95 # For legacy reasons, the tests assume these are always true so don't
96 # let local overrides take effect.
97 INDEX_HTML_ATTRIBUTES = True
98 INDEX_CSS_CLASSNAMES = True
99
100 # Amount for the monthly subscription.
101 # It's hardcoded here in case some test depends on the number and it futureproofs
102 # our tests to not deviate when the actual number changes since that number
103 # change shouldn't affect the tests.
104 CONTRIBUTION_AMOUNT_USD = 4.99
105
[end of kuma/settings/pytest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kuma/settings/pytest.py b/kuma/settings/pytest.py
--- a/kuma/settings/pytest.py
+++ b/kuma/settings/pytest.py
@@ -102,3 +102,6 @@
# our tests to not deviate when the actual number changes since that number
# change shouldn't affect the tests.
CONTRIBUTION_AMOUNT_USD = 4.99
+
+SENDINBLUE_API_KEY = "testing"
+SENDINBLUE_LIST_ID = 7327
diff --git a/kuma/users/newsletter/apps.py b/kuma/users/newsletter/apps.py
--- a/kuma/users/newsletter/apps.py
+++ b/kuma/users/newsletter/apps.py
@@ -1,4 +1,5 @@
from django.apps import AppConfig
+from django.conf import settings
from django.core.checks import register
from django.utils.translation import gettext_lazy as _
@@ -13,6 +14,9 @@
verbose_name = _("UserNewsletter")
def ready(self):
+ if not settings.SENDINBLUE_API_KEY:
+ return
+
# Connect signal handlers
from . import signal_handlers # noqa
| {"golden_diff": "diff --git a/kuma/settings/pytest.py b/kuma/settings/pytest.py\n--- a/kuma/settings/pytest.py\n+++ b/kuma/settings/pytest.py\n@@ -102,3 +102,6 @@\n # our tests to not deviate when the actual number changes since that number\n # change shouldn't affect the tests.\n CONTRIBUTION_AMOUNT_USD = 4.99\n+\n+SENDINBLUE_API_KEY = \"testing\"\n+SENDINBLUE_LIST_ID = 7327\ndiff --git a/kuma/users/newsletter/apps.py b/kuma/users/newsletter/apps.py\n--- a/kuma/users/newsletter/apps.py\n+++ b/kuma/users/newsletter/apps.py\n@@ -1,4 +1,5 @@\n from django.apps import AppConfig\n+from django.conf import settings\n from django.core.checks import register\n from django.utils.translation import gettext_lazy as _\n \n@@ -13,6 +14,9 @@\n verbose_name = _(\"UserNewsletter\")\n \n def ready(self):\n+ if not settings.SENDINBLUE_API_KEY:\n+ return\n+\n # Connect signal handlers\n from . import signal_handlers # noqa\n", "issue": "TypeError on sendinblue\nhttps://sentry.prod.mozaws.net/operations/mdn-prod/issues/8473154/?referrer=github_plugin\n\n```\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\n File \"celery/app/trace.py\", line 385, in trace_task\n R = retval = fun(*args, **kwargs)\n File \"newrelic/hooks/application_celery.py\", line 85, in wrapper\n return wrapped(*args, **kwargs)\n File \"celery/app/trace.py\", line 650, in __protected_call__\n return self.run(*args, **kwargs)\n File \"kuma/users/newsletter/tasks.py\", line 29, in create_or_update_contact\n \"listIds\": [int(settings.SENDINBLUE_LIST_ID)],\n\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\n```\n", "before_files": [{"content": "from django.apps import AppConfig\nfrom django.core.checks import register\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass UserNewsletterConfig(AppConfig):\n \"\"\"\n The Django App Config class to store information about the users app\n and do startup time things.\n \"\"\"\n\n name = \"kuma.users.newsletter\"\n verbose_name = _(\"UserNewsletter\")\n\n def ready(self):\n # Connect signal handlers\n from . 
import signal_handlers # noqa\n\n from .checks import sendinblue_check\n\n register(sendinblue_check)\n", "path": "kuma/users/newsletter/apps.py"}, {"content": "from .local import *\n\nDEBUG = False\nENABLE_RESTRICTIONS_BY_HOST = False\nTEMPLATES[0][\"OPTIONS\"][\"debug\"] = True # Enable recording of templates\nCELERY_TASK_ALWAYS_EAGER = True\nCELERY_EAGER_PROPAGATES_EXCEPTIONS = True\nES_LIVE_INDEX = config(\"ES_LIVE_INDEX\", default=False, cast=bool)\n\n# Disable the Constance database cache\nCONSTANCE_DATABASE_CACHE_BACKEND = False\n\n# SHA1 because it is fast, and hard-coded in the test fixture JSON.\nPASSWORD_HASHERS = (\"django.contrib.auth.hashers.SHA1PasswordHasher\",)\n\nINSTALLED_APPS += (\"kuma.core.tests.taggit_extras\",)\n\nLOGGING[\"loggers\"].update(\n {\n \"django.db.backends\": {\n \"handlers\": [\"console\"],\n \"propagate\": True,\n \"level\": \"WARNING\",\n },\n \"kuma.search.utils\": {\"handlers\": [], \"propagate\": False, \"level\": \"CRITICAL\"},\n }\n)\n\n\n# Change the cache key prefix for tests, to avoid overwriting runtime.\nfor cache_settings in CACHES.values():\n current_prefix = cache_settings.get(\"KEY_PREFIX\", \"\")\n cache_settings[\"KEY_PREFIX\"] = \"test.\" + current_prefix\n\n# Use un-versioned file names, like main.css, instead of versioned\n# filenames requiring hashing, like mdn.1cb62215bf0c.css\nSTATICFILES_STORAGE = \"pipeline.storage.PipelineStorage\"\n\n# Switch Pipeline to DEBUG=False / Production values\n\n# The documents claim True means assets should be compressed, which seems like\n# more work, but it is 4x slower when False, maybe because it detects the\n# existence of the file and skips generating a new one.\nPIPELINE[\"PIPELINE_ENABLED\"] = True\n\n# The documents suggest this does nothing when PIPELINE_ENABLED=True. 
But,\n# testing shows that tests run faster when set to True.\nPIPELINE[\"PIPELINE_COLLECTOR_ENABLED\"] = True\n\n# We need the real Sass compiler here instead of the pass-through used for\n# local dev.\nPIPELINE[\"COMPILERS\"] = (\"pipeline.compilers.sass.SASSCompiler\",)\n\n# Testing with django-pipeline 1.6.8, PipelineStorage\n# Enabled=T, Collector=T - 482s\n# Enabled=T, Collector=F - 535s\n# Enabled=F, Collector=T - 18262s\n# Enabled=F, Collector=F - 2043s\n\n# Defer to django-pipeline's finders for testing\n# This avoids reading the static folder for each test client request, for\n# a 10x speedup on Docker on MacOS.\nWHITENOISE_AUTOREFRESH = True\nWHITENOISE_USE_FINDERS = True\n\n# Never rely on the .env\nGOOGLE_ANALYTICS_ACCOUNT = None\n\n# Silence warnings about defaults that change in django-storages 2.0\nAWS_BUCKET_ACL = None\nAWS_DEFAULT_ACL = None\n\n# Use a dedicated minio bucket for tests\nATTACHMENTS_AWS_STORAGE_BUCKET_NAME = \"test\"\n\n# Never enabled in tests.\nSENTRY_DSN = None\n\n# To make absolutely sure we never accidentally trigger the GA tracking\n# within tests to the actual (and default) www.google-analytics.com this is\n# an extra safeguard.\nGOOGLE_ANALYTICS_TRACKING_URL = \"https://thisllneverwork.example.com/collect\"\n\n# Because that's what all the tests presume.\nSITE_ID = 1\n\n# Because it's on by default\nENABLE_BCD_SIGNAL = True\n\n# Stripe API KEY settings\nSTRIPE_PUBLIC_KEY = \"testing\"\nSTRIPE_SECRET_KEY = \"testing\"\nSTRIPE_PLAN_ID = \"testing\"\n\n# For legacy reasons, the tests assume these are always true so don't\n# let local overrides take effect.\nINDEX_HTML_ATTRIBUTES = True\nINDEX_CSS_CLASSNAMES = True\n\n# Amount for the monthly subscription.\n# It's hardcoded here in case some test depends on the number and it futureproofs\n# our tests to not deviate when the actual number changes since that number\n# change shouldn't affect the tests.\nCONTRIBUTION_AMOUNT_USD = 4.99\n", "path": "kuma/settings/pytest.py"}]} | 2,048 | 250 |
gh_patches_debug_11361 | rasdani/github-patches | git_diff | nvaccess__nvda-9726 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tests/Python 3: update unittest syntax
Hi,
In order to test this, one must be using the threshold_py3_staging branch.
### Steps to reproduce:
1. Checkout latest threshold_py3_staging work.
2. Perform scons tests.
### Actual behavior:
Errors related to syntax in unittest test cases are thrown.
### Expected behavior:
No errors
### System configuration
#### NVDA installed/portable/running from source:
Source
#### NVDA version:
Python 3 source
#### Windows version:
Windows 10 Version 1903 (build 18362)
#### Name and version of other software in use when reproducing the issue:
None
#### Other information about your system:
None
### Other questions
#### Does the issue still occur after restarting your PC?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
Yes - no errors in Python 2 version.
### Additional notes:
Scons tests revealed numerous source code errors, including the removal of itertools.izip and others. I'll take care of the izip issue as part of another pull request.
Thanks.
</issue>
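One concrete example of the breakage mentioned above: `itertools.izip` no longer exists in Python 3, where the built-in `zip()` is already lazy. A hedged illustration of the usual compatibility shim (not taken from the NVDA sources):

```python
# Illustrative only: Python 2's itertools.izip was removed in Python 3.
try:
    from itertools import izip  # Python 2
except ImportError:
    izip = zip  # Python 3: zip() already returns an iterator

print(list(izip([1, 2, 3], "abc")))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```

The unittest failures themselves are only marked with a comment in the patch recorded below, so the actual test updates presumably live elsewhere in the branch.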
<code>
[start of source/extensionPoints/util.py]
1 #util.py
2 #A part of NonVisual Desktop Access (NVDA)
3 #Copyright (C) 2017 NV Access Limited
4 #This file is covered by the GNU General Public License.
5 #See the file COPYING for more details.
6
7 """Utilities used withing the extension points framework. Generally it is expected that the class in __init__.py are
8 used, however for more advanced requirements these utilities can be used directly.
9 """
10 import weakref
11 import collections
12 import inspect
13
14
15 class AnnotatableWeakref(weakref.ref):
16 """A weakref.ref which allows annotation with custom attributes.
17 """
18
19
20 class BoundMethodWeakref(object):
21 """Weakly references a bound instance method.
22 Instance methods are bound dynamically each time they are fetched.
23 weakref.ref on a bound instance method doesn't work because
24 as soon as you drop the reference, the method object dies.
25 Instead, this class holds weak references to both the instance and the function,
26 which can then be used to bind an instance method.
27 To get the actual method, you call an instance as you would a weakref.ref.
28 """
29
30 def __init__(self, target, onDelete):
31 def onRefDelete(weak):
32 """Calls onDelete for our BoundMethodWeakref when one of the individual weakrefs (instance or function) dies.
33 """
34 onDelete(self)
35 inst = target.__self__
36 func = target.__func__
37 self.weakInst = weakref.ref(inst, onRefDelete)
38 self.weakFunc = weakref.ref(func, onRefDelete)
39
40 def __call__(self):
41 inst = self.weakInst()
42 if not inst:
43 return
44 func = self.weakFunc()
45 assert func, "inst is alive but func is dead"
46 # Get an instancemethod by binding func to inst.
47 return func.__get__(inst)
48
49 def _getHandlerKey(handler):
50 """Get a key which identifies a handler function.
51 This is needed because we store weak references, not the actual functions.
52 We store the key on the weak reference.
53 When the handler dies, we can use the key on the weak reference to remove the handler.
54 """
55 inst = getattr(handler, "__self__", None)
56 if inst:
57 return (id(inst), id(handler.__func__))
58 return id(handler)
59
60
61 class HandlerRegistrar(object):
62 """Base class to Facilitate registration and unregistration of handler functions.
63 The handlers are stored using weak references and are automatically unregistered
64 if the handler dies.
65 Both normal functions, instance methods and lambdas are supported. Ensure to keep lambdas alive by maintaining a
66 reference to them.
67 The handlers are maintained in the order they were registered
68 so that they can be called in a deterministic order across runs.
69 This class doesn't provide any functionality to actually call the handlers.
70 If you want to implement an extension point,
71 you probably want the L{Action} or L{Filter} subclasses instead.
72 """
73
74 def __init__(self):
75 #: Registered handler functions.
76 #: This is an OrderedDict where the keys are unique identifiers (as returned by _getHandlerKey)
77 #: and the values are weak references.
78 self._handlers = collections.OrderedDict()
79
80 def register(self, handler):
81 """You can register functions, bound instance methods, class methods, static methods or lambdas.
82 However, the callable must be kept alive by your code otherwise it will be de-registered. This is due to the use
83 of weak references. This is especially relevant when using lambdas.
84 """
85 if hasattr(handler, "__self__"):
86 if not handler.__self__:
87 raise TypeError("Registering unbound instance methods not supported.")
88 weak = BoundMethodWeakref(handler, self.unregister)
89 else:
90 weak = AnnotatableWeakref(handler, self.unregister)
91 key = _getHandlerKey(handler)
92 # Store the key on the weakref so we can remove the handler when it dies.
93 weak.handlerKey = key
94 self._handlers[key] = weak
95
96 def unregister(self, handler):
97 if isinstance(handler, (AnnotatableWeakref, BoundMethodWeakref)):
98 key = handler.handlerKey
99 else:
100 key = _getHandlerKey(handler)
101 try:
102 del self._handlers[key]
103 except KeyError:
104 return False
105 return True
106
107 @property
108 def handlers(self):
109 """Generator of registered handler functions.
110 This should be used when you want to call the handlers.
111 """
112 # #9067 (py3 review required): handlers is an ordered dictionary.
113 # Therefore wrap this inside a list call.
114 for weak in list(self._handlers.values()):
115 handler = weak()
116 if not handler:
117 continue # Died.
118 yield handler
119
120
121 def callWithSupportedKwargs(func, *args, **kwargs):
122 """Call a function with only the keyword arguments it supports.
123 For example, if myFunc is defined as:
124 C{def myFunc(a=None, b=None):}
125 and you call:
126 C{callWithSupportedKwargs(myFunc, a=1, b=2, c=3)}
127 Instead of raising a TypeError, myFunc will simply be called like this:
128 C{myFunc(a=1, b=2)}
129
130 While C{callWithSupportedKwargs} does support positional arguments (C{*args}), usage is strongly discouraged due to the
131 risk of parameter order differences causing bugs.
132
133 @param func: can be any callable that is not an unbound method. EG:
134 - Bound instance methods
135 - class methods
136 - static methods
137 - functions
138 - lambdas
139
140 The arguments for the supplied callable, C{func}, do not need to have default values, and can take C{**kwargs} to
141 capture all arguments.
142 See C{tests/unit/test_extensionPoints.py:TestCallWithSupportedKwargs} for examples.
143
144 An exception is raised if:
145 - the number of positional arguments given can not be received by C{func}.
146 - parameters required (parameters declared with no default value) by C{func} are not supplied.
147 """
148 spec = inspect.getargspec(func)
149
150 # some handlers are instance/class methods, discard "self"/"cls" because it is typically passed implicitly.
151 if inspect.ismethod(func):
152 spec.args.pop(0) # remove "self"/"cls" for instance methods
153 if not hasattr(func, "__self__"):
154 raise TypeError("Unbound instance methods are not handled.")
155
156 # Ensure that the positional args provided by the caller of `callWithSupportedKwargs` actually have a place to go.
157 # Unfortunately, positional args can not be matched on name (keyword) to the names of the params in the handler,
158 # and so calling `callWithSupportedKwargs` is at risk of causing bugs if parameter order differs.
159 numExpectedArgs = len(spec.args)
160 numGivenPositionalArgs = len(args)
161 if numGivenPositionalArgs > numExpectedArgs:
162 raise TypeError("Expected to be able to pass {} positional arguments.".format(numGivenPositionalArgs))
163
164 # Ensure that all arguments without defaults which are expected by the handler were provided.
165 # `defaults` is a tuple of default argument values or None if there are no default arguments;
166 # if this tuple has N elements, they correspond to the last N elements listed in args.
167 numExpectedArgsWithDefaults = len(spec.defaults) if spec.defaults else 0
168 if not spec.defaults or numExpectedArgsWithDefaults != numExpectedArgs:
169 # get the names of the args without defaults, skipping the N positional args given to `callWithSupportedKwargs`
170 # positionals are required for the Filter extension point.
171 # #9067 (Py3 review required): Set construction will work with iterators, too.
172 givenKwargsKeys = set(kwargs.keys())
173 firstArgWithDefault = numExpectedArgs - numExpectedArgsWithDefaults
174 specArgs = set(spec.args[numGivenPositionalArgs:firstArgWithDefault])
175 for arg in specArgs:
176 # and ensure they are in the kwargs list
177 if arg not in givenKwargsKeys:
178 raise TypeError("Parameter required for handler not provided: {}".format(arg))
179
180 if spec.keywords:
181 # func has a catch-all for kwargs (**kwargs) so we do not need to filter to just the supported args.
182 return func(*args, **kwargs)
183
184 supportedKwargs = set(spec.args)
185 # #9067 (Py3 review required): originally called dict.keys.
186 # Therefore wrap this inside a list call.
187 for kwarg in list(kwargs.keys()):
188 if kwarg not in supportedKwargs:
189 del kwargs[kwarg]
190 return func(*args, **kwargs)
[end of source/extensionPoints/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/source/extensionPoints/util.py b/source/extensionPoints/util.py
--- a/source/extensionPoints/util.py
+++ b/source/extensionPoints/util.py
@@ -82,6 +82,7 @@
However, the callable must be kept alive by your code otherwise it will be de-registered. This is due to the use
of weak references. This is especially relevant when using lambdas.
"""
+ # #9720 (Py3 review required): this method causes unittest to fail in Python 3.
if hasattr(handler, "__self__"):
if not handler.__self__:
raise TypeError("Registering unbound instance methods not supported.")
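The added comment flags `register()` without changing its behaviour. A plausible reading of why it trips the unit tests under Python 3 (my interpretation, not stated in the patch): unbound methods are plain functions there, so the branch that raises `TypeError` for unbound instance methods can no longer be reached.

```python
# Illustration of the Python 2 vs. Python 3 difference (not NVDA test code):
class C(object):
    def m(self):
        pass

print(hasattr(C.m, "__self__"))    # Python 2: True (unbound method), Python 3: False (plain function)
print(hasattr(C().m, "__self__"))  # True on both: bound method
```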
| {"golden_diff": "diff --git a/source/extensionPoints/util.py b/source/extensionPoints/util.py\n--- a/source/extensionPoints/util.py\n+++ b/source/extensionPoints/util.py\n@@ -82,6 +82,7 @@\n \t\tHowever, the callable must be kept alive by your code otherwise it will be de-registered. This is due to the use\r\n \t\tof weak references. This is especially relevant when using lambdas.\r\n \t\t\"\"\"\r\n+\t\t# #9720 (Py3 review required): this method causes unittest to fail in Python 3.\r\n \t\tif hasattr(handler, \"__self__\"):\r\n \t\t\tif not handler.__self__:\r\n \t\t\t\traise TypeError(\"Registering unbound instance methods not supported.\")\n", "issue": "Tests/Python 3: update unittest syntax\nHi,\r\n\r\nIn order to test this, one must be using threshold_py3_staging branch.\r\n\r\n### Steps to reproduce:\r\n1. Checkout latest threshold_py3_staging work.\r\n2. Perform scons tests.\r\n\r\n### Actual behavior:\r\nErrors related to syntax in unittest test cases are thrown.\r\n\r\n### Expected behavior:\r\nNo errors\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\nSource\r\n\r\n#### NVDA version:\r\nPython 3 source\r\n\r\n#### Windows version:\r\nWindows 10 Version 1903 (build 18362)\r\n\r\n#### Name and version of other software in use when reproducing the issue:\r\nNone\r\n\r\n#### Other information about your system:\r\nNone\r\n\r\n### Other questions\r\n#### Does the issue still occur after restarting your PC?\r\nYes\r\n\r\n#### Have you tried any other versions of NVDA? If so, please report their behaviors.\r\nYes - no errors in Python 2 version.\r\n\r\n### Additional notes:\r\nScons tests revealed numerous source code errors, including removal of itertools.izip and others. I'll take care of izip issue as part of another pull request.\r\n\r\nThanks.\n", "before_files": [{"content": "#util.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2017 NV Access Limited\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\n\"\"\"Utilities used withing the extension points framework. 
Generally it is expected that the class in __init__.py are\r\nused, however for more advanced requirements these utilities can be used directly.\r\n\"\"\"\r\nimport weakref\r\nimport collections\r\nimport inspect\r\n\r\n\r\nclass AnnotatableWeakref(weakref.ref):\r\n\t\"\"\"A weakref.ref which allows annotation with custom attributes.\r\n\t\"\"\"\r\n\r\n\r\nclass BoundMethodWeakref(object):\r\n\t\"\"\"Weakly references a bound instance method.\r\n\tInstance methods are bound dynamically each time they are fetched.\r\n\tweakref.ref on a bound instance method doesn't work because\r\n\tas soon as you drop the reference, the method object dies.\r\n\tInstead, this class holds weak references to both the instance and the function,\r\n\twhich can then be used to bind an instance method.\r\n\tTo get the actual method, you call an instance as you would a weakref.ref.\r\n\t\"\"\"\r\n\r\n\tdef __init__(self, target, onDelete):\r\n\t\tdef onRefDelete(weak):\r\n\t\t\t\"\"\"Calls onDelete for our BoundMethodWeakref when one of the individual weakrefs (instance or function) dies.\r\n\t\t\t\"\"\"\r\n\t\t\tonDelete(self)\r\n\t\tinst = target.__self__\r\n\t\tfunc = target.__func__\r\n\t\tself.weakInst = weakref.ref(inst, onRefDelete)\r\n\t\tself.weakFunc = weakref.ref(func, onRefDelete)\r\n\r\n\tdef __call__(self):\r\n\t\tinst = self.weakInst()\r\n\t\tif not inst:\r\n\t\t\treturn\r\n\t\tfunc = self.weakFunc()\r\n\t\tassert func, \"inst is alive but func is dead\"\r\n\t\t# Get an instancemethod by binding func to inst.\r\n\t\treturn func.__get__(inst)\r\n\r\ndef _getHandlerKey(handler):\r\n\t\"\"\"Get a key which identifies a handler function.\r\n\tThis is needed because we store weak references, not the actual functions.\r\n\tWe store the key on the weak reference.\r\n\tWhen the handler dies, we can use the key on the weak reference to remove the handler.\r\n\t\"\"\"\r\n\tinst = getattr(handler, \"__self__\", None)\r\n\tif inst:\r\n\t\treturn (id(inst), id(handler.__func__))\r\n\treturn id(handler)\r\n\r\n\r\nclass HandlerRegistrar(object):\r\n\t\"\"\"Base class to Facilitate registration and unregistration of handler functions.\r\n\tThe handlers are stored using weak references and are automatically unregistered\r\n\tif the handler dies.\r\n\tBoth normal functions, instance methods and lambdas are supported. Ensure to keep lambdas alive by maintaining a\r\n\treference to them.\r\n\tThe handlers are maintained in the order they were registered\r\n\tso that they can be called in a deterministic order across runs.\r\n\tThis class doesn't provide any functionality to actually call the handlers.\r\n\tIf you want to implement an extension point,\r\n\tyou probably want the L{Action} or L{Filter} subclasses instead.\r\n\t\"\"\"\r\n\r\n\tdef __init__(self):\r\n\t\t#: Registered handler functions.\r\n\t\t#: This is an OrderedDict where the keys are unique identifiers (as returned by _getHandlerKey)\r\n\t\t#: and the values are weak references.\r\n\t\tself._handlers = collections.OrderedDict()\r\n\r\n\tdef register(self, handler):\r\n\t\t\"\"\"You can register functions, bound instance methods, class methods, static methods or lambdas.\r\n\t\tHowever, the callable must be kept alive by your code otherwise it will be de-registered. This is due to the use\r\n\t\tof weak references. 
This is especially relevant when using lambdas.\r\n\t\t\"\"\"\r\n\t\tif hasattr(handler, \"__self__\"):\r\n\t\t\tif not handler.__self__:\r\n\t\t\t\traise TypeError(\"Registering unbound instance methods not supported.\")\r\n\t\t\tweak = BoundMethodWeakref(handler, self.unregister)\r\n\t\telse:\r\n\t\t\tweak = AnnotatableWeakref(handler, self.unregister)\r\n\t\tkey = _getHandlerKey(handler)\r\n\t\t# Store the key on the weakref so we can remove the handler when it dies.\r\n\t\tweak.handlerKey = key\r\n\t\tself._handlers[key] = weak\r\n\r\n\tdef unregister(self, handler):\r\n\t\tif isinstance(handler, (AnnotatableWeakref, BoundMethodWeakref)):\r\n\t\t\tkey = handler.handlerKey\r\n\t\telse:\r\n\t\t\tkey = _getHandlerKey(handler)\r\n\t\ttry:\r\n\t\t\tdel self._handlers[key]\r\n\t\texcept KeyError:\r\n\t\t\treturn False\r\n\t\treturn True\r\n\r\n\t@property\r\n\tdef handlers(self):\r\n\t\t\"\"\"Generator of registered handler functions.\r\n\t\tThis should be used when you want to call the handlers.\r\n\t\t\"\"\"\r\n\t\t# #9067 (py3 review required): handlers is an ordered dictionary.\r\n\t\t# Therefore wrap this inside a list call.\r\n\t\tfor weak in list(self._handlers.values()):\r\n\t\t\thandler = weak()\r\n\t\t\tif not handler:\r\n\t\t\t\tcontinue # Died.\r\n\t\t\tyield handler\r\n\r\n\r\ndef callWithSupportedKwargs(func, *args, **kwargs):\r\n\t\"\"\"Call a function with only the keyword arguments it supports.\r\n\tFor example, if myFunc is defined as:\r\n\tC{def myFunc(a=None, b=None):}\r\n\tand you call:\r\n\tC{callWithSupportedKwargs(myFunc, a=1, b=2, c=3)}\r\n\tInstead of raising a TypeError, myFunc will simply be called like this:\r\n\tC{myFunc(a=1, b=2)}\r\n\r\n\tWhile C{callWithSupportedKwargs} does support positional arguments (C{*args}), usage is strongly discouraged due to the\r\n\trisk of parameter order differences causing bugs.\r\n\r\n\t@param func: can be any callable that is not an unbound method. 
EG:\r\n\t\t- Bound instance methods\r\n\t\t- class methods\r\n\t\t- static methods\r\n\t\t- functions\r\n\t\t- lambdas\r\n\r\n\t\tThe arguments for the supplied callable, C{func}, do not need to have default values, and can take C{**kwargs} to\r\n\t\tcapture all arguments.\r\n\t\tSee C{tests/unit/test_extensionPoints.py:TestCallWithSupportedKwargs} for examples.\r\n\r\n\t\tAn exception is raised if:\r\n\t\t\t- the number of positional arguments given can not be received by C{func}.\r\n\t\t\t- parameters required (parameters declared with no default value) by C{func} are not supplied.\r\n\t\"\"\"\r\n\tspec = inspect.getargspec(func)\r\n\r\n\t# some handlers are instance/class methods, discard \"self\"/\"cls\" because it is typically passed implicitly.\r\n\tif inspect.ismethod(func):\r\n\t\tspec.args.pop(0) # remove \"self\"/\"cls\" for instance methods\r\n\t\tif not hasattr(func, \"__self__\"):\r\n\t\t\traise TypeError(\"Unbound instance methods are not handled.\")\r\n\r\n\t# Ensure that the positional args provided by the caller of `callWithSupportedKwargs` actually have a place to go.\r\n\t# Unfortunately, positional args can not be matched on name (keyword) to the names of the params in the handler,\r\n\t# and so calling `callWithSupportedKwargs` is at risk of causing bugs if parameter order differs.\r\n\tnumExpectedArgs = len(spec.args)\r\n\tnumGivenPositionalArgs = len(args)\r\n\tif numGivenPositionalArgs > numExpectedArgs:\r\n\t\traise TypeError(\"Expected to be able to pass {} positional arguments.\".format(numGivenPositionalArgs))\r\n\r\n\t# Ensure that all arguments without defaults which are expected by the handler were provided.\r\n\t# `defaults` is a tuple of default argument values or None if there are no default arguments;\r\n\t# if this tuple has N elements, they correspond to the last N elements listed in args.\r\n\tnumExpectedArgsWithDefaults = len(spec.defaults) if spec.defaults else 0\r\n\tif not spec.defaults or numExpectedArgsWithDefaults != numExpectedArgs:\r\n\t\t# get the names of the args without defaults, skipping the N positional args given to `callWithSupportedKwargs`\r\n\t\t# positionals are required for the Filter extension point.\r\n\t\t# #9067 (Py3 review required): Set construction will work with iterators, too.\r\n\t\tgivenKwargsKeys = set(kwargs.keys())\r\n\t\tfirstArgWithDefault = numExpectedArgs - numExpectedArgsWithDefaults\r\n\t\tspecArgs = set(spec.args[numGivenPositionalArgs:firstArgWithDefault])\r\n\t\tfor arg in specArgs:\r\n\t\t\t# and ensure they are in the kwargs list\r\n\t\t\tif arg not in givenKwargsKeys:\r\n\t\t\t\traise TypeError(\"Parameter required for handler not provided: {}\".format(arg))\r\n\r\n\tif spec.keywords:\r\n\t\t# func has a catch-all for kwargs (**kwargs) so we do not need to filter to just the supported args.\r\n\t\treturn func(*args, **kwargs)\r\n\r\n\tsupportedKwargs = set(spec.args)\r\n\t# #9067 (Py3 review required): originally called dict.keys.\r\n\t# Therefore wrap this inside a list call.\r\n\tfor kwarg in list(kwargs.keys()):\r\n\t\tif kwarg not in supportedKwargs:\r\n\t\t\tdel kwargs[kwarg]\r\n\treturn func(*args, **kwargs)", "path": "source/extensionPoints/util.py"}]} | 3,184 | 149 |
gh_patches_debug_41693 | rasdani/github-patches | git_diff | getpelican__pelican-2480 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ComplexHTTPRequestHandler is doing excessive amount of work for non-complex cases
After spinning up a Pelican server with #2324 merged, every static request prints the following in the log:
```
-> Tried to find `/static/ship.jpg.html`, but it doesn't exist.
-> Tried to find `/static/ship.jpg/index.html`, but it doesn't exist.
-> Tried to find `/static/ship.jpg/`, but it doesn't exist.
-> Found `/static/ship.jpg`.
127.0.0.1 - - [16/Nov/2018 19:16:29] "GET /static/ship.jpg HTTP/1.1" 200 -
```
And in case of pages it's like this:
```
-> Tried to find `/css/components/test.html`, but it doesn't exist.
-> Found `/css/components/test/index.html`.
127.0.0.1 - - [16/Nov/2018 19:16:32] "GET /css/components/test/ HTTP/1.1" 200 -
```
To me that seems quite excessive, since the `GET` request already contains the correct path, so trying that first (as it was *before* #2324) makes the most sense (to me at least). In other words, I have the layout done in a way that doesn't require any complex handling -- the files are just where a simple `GET` expects them to be.
Seeing the number of PRs submitted just for this alone, I realize this is a minefield and every fix breaks someone else's workflow. Since I didn't find a clear reason in the PR description, my question is, @oulenz (since you submitted the aforementioned PR), what exactly was the reason for not trying the `''` suffix first (the ideal case, basically), and what would break if it did try that first?
(Looks like this is my first issue for this project ever, interesting.)
</issue>
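The reporter's suggestion amounts to checking the literal request path before probing any suffixes. A sketch of that idea against the handler shown below (this is only the reporter's proposal, not necessarily what the project merged; the patch recorded in this entry reworks the logging instead):

```python
# Sketch only: short-circuit get_path_that_exists() when the requested path already exists.
def get_path_that_exists(self, original_path):
    if os.path.exists(self.translate_path(original_path)):
        return original_path  # e.g. /static/ship.jpg is served without probing .html variants
    # Fall back to the existing suffix search for pretty URLs such as /css/components/test/
    original_path = original_path.rstrip('/')
    for suffix in self.SUFFIXES:
        path = original_path + suffix
        if os.path.exists(self.translate_path(path)):
            return path
    return None
```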
<code>
[start of pelican/server.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import print_function, unicode_literals
3
4 import argparse
5 import logging
6 import os
7 import posixpath
8 import ssl
9 import sys
10
11 try:
12 from magic import from_file as magic_from_file
13 except ImportError:
14 magic_from_file = None
15
16 from six.moves import BaseHTTPServer
17 from six.moves import SimpleHTTPServer as srvmod
18 from six.moves import urllib
19
20
21 def parse_arguments():
22 parser = argparse.ArgumentParser(
23 description='Pelican Development Server',
24 formatter_class=argparse.ArgumentDefaultsHelpFormatter
25 )
26 parser.add_argument("port", default=8000, type=int, nargs="?",
27 help="Port to Listen On")
28 parser.add_argument("server", default="", nargs="?",
29 help="Interface to Listen On")
30 parser.add_argument('--ssl', action="store_true",
31 help='Activate SSL listener')
32 parser.add_argument('--cert', default="./cert.pem", nargs="?",
33 help='Path to certificate file. ' +
34 'Relative to current directory')
35 parser.add_argument('--key', default="./key.pem", nargs="?",
36 help='Path to certificate key file. ' +
37 'Relative to current directory')
38 parser.add_argument('--path', default=".",
39 help='Path to pelican source directory to serve. ' +
40 'Relative to current directory')
41 return parser.parse_args()
42
43
44 class ComplexHTTPRequestHandler(srvmod.SimpleHTTPRequestHandler):
45 SUFFIXES = ['.html', '/index.html', '/', '']
46
47 def translate_path(self, path):
48 # abandon query parameters
49 path = path.split('?', 1)[0]
50 path = path.split('#', 1)[0]
51 # Don't forget explicit trailing slash when normalizing. Issue17324
52 trailing_slash = path.rstrip().endswith('/')
53 path = urllib.parse.unquote(path)
54 path = posixpath.normpath(path)
55 words = path.split('/')
56 words = filter(None, words)
57 path = self.base_path
58 for word in words:
59 if os.path.dirname(word) or word in (os.curdir, os.pardir):
60 # Ignore components that are not a simple file/directory name
61 continue
62 path = os.path.join(path, word)
63 if trailing_slash:
64 path += '/'
65 return path
66
67 def do_GET(self):
68 # cut off a query string
69 original_path = self.path.split('?', 1)[0]
70 # try to find file
71 self.path = self.get_path_that_exists(original_path)
72
73 if not self.path:
74 logging.warning("Unable to find `%s` or variations.",
75 original_path)
76 return
77
78 logging.info("Found `%s`.", self.path)
79 srvmod.SimpleHTTPRequestHandler.do_GET(self)
80
81 def get_path_that_exists(self, original_path):
82 # Try to strip trailing slash
83 original_path = original_path.rstrip('/')
84 # Try to detect file by applying various suffixes
85 for suffix in self.SUFFIXES:
86 path = original_path + suffix
87 if os.path.exists(self.translate_path(path)):
88 return path
89 logging.info("Tried to find `%s`, but it doesn't exist.", path)
90 return None
91
92 def guess_type(self, path):
93 """Guess at the mime type for the specified file.
94 """
95 mimetype = srvmod.SimpleHTTPRequestHandler.guess_type(self, path)
96
97 # If the default guess is too generic, try the python-magic library
98 if mimetype == 'application/octet-stream' and magic_from_file:
99 mimetype = magic_from_file(path, mime=True)
100
101 return mimetype
102
103
104 class RootedHTTPServer(BaseHTTPServer.HTTPServer):
105 def __init__(self, base_path, *args, **kwargs):
106 BaseHTTPServer.HTTPServer.__init__(self, *args, **kwargs)
107 self.RequestHandlerClass.base_path = base_path
108
109
110 if __name__ == '__main__':
111 logging.warning("'python -m pelican.server' is deprecated. The "
112 "Pelican development server should be run via "
113 "'pelican --listen' or 'pelican -l' (this can be combined "
114 "with regeneration as 'pelican -lr'). Rerun 'pelican-"
115 "quickstart' to get new Makefile and tasks.py files.")
116 args = parse_arguments()
117 RootedHTTPServer.allow_reuse_address = True
118 try:
119 httpd = RootedHTTPServer(
120 args.path, (args.server, args.port), ComplexHTTPRequestHandler)
121 if args.ssl:
122 httpd.socket = ssl.wrap_socket(
123 httpd.socket, keyfile=args.key,
124 certfile=args.cert, server_side=True)
125 except ssl.SSLError as e:
126 logging.error("Couldn't open certificate file %s or key file %s",
127 args.cert, args.key)
128 logging.error("Could not listen on port %s, server %s.",
129 args.port, args.server)
130 sys.exit(getattr(e, 'exitcode', 1))
131
132 logging.info("Serving at port %s, server %s.",
133 args.port, args.server)
134 try:
135 httpd.serve_forever()
136 except KeyboardInterrupt as e:
137 logging.info("Shutting down server.")
138 httpd.socket.close()
139
[end of pelican/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pelican/server.py b/pelican/server.py
--- a/pelican/server.py
+++ b/pelican/server.py
@@ -17,6 +17,9 @@
from six.moves import SimpleHTTPServer as srvmod
from six.moves import urllib
+from pelican.log import init as init_logging
+logger = logging.getLogger(__name__)
+
def parse_arguments():
parser = argparse.ArgumentParser(
@@ -71,22 +74,23 @@
self.path = self.get_path_that_exists(original_path)
if not self.path:
- logging.warning("Unable to find `%s` or variations.",
- original_path)
return
- logging.info("Found `%s`.", self.path)
srvmod.SimpleHTTPRequestHandler.do_GET(self)
def get_path_that_exists(self, original_path):
# Try to strip trailing slash
original_path = original_path.rstrip('/')
# Try to detect file by applying various suffixes
+ tries = []
for suffix in self.SUFFIXES:
path = original_path + suffix
if os.path.exists(self.translate_path(path)):
return path
- logging.info("Tried to find `%s`, but it doesn't exist.", path)
+ tries.append(path)
+ logger.warning("Unable to find `%s` or variations:\n%s",
+ original_path,
+ '\n'.join(tries))
return None
def guess_type(self, path):
@@ -108,11 +112,12 @@
if __name__ == '__main__':
- logging.warning("'python -m pelican.server' is deprecated. The "
- "Pelican development server should be run via "
- "'pelican --listen' or 'pelican -l' (this can be combined "
- "with regeneration as 'pelican -lr'). Rerun 'pelican-"
- "quickstart' to get new Makefile and tasks.py files.")
+ init_logging(level=logging.INFO)
+ logger.warning("'python -m pelican.server' is deprecated.\nThe "
+ "Pelican development server should be run via "
+ "'pelican --listen' or 'pelican -l'.\nThis can be combined "
+ "with regeneration as 'pelican -lr'.\nRerun 'pelican-"
+ "quickstart' to get new Makefile and tasks.py files.")
args = parse_arguments()
RootedHTTPServer.allow_reuse_address = True
try:
@@ -123,16 +128,16 @@
httpd.socket, keyfile=args.key,
certfile=args.cert, server_side=True)
except ssl.SSLError as e:
- logging.error("Couldn't open certificate file %s or key file %s",
- args.cert, args.key)
- logging.error("Could not listen on port %s, server %s.",
- args.port, args.server)
+ logger.error("Couldn't open certificate file %s or key file %s",
+ args.cert, args.key)
+ logger.error("Could not listen on port %s, server %s.",
+ args.port, args.server)
sys.exit(getattr(e, 'exitcode', 1))
- logging.info("Serving at port %s, server %s.",
- args.port, args.server)
+ logger.info("Serving at port %s, server %s.",
+ args.port, args.server)
try:
httpd.serve_forever()
except KeyboardInterrupt as e:
- logging.info("Shutting down server.")
+ logger.info("Shutting down server.")
httpd.socket.close()
| {"golden_diff": "diff --git a/pelican/server.py b/pelican/server.py\n--- a/pelican/server.py\n+++ b/pelican/server.py\n@@ -17,6 +17,9 @@\n from six.moves import SimpleHTTPServer as srvmod\n from six.moves import urllib\n \n+from pelican.log import init as init_logging\n+logger = logging.getLogger(__name__)\n+\n \n def parse_arguments():\n parser = argparse.ArgumentParser(\n@@ -71,22 +74,23 @@\n self.path = self.get_path_that_exists(original_path)\n \n if not self.path:\n- logging.warning(\"Unable to find `%s` or variations.\",\n- original_path)\n return\n \n- logging.info(\"Found `%s`.\", self.path)\n srvmod.SimpleHTTPRequestHandler.do_GET(self)\n \n def get_path_that_exists(self, original_path):\n # Try to strip trailing slash\n original_path = original_path.rstrip('/')\n # Try to detect file by applying various suffixes\n+ tries = []\n for suffix in self.SUFFIXES:\n path = original_path + suffix\n if os.path.exists(self.translate_path(path)):\n return path\n- logging.info(\"Tried to find `%s`, but it doesn't exist.\", path)\n+ tries.append(path)\n+ logger.warning(\"Unable to find `%s` or variations:\\n%s\",\n+ original_path,\n+ '\\n'.join(tries))\n return None\n \n def guess_type(self, path):\n@@ -108,11 +112,12 @@\n \n \n if __name__ == '__main__':\n- logging.warning(\"'python -m pelican.server' is deprecated. The \"\n- \"Pelican development server should be run via \"\n- \"'pelican --listen' or 'pelican -l' (this can be combined \"\n- \"with regeneration as 'pelican -lr'). Rerun 'pelican-\"\n- \"quickstart' to get new Makefile and tasks.py files.\")\n+ init_logging(level=logging.INFO)\n+ logger.warning(\"'python -m pelican.server' is deprecated.\\nThe \"\n+ \"Pelican development server should be run via \"\n+ \"'pelican --listen' or 'pelican -l'.\\nThis can be combined \"\n+ \"with regeneration as 'pelican -lr'.\\nRerun 'pelican-\"\n+ \"quickstart' to get new Makefile and tasks.py files.\")\n args = parse_arguments()\n RootedHTTPServer.allow_reuse_address = True\n try:\n@@ -123,16 +128,16 @@\n httpd.socket, keyfile=args.key,\n certfile=args.cert, server_side=True)\n except ssl.SSLError as e:\n- logging.error(\"Couldn't open certificate file %s or key file %s\",\n- args.cert, args.key)\n- logging.error(\"Could not listen on port %s, server %s.\",\n- args.port, args.server)\n+ logger.error(\"Couldn't open certificate file %s or key file %s\",\n+ args.cert, args.key)\n+ logger.error(\"Could not listen on port %s, server %s.\",\n+ args.port, args.server)\n sys.exit(getattr(e, 'exitcode', 1))\n \n- logging.info(\"Serving at port %s, server %s.\",\n- args.port, args.server)\n+ logger.info(\"Serving at port %s, server %s.\",\n+ args.port, args.server)\n try:\n httpd.serve_forever()\n except KeyboardInterrupt as e:\n- logging.info(\"Shutting down server.\")\n+ logger.info(\"Shutting down server.\")\n httpd.socket.close()\n", "issue": "ComplexHTTPRequestHandler is doing excessive amount of work for non-complex cases\nSpinning up a Pelican server after #2324 was merged, every static request prints the following in the log:\r\n\r\n```\r\n-> Tried to find `/static/ship.jpg.html`, but it doesn't exist.\r\n-> Tried to find `/static/ship.jpg/index.html`, but it doesn't exist.\r\n-> Tried to find `/static/ship.jpg/`, but it doesn't exist.\r\n-> Found `/static/ship.jpg`.\r\n127.0.0.1 - - [16/Nov/2018 19:16:29] \"GET /static/ship.jpg HTTP/1.1\" 200 -\r\n```\r\n\r\nAnd in case of pages it's like this:\r\n\r\n```\r\n-> Tried to find `/css/components/test.html`, but it doesn't exist.\r\n-> Found 
`/css/components/test/index.html`.\r\n127.0.0.1 - - [16/Nov/2018 19:16:32] \"GET /css/components/test/ HTTP/1.1\" 200 -\r\n```\r\n\r\nTo me that seems quite excessive, since the `GET` request is already containing the correct path and so trying that first (as it was *before* #2324) makes the most sense (to me at least). In other words, I have the layout done in a way that doesn't require any complex handling -- the files are just where a simple `GET` expects them to be.\r\n\r\nSeeing the amount of PRs submitted just for this alone I realize this is a mine-field and every fix breaks someone's else workflow. Since I didn't find a clear reason in the PR descirption, my question is, @oulenz (since you submitted the aforementioned PR), what exactly was the reason to not try the `''` suffix first (the ideal case, basically) and what would break if it did try that first?\r\n\r\n(Looks like this is my first issue for this project ever, interesting.)\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function, unicode_literals\n\nimport argparse\nimport logging\nimport os\nimport posixpath\nimport ssl\nimport sys\n\ntry:\n from magic import from_file as magic_from_file\nexcept ImportError:\n magic_from_file = None\n\nfrom six.moves import BaseHTTPServer\nfrom six.moves import SimpleHTTPServer as srvmod\nfrom six.moves import urllib\n\n\ndef parse_arguments():\n parser = argparse.ArgumentParser(\n description='Pelican Development Server',\n formatter_class=argparse.ArgumentDefaultsHelpFormatter\n )\n parser.add_argument(\"port\", default=8000, type=int, nargs=\"?\",\n help=\"Port to Listen On\")\n parser.add_argument(\"server\", default=\"\", nargs=\"?\",\n help=\"Interface to Listen On\")\n parser.add_argument('--ssl', action=\"store_true\",\n help='Activate SSL listener')\n parser.add_argument('--cert', default=\"./cert.pem\", nargs=\"?\",\n help='Path to certificate file. ' +\n 'Relative to current directory')\n parser.add_argument('--key', default=\"./key.pem\", nargs=\"?\",\n help='Path to certificate key file. ' +\n 'Relative to current directory')\n parser.add_argument('--path', default=\".\",\n help='Path to pelican source directory to serve. ' +\n 'Relative to current directory')\n return parser.parse_args()\n\n\nclass ComplexHTTPRequestHandler(srvmod.SimpleHTTPRequestHandler):\n SUFFIXES = ['.html', '/index.html', '/', '']\n\n def translate_path(self, path):\n # abandon query parameters\n path = path.split('?', 1)[0]\n path = path.split('#', 1)[0]\n # Don't forget explicit trailing slash when normalizing. 
Issue17324\n trailing_slash = path.rstrip().endswith('/')\n path = urllib.parse.unquote(path)\n path = posixpath.normpath(path)\n words = path.split('/')\n words = filter(None, words)\n path = self.base_path\n for word in words:\n if os.path.dirname(word) or word in (os.curdir, os.pardir):\n # Ignore components that are not a simple file/directory name\n continue\n path = os.path.join(path, word)\n if trailing_slash:\n path += '/'\n return path\n\n def do_GET(self):\n # cut off a query string\n original_path = self.path.split('?', 1)[0]\n # try to find file\n self.path = self.get_path_that_exists(original_path)\n\n if not self.path:\n logging.warning(\"Unable to find `%s` or variations.\",\n original_path)\n return\n\n logging.info(\"Found `%s`.\", self.path)\n srvmod.SimpleHTTPRequestHandler.do_GET(self)\n\n def get_path_that_exists(self, original_path):\n # Try to strip trailing slash\n original_path = original_path.rstrip('/')\n # Try to detect file by applying various suffixes\n for suffix in self.SUFFIXES:\n path = original_path + suffix\n if os.path.exists(self.translate_path(path)):\n return path\n logging.info(\"Tried to find `%s`, but it doesn't exist.\", path)\n return None\n\n def guess_type(self, path):\n \"\"\"Guess at the mime type for the specified file.\n \"\"\"\n mimetype = srvmod.SimpleHTTPRequestHandler.guess_type(self, path)\n\n # If the default guess is too generic, try the python-magic library\n if mimetype == 'application/octet-stream' and magic_from_file:\n mimetype = magic_from_file(path, mime=True)\n\n return mimetype\n\n\nclass RootedHTTPServer(BaseHTTPServer.HTTPServer):\n def __init__(self, base_path, *args, **kwargs):\n BaseHTTPServer.HTTPServer.__init__(self, *args, **kwargs)\n self.RequestHandlerClass.base_path = base_path\n\n\nif __name__ == '__main__':\n logging.warning(\"'python -m pelican.server' is deprecated. The \"\n \"Pelican development server should be run via \"\n \"'pelican --listen' or 'pelican -l' (this can be combined \"\n \"with regeneration as 'pelican -lr'). Rerun 'pelican-\"\n \"quickstart' to get new Makefile and tasks.py files.\")\n args = parse_arguments()\n RootedHTTPServer.allow_reuse_address = True\n try:\n httpd = RootedHTTPServer(\n args.path, (args.server, args.port), ComplexHTTPRequestHandler)\n if args.ssl:\n httpd.socket = ssl.wrap_socket(\n httpd.socket, keyfile=args.key,\n certfile=args.cert, server_side=True)\n except ssl.SSLError as e:\n logging.error(\"Couldn't open certificate file %s or key file %s\",\n args.cert, args.key)\n logging.error(\"Could not listen on port %s, server %s.\",\n args.port, args.server)\n sys.exit(getattr(e, 'exitcode', 1))\n\n logging.info(\"Serving at port %s, server %s.\",\n args.port, args.server)\n try:\n httpd.serve_forever()\n except KeyboardInterrupt as e:\n logging.info(\"Shutting down server.\")\n httpd.socket.close()\n", "path": "pelican/server.py"}]} | 2,400 | 797 |
gh_patches_debug_26536 | rasdani/github-patches | git_diff | getsentry__sentry-53560 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test `_archived_row` in sentry/replays/post_process.py to make sure that all requested fields are returned, no more KeyError
We saw this "KeyError" in the backend recently: https://sentry.sentry.io/issues/4326971348/?alert_rule_id=14460873&alert_type=issue&project=1&referrer=slack
> KeyError /api/0/organizations/{organization_slug}/replays/ 'count_dead_clicks'
The problem is that we are asking for more fields to be returned in the list query, but when a deleted row exists we call `_archived_row()` and return that object instead. That object doesn't really care about what fields the api client was asking for, therefore we got a key error.
We did a quick fix to update the server so it matches what we're asking for: https://github.com/getsentry/sentry/pull/53225/files
Updating `_archived_row` like this is hard to maintain.
There already exists a `VALID_FIELD_SET` inside ReplayValidator.py. We could write a test to guarantee that `_archived_row` knows about all fields, if a new field is added _archived_row will be updated to match.
Alternatively, we could change the implementation of _archived_row so that it returns an object which implements the `__getitem__()` method, so it returns None for any field requested. Then we don't need to update `_archived_row` each time a new field is added to VALID_FIELD_SET
https://stackoverflow.com/questions/43627405/understanding-getitem-method-in-python
</issue>
<code>
[start of src/sentry/replays/post_process.py]
1 from __future__ import annotations
2
3 import collections
4 from itertools import zip_longest
5 from typing import Any, Generator, Iterable, Iterator, MutableMapping
6
7
8 def process_raw_response(response: list[dict[str, Any]], fields: list[str]) -> list[dict[str, Any]]:
9 """Process the response further into the expected output."""
10 return list(generate_restricted_fieldset(fields, generate_normalized_output(response)))
11
12
13 def generate_restricted_fieldset(
14 fields: list[str] | None,
15 response: Generator[dict[str, Any], None, None],
16 ) -> Iterator[dict[str, Any]]:
17
18 """Return only the fields requested by the client."""
19 if fields:
20 for item in response:
21 yield {field: item[field] for field in fields}
22 else:
23 yield from response
24
25
26 def _strip_dashes(field: str) -> str:
27 if field:
28 return field.replace("-", "")
29 return field
30
31
32 def generate_normalized_output(
33 response: list[dict[str, Any]]
34 ) -> Generator[dict[str, Any], None, None]:
35 """For each payload in the response strip "agg_" prefixes."""
36 for item in response:
37 if item["isArchived"]:
38 yield _archived_row(item["replay_id"], item["project_id"])
39 continue
40
41 item["id"] = _strip_dashes(item.pop("replay_id", None))
42 item["project_id"] = str(item["project_id"])
43 item["trace_ids"] = item.pop("traceIds", [])
44 item["error_ids"] = item.pop("errorIds", [])
45 item["environment"] = item.pop("agg_environment", None)
46 item["tags"] = dict_unique_list(
47 zip(
48 item.pop("tk", None) or [],
49 item.pop("tv", None) or [],
50 )
51 )
52 item["user"] = {
53 "id": item.pop("user_id", None),
54 "username": item.pop("user_username", None),
55 "email": item.pop("user_email", None),
56 "ip": item.pop("user_ip", None),
57 }
58 item["user"]["display_name"] = (
59 item["user"]["username"]
60 or item["user"]["email"]
61 or item["user"]["id"]
62 or item["user"]["ip"]
63 )
64 item["sdk"] = {
65 "name": item.pop("sdk_name", None),
66 "version": item.pop("sdk_version", None),
67 }
68 item["os"] = {
69 "name": item.pop("os_name", None),
70 "version": item.pop("os_version", None),
71 }
72 item["browser"] = {
73 "name": item.pop("browser_name", None),
74 "version": item.pop("browser_version", None),
75 }
76 item["device"] = {
77 "name": item.pop("device_name", None),
78 "brand": item.pop("device_brand", None),
79 "model": item.pop("device_model", None),
80 "family": item.pop("device_family", None),
81 }
82
83 item.pop("agg_urls", None)
84 item["urls"] = item.pop("urls_sorted", None)
85
86 item["is_archived"] = bool(item.pop("isArchived", 0))
87
88 item.pop("clickClass", None)
89 item.pop("click_selector", None)
90 # don't need clickClass or click_selector
91 # for the click field, as they are only used for searching.
92 # (click.classes contains the full list of classes for a click)
93 item["clicks"] = extract_click_fields(item)
94
95 yield item
96
97
98 def generate_sorted_urls(url_groups: list[tuple[int, list[str]]]) -> Iterator[str]:
99 """Return a flat list of ordered urls."""
100 for _, url_group in sorted(url_groups, key=lambda item: item[0]):
101 yield from url_group
102
103
104 def dict_unique_list(items: Iterable[tuple[str, str]]) -> dict[str, list[str]]:
105 """Populate a dictionary with the first key, value pair seen.
106
107 There is a potential for duplicate keys to exist in the result set. When we filter these keys
108 in Clickhouse we will filter by the first value so we ignore subsequent updates to the key.
109 This function ensures what is displayed matches what was filtered.
110 """
111 unique = collections.defaultdict(set)
112 for key, value in items:
113 unique[key].add(value)
114
115 return {key: list(value_set) for key, value_set in unique.items()}
116
117
118 def _archived_row(replay_id: str, project_id: int) -> dict[str, Any]:
119 return {
120 "id": _strip_dashes(replay_id),
121 "project_id": str(project_id),
122 "trace_ids": [],
123 "error_ids": [],
124 "environment": None,
125 "tags": [],
126 "user": {"id": "Archived Replay", "display_name": "Archived Replay"},
127 "sdk": {"name": None, "version": None},
128 "os": {"name": None, "version": None},
129 "browser": {"name": None, "version": None},
130 "device": {"name": None, "brand": None, "model": None, "family": None},
131 "urls": None,
132 "activity": None,
133 "count_dead_clicks": None,
134 "count_rage_clicks": None,
135 "count_errors": None,
136 "duration": None,
137 "finished_at": None,
138 "started_at": None,
139 "is_archived": True,
140 }
141
142
143 CLICK_FIELD_MAP = {
144 "click_alt": "click.alt",
145 "click_aria_label": "click.label",
146 "click_classes": "click.classes",
147 "click_id": "click.id",
148 "click_role": "click.role",
149 "click_tag": "click.tag",
150 "click_testid": "click.testid",
151 "click_text": "click.text",
152 "click_title": "click.title",
153 }
154
155
156 def extract_click_fields(item: MutableMapping[str, Any]) -> list[dict[str, Any]]:
157 """
158 pops all of the click fields from the item and returns a list of the individual clicks as objects
159 """
160 click_dict = {}
161 for click_field in CLICK_FIELD_MAP.keys():
162 click_val = item.pop(click_field, [])
163 # if there is at least one one element, the list will be filled empty strings for the non-click segments
164 # so if there is at least one value, return a list of the truthy values.
165 # if not, return a list with a single None value
166 # also map the clickhouse field values to their query names
167 if click_val:
168 click_dict[CLICK_FIELD_MAP[click_field]] = [element for element in click_val if element]
169 else:
170 click_dict[CLICK_FIELD_MAP[click_field]] = click_val
171
172 list_of_dicts = [dict(zip(click_dict.keys(), row)) for row in zip_longest(*click_dict.values())]
173 return list_of_dicts
174
[end of src/sentry/replays/post_process.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/replays/post_process.py b/src/sentry/replays/post_process.py
--- a/src/sentry/replays/post_process.py
+++ b/src/sentry/replays/post_process.py
@@ -4,6 +4,8 @@
from itertools import zip_longest
from typing import Any, Generator, Iterable, Iterator, MutableMapping
+from sentry.replays.validators import VALID_FIELD_SET
+
def process_raw_response(response: list[dict[str, Any]], fields: list[str]) -> list[dict[str, Any]]:
"""Process the response further into the expected output."""
@@ -116,7 +118,7 @@
def _archived_row(replay_id: str, project_id: int) -> dict[str, Any]:
- return {
+ archived_replay_response = {
"id": _strip_dashes(replay_id),
"project_id": str(project_id),
"trace_ids": [],
@@ -137,7 +139,18 @@
"finished_at": None,
"started_at": None,
"is_archived": True,
+ "count_segments": None,
+ "count_urls": None,
+ "dist": None,
+ "platform": None,
+ "releases": None,
+ "clicks": None,
}
+ for field in VALID_FIELD_SET:
+ if field not in archived_replay_response:
+ archived_replay_response[field] = None
+
+ return archived_replay_response
CLICK_FIELD_MAP = {
| {"golden_diff": "diff --git a/src/sentry/replays/post_process.py b/src/sentry/replays/post_process.py\n--- a/src/sentry/replays/post_process.py\n+++ b/src/sentry/replays/post_process.py\n@@ -4,6 +4,8 @@\n from itertools import zip_longest\n from typing import Any, Generator, Iterable, Iterator, MutableMapping\n \n+from sentry.replays.validators import VALID_FIELD_SET\n+\n \n def process_raw_response(response: list[dict[str, Any]], fields: list[str]) -> list[dict[str, Any]]:\n \"\"\"Process the response further into the expected output.\"\"\"\n@@ -116,7 +118,7 @@\n \n \n def _archived_row(replay_id: str, project_id: int) -> dict[str, Any]:\n- return {\n+ archived_replay_response = {\n \"id\": _strip_dashes(replay_id),\n \"project_id\": str(project_id),\n \"trace_ids\": [],\n@@ -137,7 +139,18 @@\n \"finished_at\": None,\n \"started_at\": None,\n \"is_archived\": True,\n+ \"count_segments\": None,\n+ \"count_urls\": None,\n+ \"dist\": None,\n+ \"platform\": None,\n+ \"releases\": None,\n+ \"clicks\": None,\n }\n+ for field in VALID_FIELD_SET:\n+ if field not in archived_replay_response:\n+ archived_replay_response[field] = None\n+\n+ return archived_replay_response\n \n \n CLICK_FIELD_MAP = {\n", "issue": "Test `_archived_row` in sentry/replays/post_process.py to make sure that all requested fields are returned, no more KeyError\nWe saw this \"KeyError\" in the backend recently: https://sentry.sentry.io/issues/4326971348/?alert_rule_id=14460873&alert_type=issue&project=1&referrer=slack\r\n\r\n> KeyError /api/0/organizations/{organization_slug}/replays/ 'count_dead_clicks'\r\n\r\nThe problem is that we are asking for more fields to be returned in the list query, but when a deleted row exists we call `_archived_row()` and return that object instead. That object doesn't really care about what fields the api client was asking for, therefore we got a key error.\r\nWe did a quick fix to update the server so it matches what we're asking for: https://github.com/getsentry/sentry/pull/53225/files\r\n\r\nUpdating `_archived_row` like this is hard to maintain. \r\nThere already exists a `VALID_FIELD_SET` inside ReplayValidator.py. We could write a test to guarantee that `_archived_row` knows about all fields, if a new field is added _archived_row will be updated to match.\r\n\r\nAlternatively, we could change the implementation of _archived_row so that it returns an object which implements the `__getitem__()` method, so it returns None for any field requested. 
Then we don't need to update `_archived_row` each time a new field is added to VALID_FIELD_SET\r\nhttps://stackoverflow.com/questions/43627405/understanding-getitem-method-in-python\n", "before_files": [{"content": "from __future__ import annotations\n\nimport collections\nfrom itertools import zip_longest\nfrom typing import Any, Generator, Iterable, Iterator, MutableMapping\n\n\ndef process_raw_response(response: list[dict[str, Any]], fields: list[str]) -> list[dict[str, Any]]:\n \"\"\"Process the response further into the expected output.\"\"\"\n return list(generate_restricted_fieldset(fields, generate_normalized_output(response)))\n\n\ndef generate_restricted_fieldset(\n fields: list[str] | None,\n response: Generator[dict[str, Any], None, None],\n) -> Iterator[dict[str, Any]]:\n\n \"\"\"Return only the fields requested by the client.\"\"\"\n if fields:\n for item in response:\n yield {field: item[field] for field in fields}\n else:\n yield from response\n\n\ndef _strip_dashes(field: str) -> str:\n if field:\n return field.replace(\"-\", \"\")\n return field\n\n\ndef generate_normalized_output(\n response: list[dict[str, Any]]\n) -> Generator[dict[str, Any], None, None]:\n \"\"\"For each payload in the response strip \"agg_\" prefixes.\"\"\"\n for item in response:\n if item[\"isArchived\"]:\n yield _archived_row(item[\"replay_id\"], item[\"project_id\"])\n continue\n\n item[\"id\"] = _strip_dashes(item.pop(\"replay_id\", None))\n item[\"project_id\"] = str(item[\"project_id\"])\n item[\"trace_ids\"] = item.pop(\"traceIds\", [])\n item[\"error_ids\"] = item.pop(\"errorIds\", [])\n item[\"environment\"] = item.pop(\"agg_environment\", None)\n item[\"tags\"] = dict_unique_list(\n zip(\n item.pop(\"tk\", None) or [],\n item.pop(\"tv\", None) or [],\n )\n )\n item[\"user\"] = {\n \"id\": item.pop(\"user_id\", None),\n \"username\": item.pop(\"user_username\", None),\n \"email\": item.pop(\"user_email\", None),\n \"ip\": item.pop(\"user_ip\", None),\n }\n item[\"user\"][\"display_name\"] = (\n item[\"user\"][\"username\"]\n or item[\"user\"][\"email\"]\n or item[\"user\"][\"id\"]\n or item[\"user\"][\"ip\"]\n )\n item[\"sdk\"] = {\n \"name\": item.pop(\"sdk_name\", None),\n \"version\": item.pop(\"sdk_version\", None),\n }\n item[\"os\"] = {\n \"name\": item.pop(\"os_name\", None),\n \"version\": item.pop(\"os_version\", None),\n }\n item[\"browser\"] = {\n \"name\": item.pop(\"browser_name\", None),\n \"version\": item.pop(\"browser_version\", None),\n }\n item[\"device\"] = {\n \"name\": item.pop(\"device_name\", None),\n \"brand\": item.pop(\"device_brand\", None),\n \"model\": item.pop(\"device_model\", None),\n \"family\": item.pop(\"device_family\", None),\n }\n\n item.pop(\"agg_urls\", None)\n item[\"urls\"] = item.pop(\"urls_sorted\", None)\n\n item[\"is_archived\"] = bool(item.pop(\"isArchived\", 0))\n\n item.pop(\"clickClass\", None)\n item.pop(\"click_selector\", None)\n # don't need clickClass or click_selector\n # for the click field, as they are only used for searching.\n # (click.classes contains the full list of classes for a click)\n item[\"clicks\"] = extract_click_fields(item)\n\n yield item\n\n\ndef generate_sorted_urls(url_groups: list[tuple[int, list[str]]]) -> Iterator[str]:\n \"\"\"Return a flat list of ordered urls.\"\"\"\n for _, url_group in sorted(url_groups, key=lambda item: item[0]):\n yield from url_group\n\n\ndef dict_unique_list(items: Iterable[tuple[str, str]]) -> dict[str, list[str]]:\n \"\"\"Populate a dictionary with the first key, value pair seen.\n\n 
There is a potential for duplicate keys to exist in the result set. When we filter these keys\n in Clickhouse we will filter by the first value so we ignore subsequent updates to the key.\n This function ensures what is displayed matches what was filtered.\n \"\"\"\n unique = collections.defaultdict(set)\n for key, value in items:\n unique[key].add(value)\n\n return {key: list(value_set) for key, value_set in unique.items()}\n\n\ndef _archived_row(replay_id: str, project_id: int) -> dict[str, Any]:\n return {\n \"id\": _strip_dashes(replay_id),\n \"project_id\": str(project_id),\n \"trace_ids\": [],\n \"error_ids\": [],\n \"environment\": None,\n \"tags\": [],\n \"user\": {\"id\": \"Archived Replay\", \"display_name\": \"Archived Replay\"},\n \"sdk\": {\"name\": None, \"version\": None},\n \"os\": {\"name\": None, \"version\": None},\n \"browser\": {\"name\": None, \"version\": None},\n \"device\": {\"name\": None, \"brand\": None, \"model\": None, \"family\": None},\n \"urls\": None,\n \"activity\": None,\n \"count_dead_clicks\": None,\n \"count_rage_clicks\": None,\n \"count_errors\": None,\n \"duration\": None,\n \"finished_at\": None,\n \"started_at\": None,\n \"is_archived\": True,\n }\n\n\nCLICK_FIELD_MAP = {\n \"click_alt\": \"click.alt\",\n \"click_aria_label\": \"click.label\",\n \"click_classes\": \"click.classes\",\n \"click_id\": \"click.id\",\n \"click_role\": \"click.role\",\n \"click_tag\": \"click.tag\",\n \"click_testid\": \"click.testid\",\n \"click_text\": \"click.text\",\n \"click_title\": \"click.title\",\n}\n\n\ndef extract_click_fields(item: MutableMapping[str, Any]) -> list[dict[str, Any]]:\n \"\"\"\n pops all of the click fields from the item and returns a list of the individual clicks as objects\n \"\"\"\n click_dict = {}\n for click_field in CLICK_FIELD_MAP.keys():\n click_val = item.pop(click_field, [])\n # if there is at least one one element, the list will be filled empty strings for the non-click segments\n # so if there is at least one value, return a list of the truthy values.\n # if not, return a list with a single None value\n # also map the clickhouse field values to their query names\n if click_val:\n click_dict[CLICK_FIELD_MAP[click_field]] = [element for element in click_val if element]\n else:\n click_dict[CLICK_FIELD_MAP[click_field]] = click_val\n\n list_of_dicts = [dict(zip(click_dict.keys(), row)) for row in zip_longest(*click_dict.values())]\n return list_of_dicts\n", "path": "src/sentry/replays/post_process.py"}]} | 2,805 | 333 |
gh_patches_debug_7361 | rasdani/github-patches | git_diff | arviz-devs__arviz-2151 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plot_trace assumes specific dimensions ordering for divergences
**Describe the bug**
If an `InferenceData` contains a `diverging` variable in `sample_stats` that uses the dimension ordering `(draw, chain)` instead of `(chain, draw)`, then `plot_trace` fails.
**To Reproduce**
```python
>>> import arviz as az
>>> import numpy as np
>>> idata = az.from_dict(
... dict(x=np.random.normal(size=(4, 100, 2))),
... sample_stats=dict(diverging=np.random.randint(0, 1, size=(4, 100), dtype=bool))
... )
>>> az.plot_trace(idata) # fine
array([[<AxesSubplot:title={'center':'x'}>,
<AxesSubplot:title={'center':'x'}>]], dtype=object)
>>> idata.posterior['x'] = idata.posterior.x.transpose()
>>> idata.sample_stats['diverging'] = idata.sample_stats.diverging.transpose()
>>> az.plot_trace(idata) # error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sethaxen/projects/arviz/arviz/plots/traceplot.py", line 263, in plot_trace
axes = plot(**trace_plot_args)
File "/home/sethaxen/projects/arviz/arviz/plots/backends/matplotlib/traceplot.py", line 347, in plot_trace
div_draws = data.draw.values[chain_divs]
IndexError: boolean index did not match indexed array along dimension 0; dimension is 100 but corresponding boolean dimension is 4
>>> idata = az.InferenceData(posterior=idata.posterior)
>>> az.plot_trace(idata) # fine without diverging
array([[<AxesSubplot:title={'center':'x'}>,
<AxesSubplot:title={'center':'x'}>]], dtype=object)
```
**Expected behavior**
I expected all calls to `plot_trace` to work above.
**Additional context**
These checks were performed on main.
</issue>
<code>
[start of arviz/plots/traceplot.py]
1 """Plot kde or histograms and values from MCMC samples."""
2 import warnings
3 from typing import Any, Callable, List, Mapping, Optional, Tuple, Union, Sequence
4
5 from ..data import CoordSpec, InferenceData, convert_to_dataset
6 from ..labels import BaseLabeller
7 from ..rcparams import rcParams
8 from ..sel_utils import xarray_var_iter
9 from ..utils import _var_names, get_coords
10 from .plot_utils import KwargSpec, get_plotting_function
11
12
13 def plot_trace(
14 data: InferenceData,
15 var_names: Optional[Sequence[str]] = None,
16 filter_vars: Optional[str] = None,
17 transform: Optional[Callable] = None,
18 coords: Optional[CoordSpec] = None,
19 divergences: Optional[str] = "auto",
20 kind: Optional[str] = "trace",
21 figsize: Optional[Tuple[float, float]] = None,
22 rug: bool = False,
23 lines: Optional[List[Tuple[str, CoordSpec, Any]]] = None,
24 circ_var_names: Optional[List[str]] = None,
25 circ_var_units: str = "radians",
26 compact: bool = True,
27 compact_prop: Optional[Union[str, Mapping[str, Any]]] = None,
28 combined: bool = False,
29 chain_prop: Optional[Union[str, Mapping[str, Any]]] = None,
30 legend: bool = False,
31 plot_kwargs: Optional[KwargSpec] = None,
32 fill_kwargs: Optional[KwargSpec] = None,
33 rug_kwargs: Optional[KwargSpec] = None,
34 hist_kwargs: Optional[KwargSpec] = None,
35 trace_kwargs: Optional[KwargSpec] = None,
36 rank_kwargs: Optional[KwargSpec] = None,
37 labeller=None,
38 axes=None,
39 backend: Optional[str] = None,
40 backend_config: Optional[KwargSpec] = None,
41 backend_kwargs: Optional[KwargSpec] = None,
42 show: Optional[bool] = None,
43 ):
44 """Plot distribution (histogram or kernel density estimates) and sampled values or rank plot.
45
46 If `divergences` data is available in `sample_stats`, will plot the location of divergences as
47 dashed vertical lines.
48
49 Parameters
50 ----------
51 data: obj
52 Any object that can be converted to an :class:`arviz.InferenceData` object
53 Refer to documentation of :func:`arviz.convert_to_dataset` for details
54 var_names: str or list of str, optional
55 One or more variables to be plotted. Prefix the variables by ``~`` when you want
56 to exclude them from the plot.
57 filter_vars: {None, "like", "regex"}, optional, default=None
58 If `None` (default), interpret var_names as the real variables names. If "like",
59 interpret var_names as substrings of the real variables names. If "regex",
60 interpret var_names as regular expressions on the real variables names. A la
61 ``pandas.filter``.
62 coords: dict of {str: slice or array_like}, optional
63 Coordinates of var_names to be plotted. Passed to :meth:`xarray.Dataset.sel`
64 divergences: {"bottom", "top", None}, optional
65 Plot location of divergences on the traceplots.
66 kind: {"trace", "rank_bars", "rank_vlines"}, optional
67 Choose between plotting sampled values per iteration and rank plots.
68 transform: callable, optional
69 Function to transform data (defaults to None i.e.the identity function)
70 figsize: tuple of (float, float), optional
71 If None, size is (12, variables * 2)
72 rug: bool, optional
73 If True adds a rugplot of samples. Defaults to False. Ignored for 2D KDE.
74 Only affects continuous variables.
75 lines: list of tuple of (str, dict, array_like), optional
76 List of (var_name, {'coord': selection}, [line, positions]) to be overplotted as
77 vertical lines on the density and horizontal lines on the trace.
78 circ_var_names : str or list of str, optional
79 List of circular variables to account for when plotting KDE.
80 circ_var_units : str
81 Whether the variables in ``circ_var_names`` are in "degrees" or "radians".
82 compact: bool, optional
83 Plot multidimensional variables in a single plot.
84 compact_prop: str or dict {str: array_like}, optional
85 Tuple containing the property name and the property values to distinguish different
86 dimensions with compact=True
87 combined: bool, optional
88 Flag for combining multiple chains into a single line. If False (default), chains will be
89 plotted separately.
90 chain_prop: str or dict {str: array_like}, optional
91 Tuple containing the property name and the property values to distinguish different chains
92 legend: bool, optional
93 Add a legend to the figure with the chain color code.
94 plot_kwargs, fill_kwargs, rug_kwargs, hist_kwargs: dict, optional
95 Extra keyword arguments passed to :func:`arviz.plot_dist`. Only affects continuous
96 variables.
97 trace_kwargs: dict, optional
98 Extra keyword arguments passed to :meth:`matplotlib.axes.Axes.plot`
99 labeller : labeller instance, optional
100 Class providing the method ``make_label_vert`` to generate the labels in the plot titles.
101 Read the :ref:`label_guide` for more details and usage examples.
102 rank_kwargs : dict, optional
103 Extra keyword arguments passed to :func:`arviz.plot_rank`
104 axes: axes, optional
105 Matplotlib axes or bokeh figures.
106 backend: {"matplotlib", "bokeh"}, optional
107 Select plotting backend.
108 backend_config: dict, optional
109 Currently specifies the bounds to use for bokeh axes. Defaults to value set in rcParams.
110 backend_kwargs: dict, optional
111 These are kwargs specific to the backend being used, passed to
112 :func:`matplotlib.pyplot.subplots` or
113 :func:`bokeh.plotting.figure`.
114 show: bool, optional
115 Call backend show function.
116
117 Returns
118 -------
119 axes: matplotlib axes or bokeh figures
120
121 See Also
122 --------
123 plot_rank : Plot rank order statistics of chains.
124
125 Examples
126 --------
127 Plot a subset variables and select them with partial naming
128
129 .. plot::
130 :context: close-figs
131
132 >>> import arviz as az
133 >>> data = az.load_arviz_data('non_centered_eight')
134 >>> coords = {'school': ['Choate', 'Lawrenceville']}
135 >>> az.plot_trace(data, var_names=('theta'), filter_vars="like", coords=coords)
136
137 Show all dimensions of multidimensional variables in the same plot
138
139 .. plot::
140 :context: close-figs
141
142 >>> az.plot_trace(data, compact=True)
143
144 Display a rank plot instead of trace
145
146 .. plot::
147 :context: close-figs
148
149 >>> az.plot_trace(data, var_names=["mu", "tau"], kind="rank_bars")
150
151 Combine all chains into one distribution and select variables with regular expressions
152
153 .. plot::
154 :context: close-figs
155
156 >>> az.plot_trace(
157 >>> data, var_names=('^theta'), filter_vars="regex", coords=coords, combined=True
158 >>> )
159
160
161 Plot reference lines against distribution and trace
162
163 .. plot::
164 :context: close-figs
165
166 >>> lines = (('theta_t',{'school': "Choate"}, [-1]),)
167 >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, lines=lines)
168
169 """
170 if kind not in {"trace", "rank_vlines", "rank_bars"}:
171 raise ValueError("The value of kind must be either trace, rank_vlines or rank_bars.")
172
173 if divergences == "auto":
174 divergences = "top" if rug else "bottom"
175 if divergences:
176 try:
177 divergence_data = convert_to_dataset(data, group="sample_stats").diverging
178 except (ValueError, AttributeError): # No sample_stats, or no `.diverging`
179 divergences = None
180
181 if coords is None:
182 coords = {}
183
184 if labeller is None:
185 labeller = BaseLabeller()
186
187 if divergences:
188 divergence_data = get_coords(
189 divergence_data, {k: v for k, v in coords.items() if k in ("chain", "draw")}
190 )
191 else:
192 divergence_data = False
193
194 coords_data = get_coords(convert_to_dataset(data, group="posterior"), coords)
195
196 if transform is not None:
197 coords_data = transform(coords_data)
198
199 var_names = _var_names(var_names, coords_data, filter_vars)
200
201 skip_dims = set(coords_data.dims) - {"chain", "draw"} if compact else set()
202
203 plotters = list(
204 xarray_var_iter(
205 coords_data,
206 var_names=var_names,
207 combined=True,
208 skip_dims=skip_dims,
209 dim_order=["chain", "draw"],
210 )
211 )
212 max_plots = rcParams["plot.max_subplots"]
213 max_plots = len(plotters) if max_plots is None else max(max_plots // 2, 1)
214 if len(plotters) > max_plots:
215 warnings.warn(
216 "rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number "
217 "of variables to plot ({len_plotters}), generating only {max_plots} "
218 "plots".format(max_plots=max_plots, len_plotters=len(plotters)),
219 UserWarning,
220 )
221 plotters = plotters[:max_plots]
222
223 # TODO: Check if this can be further simplified
224 trace_plot_args = dict(
225 # User Kwargs
226 data=coords_data,
227 var_names=var_names,
228 # coords = coords,
229 divergences=divergences,
230 kind=kind,
231 figsize=figsize,
232 rug=rug,
233 lines=lines,
234 circ_var_names=circ_var_names,
235 circ_var_units=circ_var_units,
236 plot_kwargs=plot_kwargs,
237 fill_kwargs=fill_kwargs,
238 rug_kwargs=rug_kwargs,
239 hist_kwargs=hist_kwargs,
240 trace_kwargs=trace_kwargs,
241 rank_kwargs=rank_kwargs,
242 compact=compact,
243 compact_prop=compact_prop,
244 combined=combined,
245 chain_prop=chain_prop,
246 legend=legend,
247 labeller=labeller,
248 # Generated kwargs
249 divergence_data=divergence_data,
250 # skip_dims=skip_dims,
251 plotters=plotters,
252 axes=axes,
253 backend_config=backend_config,
254 backend_kwargs=backend_kwargs,
255 show=show,
256 )
257
258 if backend is None:
259 backend = rcParams["plot.backend"]
260 backend = backend.lower()
261
262 plot = get_plotting_function("plot_trace", "traceplot", backend)
263 axes = plot(**trace_plot_args)
264
265 return axes
266
[end of arviz/plots/traceplot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/arviz/plots/traceplot.py b/arviz/plots/traceplot.py
--- a/arviz/plots/traceplot.py
+++ b/arviz/plots/traceplot.py
@@ -174,7 +174,9 @@
divergences = "top" if rug else "bottom"
if divergences:
try:
- divergence_data = convert_to_dataset(data, group="sample_stats").diverging
+ divergence_data = convert_to_dataset(data, group="sample_stats").diverging.transpose(
+ "chain", "draw"
+ )
except (ValueError, AttributeError): # No sample_stats, or no `.diverging`
divergences = None
| {"golden_diff": "diff --git a/arviz/plots/traceplot.py b/arviz/plots/traceplot.py\n--- a/arviz/plots/traceplot.py\n+++ b/arviz/plots/traceplot.py\n@@ -174,7 +174,9 @@\n divergences = \"top\" if rug else \"bottom\"\n if divergences:\n try:\n- divergence_data = convert_to_dataset(data, group=\"sample_stats\").diverging\n+ divergence_data = convert_to_dataset(data, group=\"sample_stats\").diverging.transpose(\n+ \"chain\", \"draw\"\n+ )\n except (ValueError, AttributeError): # No sample_stats, or no `.diverging`\n divergences = None\n", "issue": "plot_trace assumes specific dimensions ordering for divergences\n**Describe the bug**\r\nIf an `InferenceData` contains a `diverging` variable in `sample_stats` that uses the dimension ordering `(draw, chain)` instead of `(chain, draw)`, then `plot_trace` fails.\r\n\r\n**To Reproduce**\r\n```python\r\n>>> import arviz as az\r\n>>> import numpy as np\r\n>>> idata = az.from_dict(\r\n... dict(x=np.random.normal(size=(4, 100, 2))),\r\n... sample_stats=dict(diverging=np.random.randint(0, 1, size=(4, 100), dtype=bool))\r\n... )\r\n>>> az.plot_trace(idata) # fine\r\narray([[<AxesSubplot:title={'center':'x'}>,\r\n <AxesSubplot:title={'center':'x'}>]], dtype=object)\r\n>>> idata.posterior['x'] = idata.posterior.x.transpose()\r\n>>> idata.sample_stats['diverging'] = idata.sample_stats.diverging.transpose()\r\n>>> az.plot_trace(idata) # error\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/sethaxen/projects/arviz/arviz/plots/traceplot.py\", line 263, in plot_trace\r\n axes = plot(**trace_plot_args)\r\n File \"/home/sethaxen/projects/arviz/arviz/plots/backends/matplotlib/traceplot.py\", line 347, in plot_trace\r\n div_draws = data.draw.values[chain_divs]\r\nIndexError: boolean index did not match indexed array along dimension 0; dimension is 100 but corresponding boolean dimension is 4\r\n>>> idata = az.InferenceData(posterior=idata.posterior)\r\n>>> az.plot_trace(idata) # fine without diverging\r\narray([[<AxesSubplot:title={'center':'x'}>,\r\n <AxesSubplot:title={'center':'x'}>]], dtype=object)\r\n```\r\n\r\n**Expected behavior**\r\nI expected all calls to `plot_trace` to work above.\r\n\r\n**Additional context**\r\nThese checks were performed on main.\r\n\n", "before_files": [{"content": "\"\"\"Plot kde or histograms and values from MCMC samples.\"\"\"\nimport warnings\nfrom typing import Any, Callable, List, Mapping, Optional, Tuple, Union, Sequence\n\nfrom ..data import CoordSpec, InferenceData, convert_to_dataset\nfrom ..labels import BaseLabeller\nfrom ..rcparams import rcParams\nfrom ..sel_utils import xarray_var_iter\nfrom ..utils import _var_names, get_coords\nfrom .plot_utils import KwargSpec, get_plotting_function\n\n\ndef plot_trace(\n data: InferenceData,\n var_names: Optional[Sequence[str]] = None,\n filter_vars: Optional[str] = None,\n transform: Optional[Callable] = None,\n coords: Optional[CoordSpec] = None,\n divergences: Optional[str] = \"auto\",\n kind: Optional[str] = \"trace\",\n figsize: Optional[Tuple[float, float]] = None,\n rug: bool = False,\n lines: Optional[List[Tuple[str, CoordSpec, Any]]] = None,\n circ_var_names: Optional[List[str]] = None,\n circ_var_units: str = \"radians\",\n compact: bool = True,\n compact_prop: Optional[Union[str, Mapping[str, Any]]] = None,\n combined: bool = False,\n chain_prop: Optional[Union[str, Mapping[str, Any]]] = None,\n legend: bool = False,\n plot_kwargs: Optional[KwargSpec] = None,\n fill_kwargs: Optional[KwargSpec] = None,\n rug_kwargs: 
Optional[KwargSpec] = None,\n hist_kwargs: Optional[KwargSpec] = None,\n trace_kwargs: Optional[KwargSpec] = None,\n rank_kwargs: Optional[KwargSpec] = None,\n labeller=None,\n axes=None,\n backend: Optional[str] = None,\n backend_config: Optional[KwargSpec] = None,\n backend_kwargs: Optional[KwargSpec] = None,\n show: Optional[bool] = None,\n):\n \"\"\"Plot distribution (histogram or kernel density estimates) and sampled values or rank plot.\n\n If `divergences` data is available in `sample_stats`, will plot the location of divergences as\n dashed vertical lines.\n\n Parameters\n ----------\n data: obj\n Any object that can be converted to an :class:`arviz.InferenceData` object\n Refer to documentation of :func:`arviz.convert_to_dataset` for details\n var_names: str or list of str, optional\n One or more variables to be plotted. Prefix the variables by ``~`` when you want\n to exclude them from the plot.\n filter_vars: {None, \"like\", \"regex\"}, optional, default=None\n If `None` (default), interpret var_names as the real variables names. If \"like\",\n interpret var_names as substrings of the real variables names. If \"regex\",\n interpret var_names as regular expressions on the real variables names. A la\n ``pandas.filter``.\n coords: dict of {str: slice or array_like}, optional\n Coordinates of var_names to be plotted. Passed to :meth:`xarray.Dataset.sel`\n divergences: {\"bottom\", \"top\", None}, optional\n Plot location of divergences on the traceplots.\n kind: {\"trace\", \"rank_bars\", \"rank_vlines\"}, optional\n Choose between plotting sampled values per iteration and rank plots.\n transform: callable, optional\n Function to transform data (defaults to None i.e.the identity function)\n figsize: tuple of (float, float), optional\n If None, size is (12, variables * 2)\n rug: bool, optional\n If True adds a rugplot of samples. Defaults to False. Ignored for 2D KDE.\n Only affects continuous variables.\n lines: list of tuple of (str, dict, array_like), optional\n List of (var_name, {'coord': selection}, [line, positions]) to be overplotted as\n vertical lines on the density and horizontal lines on the trace.\n circ_var_names : str or list of str, optional\n List of circular variables to account for when plotting KDE.\n circ_var_units : str\n Whether the variables in ``circ_var_names`` are in \"degrees\" or \"radians\".\n compact: bool, optional\n Plot multidimensional variables in a single plot.\n compact_prop: str or dict {str: array_like}, optional\n Tuple containing the property name and the property values to distinguish different\n dimensions with compact=True\n combined: bool, optional\n Flag for combining multiple chains into a single line. If False (default), chains will be\n plotted separately.\n chain_prop: str or dict {str: array_like}, optional\n Tuple containing the property name and the property values to distinguish different chains\n legend: bool, optional\n Add a legend to the figure with the chain color code.\n plot_kwargs, fill_kwargs, rug_kwargs, hist_kwargs: dict, optional\n Extra keyword arguments passed to :func:`arviz.plot_dist`. 
Only affects continuous\n variables.\n trace_kwargs: dict, optional\n Extra keyword arguments passed to :meth:`matplotlib.axes.Axes.plot`\n labeller : labeller instance, optional\n Class providing the method ``make_label_vert`` to generate the labels in the plot titles.\n Read the :ref:`label_guide` for more details and usage examples.\n rank_kwargs : dict, optional\n Extra keyword arguments passed to :func:`arviz.plot_rank`\n axes: axes, optional\n Matplotlib axes or bokeh figures.\n backend: {\"matplotlib\", \"bokeh\"}, optional\n Select plotting backend.\n backend_config: dict, optional\n Currently specifies the bounds to use for bokeh axes. Defaults to value set in rcParams.\n backend_kwargs: dict, optional\n These are kwargs specific to the backend being used, passed to\n :func:`matplotlib.pyplot.subplots` or\n :func:`bokeh.plotting.figure`.\n show: bool, optional\n Call backend show function.\n\n Returns\n -------\n axes: matplotlib axes or bokeh figures\n\n See Also\n --------\n plot_rank : Plot rank order statistics of chains.\n\n Examples\n --------\n Plot a subset variables and select them with partial naming\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> data = az.load_arviz_data('non_centered_eight')\n >>> coords = {'school': ['Choate', 'Lawrenceville']}\n >>> az.plot_trace(data, var_names=('theta'), filter_vars=\"like\", coords=coords)\n\n Show all dimensions of multidimensional variables in the same plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(data, compact=True)\n\n Display a rank plot instead of trace\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(data, var_names=[\"mu\", \"tau\"], kind=\"rank_bars\")\n\n Combine all chains into one distribution and select variables with regular expressions\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_trace(\n >>> data, var_names=('^theta'), filter_vars=\"regex\", coords=coords, combined=True\n >>> )\n\n\n Plot reference lines against distribution and trace\n\n .. 
plot::\n :context: close-figs\n\n >>> lines = (('theta_t',{'school': \"Choate\"}, [-1]),)\n >>> az.plot_trace(data, var_names=('theta_t', 'theta'), coords=coords, lines=lines)\n\n \"\"\"\n if kind not in {\"trace\", \"rank_vlines\", \"rank_bars\"}:\n raise ValueError(\"The value of kind must be either trace, rank_vlines or rank_bars.\")\n\n if divergences == \"auto\":\n divergences = \"top\" if rug else \"bottom\"\n if divergences:\n try:\n divergence_data = convert_to_dataset(data, group=\"sample_stats\").diverging\n except (ValueError, AttributeError): # No sample_stats, or no `.diverging`\n divergences = None\n\n if coords is None:\n coords = {}\n\n if labeller is None:\n labeller = BaseLabeller()\n\n if divergences:\n divergence_data = get_coords(\n divergence_data, {k: v for k, v in coords.items() if k in (\"chain\", \"draw\")}\n )\n else:\n divergence_data = False\n\n coords_data = get_coords(convert_to_dataset(data, group=\"posterior\"), coords)\n\n if transform is not None:\n coords_data = transform(coords_data)\n\n var_names = _var_names(var_names, coords_data, filter_vars)\n\n skip_dims = set(coords_data.dims) - {\"chain\", \"draw\"} if compact else set()\n\n plotters = list(\n xarray_var_iter(\n coords_data,\n var_names=var_names,\n combined=True,\n skip_dims=skip_dims,\n dim_order=[\"chain\", \"draw\"],\n )\n )\n max_plots = rcParams[\"plot.max_subplots\"]\n max_plots = len(plotters) if max_plots is None else max(max_plots // 2, 1)\n if len(plotters) > max_plots:\n warnings.warn(\n \"rcParams['plot.max_subplots'] ({max_plots}) is smaller than the number \"\n \"of variables to plot ({len_plotters}), generating only {max_plots} \"\n \"plots\".format(max_plots=max_plots, len_plotters=len(plotters)),\n UserWarning,\n )\n plotters = plotters[:max_plots]\n\n # TODO: Check if this can be further simplified\n trace_plot_args = dict(\n # User Kwargs\n data=coords_data,\n var_names=var_names,\n # coords = coords,\n divergences=divergences,\n kind=kind,\n figsize=figsize,\n rug=rug,\n lines=lines,\n circ_var_names=circ_var_names,\n circ_var_units=circ_var_units,\n plot_kwargs=plot_kwargs,\n fill_kwargs=fill_kwargs,\n rug_kwargs=rug_kwargs,\n hist_kwargs=hist_kwargs,\n trace_kwargs=trace_kwargs,\n rank_kwargs=rank_kwargs,\n compact=compact,\n compact_prop=compact_prop,\n combined=combined,\n chain_prop=chain_prop,\n legend=legend,\n labeller=labeller,\n # Generated kwargs\n divergence_data=divergence_data,\n # skip_dims=skip_dims,\n plotters=plotters,\n axes=axes,\n backend_config=backend_config,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend is None:\n backend = rcParams[\"plot.backend\"]\n backend = backend.lower()\n\n plot = get_plotting_function(\"plot_trace\", \"traceplot\", backend)\n axes = plot(**trace_plot_args)\n\n return axes\n", "path": "arviz/plots/traceplot.py"}]} | 4,064 | 156 |
gh_patches_debug_630 | rasdani/github-patches | git_diff | pex-tool__pex-2240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.146
On the docket:
+ [x] Fix non executable venv sys path bug #2236
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.145"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.145"
+__version__ = "2.1.146"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.145\"\n+__version__ = \"2.1.146\"\n", "issue": "Release 2.1.146\nOn the docket:\r\n+ [x] Fix non executable venv sys path bug #2236\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.145\"\n", "path": "pex/version.py"}]} | 616 | 98 |
gh_patches_debug_38583 | rasdani/github-patches | git_diff | fedora-infra__bodhi-2664 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Give bodhi-push a -y flag so that it can be run in cron
We should give ```bodhi-push``` a -y flag that answers yes to the various questions so that it can be run in cron.
Give bodhi-push a -y flag so that it can be run in cron
We should give ```bodhi-push``` a -y flag that answers yes to the various questions so that it can be run in cron.
</issue>
<code>
[start of bodhi/server/push.py]
1 # -*- coding: utf-8 -*-
2 # Copyright © 2007-2018 Red Hat, Inc.
3 #
4 # This file is part of Bodhi.
5 #
6 # This program is free software; you can redistribute it and/or
7 # modify it under the terms of the GNU General Public License
8 # as published by the Free Software Foundation; either version 2
9 # of the License, or (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program; if not, write to the Free Software
18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19 """The CLI tool for triggering update pushes."""
20 from sqlalchemy.sql import or_
21 import click
22
23 from bodhi.server import (buildsys, initialize_db, get_koji)
24 from bodhi.server.config import config
25 from bodhi.server.models import (Compose, ComposeState, Release, ReleaseState, Build, Update,
26 UpdateRequest)
27 from bodhi.server.util import transactional_session_maker
28 import bodhi.server.notifications
29
30
31 _koji = None
32
33
34 def update_sig_status(update):
35 """Update build signature status for builds in update."""
36 global _koji
37 if _koji is None:
38 # We don't want to authenticate to the buildsystem, because this script is often mistakenly
39 # run as root and this can cause the ticket cache to become root owned with 0600 perms,
40 # which will cause the compose to fail when it tries to use it to authenticate to Koji.
41 buildsys.setup_buildsystem(config, authenticate=False)
42 _koji = get_koji(None)
43 for build in update.builds:
44 if not build.signed:
45 build_tags = build.get_tags(_koji)
46 if update.release.pending_signing_tag not in build_tags:
47 click.echo('Build %s was refreshed as signed' % build.nvr)
48 build.signed = True
49 else:
50 click.echo('Build %s still unsigned' % build.nvr)
51
52
53 @click.command()
54 @click.option('--builds', help='Push updates for a comma-separated list of builds')
55 @click.option('--cert-prefix', default="shell",
56 help="The prefix of a fedmsg cert used to sign the message")
57 @click.option('--releases', help=('Push updates for a comma-separated list of releases (default: '
58 'current and pending releases)'))
59 @click.option('--request', default='testing,stable',
60 help='Push updates with a specific request (default: testing,stable)')
61 @click.option('--resume', help='Resume one or more previously failed pushes',
62 is_flag=True, default=False)
63 @click.option('--username', prompt=True)
64 @click.version_option(message='%(version)s')
65 def push(username, cert_prefix, **kwargs):
66 """Push builds out to the repositories."""
67 resume = kwargs.pop('resume')
68 resume_all = False
69
70 initialize_db(config)
71 db_factory = transactional_session_maker()
72 composes = []
73 with db_factory() as session:
74 if not resume and session.query(Compose).count():
75 click.confirm('Existing composes detected: {}. Do you wish to resume them all?'.format(
76 ', '.join([str(c) for c in session.query(Compose).all()])), abort=True)
77 resume = True
78 resume_all = True
79
80 # If we're resuming a push
81 if resume:
82 for compose in session.query(Compose).all():
83 if len(compose.updates) == 0:
84 # Compose objects can end up with 0 updates in them if the masher ejects all the
85 # updates in a compose for some reason. Composes with no updates cannot be
86 # serialized because their content_type property uses the content_type of the
87 # first update in the Compose. Additionally, it doesn't really make sense to go
88 # forward with running an empty Compose. It makes the most sense to delete them.
89 click.echo("{} has no updates. It is being removed.".format(compose))
90 session.delete(compose)
91 continue
92
93 if not resume_all and not click.confirm('Resume {}?'.format(compose)):
94 continue
95
96 # Reset the Compose's state and error message.
97 compose.state = ComposeState.requested
98 compose.error_message = u''
99
100 composes.append(compose)
101 else:
102 updates = []
103 # Accept both comma and space separated request list
104 requests = kwargs['request'].replace(',', ' ').split(' ')
105 requests = [UpdateRequest.from_string(val) for val in requests]
106
107 query = session.query(Update).filter(Update.request.in_(requests))
108
109 if kwargs.get('builds'):
110 query = query.join(Update.builds)
111 query = query.filter(
112 or_(*[Build.nvr == build for build in kwargs['builds'].split(',')]))
113
114 query = _filter_releases(session, query, kwargs.get('releases'))
115
116 for update in query.all():
117 # Skip unsigned updates (this checks that all builds in the update are signed)
118 update_sig_status(update)
119
120 if not update.signed:
121 click.echo('Warning: %s has unsigned builds and has been skipped' %
122 update.title)
123 continue
124
125 updates.append(update)
126
127 composes = Compose.from_updates(updates)
128 for c in composes:
129 session.add(c)
130
131 # We need to flush so the database knows about the new Compose objects, so the
132 # Compose.updates relationship will work properly. This is due to us overriding the
133 # primaryjoin on the relationship between Composes and Updates.
134 session.flush()
135
136 # Now we need to refresh the composes so their updates property will not be empty.
137 for compose in composes:
138 session.refresh(compose)
139
140 # Now we need to sort the composes so their security property can be used to prioritize
141 # security updates. The security property relies on the updates property being
142 # non-empty, so this must happen after the refresh above.
143 composes = sorted(composes)
144
145 for compose in composes:
146 click.echo('\n\n===== {} =====\n'.format(compose))
147 for update in compose.updates:
148 click.echo(update.title)
149
150 if composes:
151 click.confirm('\n\nPush these {:d} updates?'.format(
152 sum([len(c.updates) for c in composes])), abort=True)
153 click.echo('\nLocking updates...')
154 else:
155 click.echo('\nThere are no updates to push.')
156
157 composes = [c.__json__(composer=True) for c in composes]
158
159 if composes:
160 click.echo('\nSending masher.start fedmsg')
161 # Because we're a script, we want to send to the fedmsg-relay,
162 # that's why we say active=True
163 bodhi.server.notifications.init(active=True, cert_prefix=cert_prefix)
164 bodhi.server.notifications.publish(
165 topic='masher.start',
166 msg=dict(
167 api_version=2,
168 composes=composes,
169 resume=resume,
170 agent=username,
171 ),
172 force=True,
173 )
174
175
176 def _filter_releases(session, query, releases=None):
177 """
178 Filter the given query by releases.
179
180 Apply a filter() transformation to the given query on Updates to filter updates that match the
181 given releases argument. If releases evaluates "Falsey", this function will filter for Updates
182 that are part of a current Release.
183
184 :param session: The database session
185 :param query: An Update query that we want to modify by filtering based on Releases
186 :param releases: A comma-separated string of release names
187
188 :returns: A filtered version of query with an additional filter based on releases.
189 """
190 # We will store models.Release object here that we want to filter by
191 _releases = []
192
193 if releases:
194 for r in releases.split(','):
195 release = session.query(Release).filter(
196 or_(Release.name == r,
197 Release.name == r.upper(),
198 Release.version == r)).first()
199 if not release:
200 raise click.BadParameter('Unknown release: %s' % r)
201 else:
202 _releases.append(release)
203 else:
204 # Since the user didn't ask for specific Releases, let's just filter for releases that are
205 # current or pending.
206 _releases = session.query(Release).filter(or_(Release.state == ReleaseState.current,
207 Release.state == ReleaseState.pending))
208
209 return query.filter(or_(*[Update.release == r for r in _releases]))
210
211
212 if __name__ == '__main__':
213 push()
214
[end of bodhi/server/push.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi/server/push.py b/bodhi/server/push.py
--- a/bodhi/server/push.py
+++ b/bodhi/server/push.py
@@ -61,8 +61,10 @@
@click.option('--resume', help='Resume one or more previously failed pushes',
is_flag=True, default=False)
@click.option('--username', prompt=True)
[email protected]('--yes', '-y', is_flag=True, default=False,
+ help='Answers yes to the various questions')
@click.version_option(message='%(version)s')
-def push(username, cert_prefix, **kwargs):
+def push(username, cert_prefix, yes, **kwargs):
"""Push builds out to the repositories."""
resume = kwargs.pop('resume')
resume_all = False
@@ -72,8 +74,14 @@
composes = []
with db_factory() as session:
if not resume and session.query(Compose).count():
- click.confirm('Existing composes detected: {}. Do you wish to resume them all?'.format(
- ', '.join([str(c) for c in session.query(Compose).all()])), abort=True)
+ if yes:
+ click.echo('Existing composes detected: {}. Resuming all.'.format(
+ ', '.join([str(c) for c in session.query(Compose).all()])))
+ else:
+ click.confirm(
+ 'Existing composes detected: {}. Do you wish to resume them all?'.format(
+ ', '.join([str(c) for c in session.query(Compose).all()])),
+ abort=True)
resume = True
resume_all = True
@@ -90,8 +98,11 @@
session.delete(compose)
continue
- if not resume_all and not click.confirm('Resume {}?'.format(compose)):
- continue
+ if not resume_all:
+ if yes:
+ click.echo('Resuming {}.'.format(compose))
+ elif not click.confirm('Resume {}?'.format(compose)):
+ continue
# Reset the Compose's state and error message.
compose.state = ComposeState.requested
@@ -148,8 +159,12 @@
click.echo(update.title)
if composes:
- click.confirm('\n\nPush these {:d} updates?'.format(
- sum([len(c.updates) for c in composes])), abort=True)
+ if yes:
+ click.echo('\n\nPushing {:d} updates.'.format(
+ sum([len(c.updates) for c in composes])))
+ else:
+ click.confirm('\n\nPush these {:d} updates?'.format(
+ sum([len(c.updates) for c in composes])), abort=True)
click.echo('\nLocking updates...')
else:
click.echo('\nThere are no updates to push.')
| {"golden_diff": "diff --git a/bodhi/server/push.py b/bodhi/server/push.py\n--- a/bodhi/server/push.py\n+++ b/bodhi/server/push.py\n@@ -61,8 +61,10 @@\n @click.option('--resume', help='Resume one or more previously failed pushes',\n is_flag=True, default=False)\n @click.option('--username', prompt=True)\[email protected]('--yes', '-y', is_flag=True, default=False,\n+ help='Answers yes to the various questions')\n @click.version_option(message='%(version)s')\n-def push(username, cert_prefix, **kwargs):\n+def push(username, cert_prefix, yes, **kwargs):\n \"\"\"Push builds out to the repositories.\"\"\"\n resume = kwargs.pop('resume')\n resume_all = False\n@@ -72,8 +74,14 @@\n composes = []\n with db_factory() as session:\n if not resume and session.query(Compose).count():\n- click.confirm('Existing composes detected: {}. Do you wish to resume them all?'.format(\n- ', '.join([str(c) for c in session.query(Compose).all()])), abort=True)\n+ if yes:\n+ click.echo('Existing composes detected: {}. Resuming all.'.format(\n+ ', '.join([str(c) for c in session.query(Compose).all()])))\n+ else:\n+ click.confirm(\n+ 'Existing composes detected: {}. Do you wish to resume them all?'.format(\n+ ', '.join([str(c) for c in session.query(Compose).all()])),\n+ abort=True)\n resume = True\n resume_all = True\n \n@@ -90,8 +98,11 @@\n session.delete(compose)\n continue\n \n- if not resume_all and not click.confirm('Resume {}?'.format(compose)):\n- continue\n+ if not resume_all:\n+ if yes:\n+ click.echo('Resuming {}.'.format(compose))\n+ elif not click.confirm('Resume {}?'.format(compose)):\n+ continue\n \n # Reset the Compose's state and error message.\n compose.state = ComposeState.requested\n@@ -148,8 +159,12 @@\n click.echo(update.title)\n \n if composes:\n- click.confirm('\\n\\nPush these {:d} updates?'.format(\n- sum([len(c.updates) for c in composes])), abort=True)\n+ if yes:\n+ click.echo('\\n\\nPushing {:d} updates.'.format(\n+ sum([len(c.updates) for c in composes])))\n+ else:\n+ click.confirm('\\n\\nPush these {:d} updates?'.format(\n+ sum([len(c.updates) for c in composes])), abort=True)\n click.echo('\\nLocking updates...')\n else:\n click.echo('\\nThere are no updates to push.')\n", "issue": "Give bodhi-push a -y flag so that it can be run in cron\nWe should give ```bodhi-push``` a -y flag that answers yes to the various questions so that it can be run in cron.\nGive bodhi-push a -y flag so that it can be run in cron\nWe should give ```bodhi-push``` a -y flag that answers yes to the various questions so that it can be run in cron.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright \u00a9 2007-2018 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"The CLI tool for triggering update pushes.\"\"\"\nfrom sqlalchemy.sql import or_\nimport click\n\nfrom bodhi.server import (buildsys, initialize_db, get_koji)\nfrom bodhi.server.config import config\nfrom bodhi.server.models import (Compose, ComposeState, Release, ReleaseState, Build, Update,\n UpdateRequest)\nfrom bodhi.server.util import transactional_session_maker\nimport bodhi.server.notifications\n\n\n_koji = None\n\n\ndef update_sig_status(update):\n \"\"\"Update build signature status for builds in update.\"\"\"\n global _koji\n if _koji is None:\n # We don't want to authenticate to the buildsystem, because this script is often mistakenly\n # run as root and this can cause the ticket cache to become root owned with 0600 perms,\n # which will cause the compose to fail when it tries to use it to authenticate to Koji.\n buildsys.setup_buildsystem(config, authenticate=False)\n _koji = get_koji(None)\n for build in update.builds:\n if not build.signed:\n build_tags = build.get_tags(_koji)\n if update.release.pending_signing_tag not in build_tags:\n click.echo('Build %s was refreshed as signed' % build.nvr)\n build.signed = True\n else:\n click.echo('Build %s still unsigned' % build.nvr)\n\n\[email protected]()\[email protected]('--builds', help='Push updates for a comma-separated list of builds')\[email protected]('--cert-prefix', default=\"shell\",\n help=\"The prefix of a fedmsg cert used to sign the message\")\[email protected]('--releases', help=('Push updates for a comma-separated list of releases (default: '\n 'current and pending releases)'))\[email protected]('--request', default='testing,stable',\n help='Push updates with a specific request (default: testing,stable)')\[email protected]('--resume', help='Resume one or more previously failed pushes',\n is_flag=True, default=False)\[email protected]('--username', prompt=True)\[email protected]_option(message='%(version)s')\ndef push(username, cert_prefix, **kwargs):\n \"\"\"Push builds out to the repositories.\"\"\"\n resume = kwargs.pop('resume')\n resume_all = False\n\n initialize_db(config)\n db_factory = transactional_session_maker()\n composes = []\n with db_factory() as session:\n if not resume and session.query(Compose).count():\n click.confirm('Existing composes detected: {}. Do you wish to resume them all?'.format(\n ', '.join([str(c) for c in session.query(Compose).all()])), abort=True)\n resume = True\n resume_all = True\n\n # If we're resuming a push\n if resume:\n for compose in session.query(Compose).all():\n if len(compose.updates) == 0:\n # Compose objects can end up with 0 updates in them if the masher ejects all the\n # updates in a compose for some reason. Composes with no updates cannot be\n # serialized because their content_type property uses the content_type of the\n # first update in the Compose. Additionally, it doesn't really make sense to go\n # forward with running an empty Compose. It makes the most sense to delete them.\n click.echo(\"{} has no updates. 
It is being removed.\".format(compose))\n session.delete(compose)\n continue\n\n if not resume_all and not click.confirm('Resume {}?'.format(compose)):\n continue\n\n # Reset the Compose's state and error message.\n compose.state = ComposeState.requested\n compose.error_message = u''\n\n composes.append(compose)\n else:\n updates = []\n # Accept both comma and space separated request list\n requests = kwargs['request'].replace(',', ' ').split(' ')\n requests = [UpdateRequest.from_string(val) for val in requests]\n\n query = session.query(Update).filter(Update.request.in_(requests))\n\n if kwargs.get('builds'):\n query = query.join(Update.builds)\n query = query.filter(\n or_(*[Build.nvr == build for build in kwargs['builds'].split(',')]))\n\n query = _filter_releases(session, query, kwargs.get('releases'))\n\n for update in query.all():\n # Skip unsigned updates (this checks that all builds in the update are signed)\n update_sig_status(update)\n\n if not update.signed:\n click.echo('Warning: %s has unsigned builds and has been skipped' %\n update.title)\n continue\n\n updates.append(update)\n\n composes = Compose.from_updates(updates)\n for c in composes:\n session.add(c)\n\n # We need to flush so the database knows about the new Compose objects, so the\n # Compose.updates relationship will work properly. This is due to us overriding the\n # primaryjoin on the relationship between Composes and Updates.\n session.flush()\n\n # Now we need to refresh the composes so their updates property will not be empty.\n for compose in composes:\n session.refresh(compose)\n\n # Now we need to sort the composes so their security property can be used to prioritize\n # security updates. The security property relies on the updates property being\n # non-empty, so this must happen after the refresh above.\n composes = sorted(composes)\n\n for compose in composes:\n click.echo('\\n\\n===== {} =====\\n'.format(compose))\n for update in compose.updates:\n click.echo(update.title)\n\n if composes:\n click.confirm('\\n\\nPush these {:d} updates?'.format(\n sum([len(c.updates) for c in composes])), abort=True)\n click.echo('\\nLocking updates...')\n else:\n click.echo('\\nThere are no updates to push.')\n\n composes = [c.__json__(composer=True) for c in composes]\n\n if composes:\n click.echo('\\nSending masher.start fedmsg')\n # Because we're a script, we want to send to the fedmsg-relay,\n # that's why we say active=True\n bodhi.server.notifications.init(active=True, cert_prefix=cert_prefix)\n bodhi.server.notifications.publish(\n topic='masher.start',\n msg=dict(\n api_version=2,\n composes=composes,\n resume=resume,\n agent=username,\n ),\n force=True,\n )\n\n\ndef _filter_releases(session, query, releases=None):\n \"\"\"\n Filter the given query by releases.\n\n Apply a filter() transformation to the given query on Updates to filter updates that match the\n given releases argument. 
If releases evaluates \"Falsey\", this function will filter for Updates\n that are part of a current Release.\n\n :param session: The database session\n :param query: An Update query that we want to modify by filtering based on Releases\n :param releases: A comma-separated string of release names\n\n :returns: A filtered version of query with an additional filter based on releases.\n \"\"\"\n # We will store models.Release object here that we want to filter by\n _releases = []\n\n if releases:\n for r in releases.split(','):\n release = session.query(Release).filter(\n or_(Release.name == r,\n Release.name == r.upper(),\n Release.version == r)).first()\n if not release:\n raise click.BadParameter('Unknown release: %s' % r)\n else:\n _releases.append(release)\n else:\n # Since the user didn't ask for specific Releases, let's just filter for releases that are\n # current or pending.\n _releases = session.query(Release).filter(or_(Release.state == ReleaseState.current,\n Release.state == ReleaseState.pending))\n\n return query.filter(or_(*[Update.release == r for r in _releases]))\n\n\nif __name__ == '__main__':\n push()\n", "path": "bodhi/server/push.py"}]} | 3,086 | 632 |
gh_patches_debug_1360 | rasdani/github-patches | git_diff | pyodide__pyodide-4435 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3.12 version
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Hi, I tried [REPL](https://pyodide.org/en/stable/console.html), maybe it uses the latest 0.25.0, and I noticed that the python is 3.11.3.
Python 3.12 has released for a few months with a lot of new features. Since there is no issue track the progress. So, I created this one.
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
N.A.
### Pitch
<!-- A clear and concise description of what you want to happen. -->
N.A.
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
N.A.
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
N.A.
</issue>
<code>
[start of pyodide-build/pyodide_build/pyzip.py]
1 import shutil
2 from collections.abc import Callable
3 from pathlib import Path
4 from tempfile import TemporaryDirectory
5
6 from ._py_compile import _compile
7 from .common import make_zip_archive
8
9 # These files are removed from the stdlib
10 REMOVED_FILES = (
11 # package management
12 "ensurepip/",
13 "venv/",
14 # build system
15 "lib2to3/",
16 # other platforms
17 "_osx_support.py",
18 "_aix_support.py",
19 # Not supported by browser
20 "curses/",
21 "dbm/",
22 "idlelib/",
23 "tkinter/",
24 "turtle.py",
25 "turtledemo",
26 )
27
28 # These files are unvendored from the stdlib and can be loaded with `loadPackage`
29 UNVENDORED_FILES = (
30 "test/",
31 "distutils/",
32 "sqlite3",
33 "ssl.py",
34 "lzma.py",
35 "_pydecimal.py",
36 "pydoc_data",
37 )
38
39 # We have JS implementations of these modules
40 JS_STUB_FILES = ("webbrowser.py",)
41
42
43 def default_filterfunc(
44 root: Path, verbose: bool = False
45 ) -> Callable[[str, list[str]], set[str]]:
46 """
47 The default filter function used by `create_zipfile`.
48
49 This function filters out several modules that are:
50
51 - not supported in Pyodide due to browser limitations (e.g. `tkinter`)
52 - unvendored from the standard library (e.g. `sqlite3`)
53 """
54
55 def _should_skip(path: Path) -> bool:
56 """Skip common files that are not needed in the zip file."""
57 name = path.name
58
59 if path.is_dir() and name in ("__pycache__", "dist"):
60 return True
61
62 if path.is_dir() and name.endswith((".egg-info", ".dist-info")):
63 return True
64
65 if path.is_file() and name in (
66 "LICENSE",
67 "LICENSE.txt",
68 "setup.py",
69 ".gitignore",
70 ):
71 return True
72
73 if path.is_file() and name.endswith(("pyi", "toml", "cfg", "md", "rst")):
74 return True
75
76 return False
77
78 def filterfunc(path: Path | str, names: list[str]) -> set[str]:
79 filtered_files = {
80 (root / f).resolve() for f in REMOVED_FILES + UNVENDORED_FILES
81 }
82
83 # We have JS implementations of these modules, so we don't need to
84 # include the Python ones. Checking the name of the root directory
85 # is a bit of a hack, but it works...
86 if root.name.startswith("python3"):
87 filtered_files.update({root / f for f in JS_STUB_FILES})
88
89 path = Path(path).resolve()
90
91 if _should_skip(path):
92 return set(names)
93
94 _names = []
95 for name in names:
96 fullpath = path / name
97
98 if _should_skip(fullpath) or fullpath in filtered_files:
99 if verbose:
100 print(f"Skipping {fullpath}")
101
102 _names.append(name)
103
104 return set(_names)
105
106 return filterfunc
107
108
109 def create_zipfile(
110 libdirs: list[Path],
111 output: Path | str = "python",
112 pycompile: bool = False,
113 filterfunc: Callable[[str, list[str]], set[str]] | None = None,
114 compression_level: int = 6,
115 ) -> None:
116 """
117 Bundle Python standard libraries into a zip file.
118
119 The basic idea of this function is similar to the standard library's
120 {ref}`zipfile.PyZipFile` class.
121
122 However, we need some additional functionality for Pyodide. For example:
123
124 - We need to remove some unvendored modules, e.g. `sqlite3`
125 - We need an option to "not" compile the files in the zip file
126
127 hence this function.
128
129 Parameters
130 ----------
131 libdirs
132 List of paths to the directory containing the Python standard library or extra packages.
133
134 output
135 Path to the output zip file. Defaults to python.zip.
136
137 pycompile
138 Whether to compile the .py files into .pyc, by default False
139
140 filterfunc
141 A function that filters the files to be included in the zip file.
142 This function will be passed to {ref}`shutil.copytree` 's ignore argument.
143 By default, Pyodide's default filter function is used.
144
145 compression_level
146 Level of zip compression to apply. 0 means no compression. If a strictly
147 positive integer is provided, ZIP_DEFLATED option is used.
148
149 Returns
150 -------
151 BytesIO
152 A BytesIO object containing the zip file.
153 """
154
155 archive = Path(output)
156
157 with TemporaryDirectory() as temp_dir_str:
158 temp_dir = Path(temp_dir_str)
159
160 for libdir in libdirs:
161 libdir = Path(libdir)
162
163 if filterfunc is None:
164 _filterfunc = default_filterfunc(libdir)
165
166 shutil.copytree(libdir, temp_dir, ignore=_filterfunc, dirs_exist_ok=True)
167
168 make_zip_archive(
169 archive,
170 temp_dir,
171 compression_level=compression_level,
172 )
173
174 if pycompile:
175 _compile(
176 archive,
177 archive,
178 verbose=False,
179 keep=False,
180 compression_level=compression_level,
181 )
182
[end of pyodide-build/pyodide_build/pyzip.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyodide-build/pyodide_build/pyzip.py b/pyodide-build/pyodide_build/pyzip.py
--- a/pyodide-build/pyodide_build/pyzip.py
+++ b/pyodide-build/pyodide_build/pyzip.py
@@ -28,7 +28,6 @@
# These files are unvendored from the stdlib and can be loaded with `loadPackage`
UNVENDORED_FILES = (
"test/",
- "distutils/",
"sqlite3",
"ssl.py",
"lzma.py",
| {"golden_diff": "diff --git a/pyodide-build/pyodide_build/pyzip.py b/pyodide-build/pyodide_build/pyzip.py\n--- a/pyodide-build/pyodide_build/pyzip.py\n+++ b/pyodide-build/pyodide_build/pyzip.py\n@@ -28,7 +28,6 @@\n # These files are unvendored from the stdlib and can be loaded with `loadPackage`\n UNVENDORED_FILES = (\n \"test/\",\n- \"distutils/\",\n \"sqlite3\",\n \"ssl.py\",\n \"lzma.py\",\n", "issue": "Python 3.12 version\n## \ud83d\ude80 Feature\r\n\r\n<!-- A clear and concise description of the feature proposal -->\r\n\r\nHi, I tried [REPL](https://pyodide.org/en/stable/console.html), maybe it uses the latest 0.25.0, and I noticed that the python is 3.11.3.\r\n\r\nPython 3.12 has released for a few months with a lot of new features. Since there is no issue track the progress. So, I created this one.\r\n\r\n### Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\n\r\nN.A.\r\n\r\n### Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\nN.A.\r\n\r\n### Alternatives\r\n\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->\r\n\r\nN.A.\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\n\r\nN.A.\n", "before_files": [{"content": "import shutil\nfrom collections.abc import Callable\nfrom pathlib import Path\nfrom tempfile import TemporaryDirectory\n\nfrom ._py_compile import _compile\nfrom .common import make_zip_archive\n\n# These files are removed from the stdlib\nREMOVED_FILES = (\n # package management\n \"ensurepip/\",\n \"venv/\",\n # build system\n \"lib2to3/\",\n # other platforms\n \"_osx_support.py\",\n \"_aix_support.py\",\n # Not supported by browser\n \"curses/\",\n \"dbm/\",\n \"idlelib/\",\n \"tkinter/\",\n \"turtle.py\",\n \"turtledemo\",\n)\n\n# These files are unvendored from the stdlib and can be loaded with `loadPackage`\nUNVENDORED_FILES = (\n \"test/\",\n \"distutils/\",\n \"sqlite3\",\n \"ssl.py\",\n \"lzma.py\",\n \"_pydecimal.py\",\n \"pydoc_data\",\n)\n\n# We have JS implementations of these modules\nJS_STUB_FILES = (\"webbrowser.py\",)\n\n\ndef default_filterfunc(\n root: Path, verbose: bool = False\n) -> Callable[[str, list[str]], set[str]]:\n \"\"\"\n The default filter function used by `create_zipfile`.\n\n This function filters out several modules that are:\n\n - not supported in Pyodide due to browser limitations (e.g. `tkinter`)\n - unvendored from the standard library (e.g. `sqlite3`)\n \"\"\"\n\n def _should_skip(path: Path) -> bool:\n \"\"\"Skip common files that are not needed in the zip file.\"\"\"\n name = path.name\n\n if path.is_dir() and name in (\"__pycache__\", \"dist\"):\n return True\n\n if path.is_dir() and name.endswith((\".egg-info\", \".dist-info\")):\n return True\n\n if path.is_file() and name in (\n \"LICENSE\",\n \"LICENSE.txt\",\n \"setup.py\",\n \".gitignore\",\n ):\n return True\n\n if path.is_file() and name.endswith((\"pyi\", \"toml\", \"cfg\", \"md\", \"rst\")):\n return True\n\n return False\n\n def filterfunc(path: Path | str, names: list[str]) -> set[str]:\n filtered_files = {\n (root / f).resolve() for f in REMOVED_FILES + UNVENDORED_FILES\n }\n\n # We have JS implementations of these modules, so we don't need to\n # include the Python ones. 
Checking the name of the root directory\n # is a bit of a hack, but it works...\n if root.name.startswith(\"python3\"):\n filtered_files.update({root / f for f in JS_STUB_FILES})\n\n path = Path(path).resolve()\n\n if _should_skip(path):\n return set(names)\n\n _names = []\n for name in names:\n fullpath = path / name\n\n if _should_skip(fullpath) or fullpath in filtered_files:\n if verbose:\n print(f\"Skipping {fullpath}\")\n\n _names.append(name)\n\n return set(_names)\n\n return filterfunc\n\n\ndef create_zipfile(\n libdirs: list[Path],\n output: Path | str = \"python\",\n pycompile: bool = False,\n filterfunc: Callable[[str, list[str]], set[str]] | None = None,\n compression_level: int = 6,\n) -> None:\n \"\"\"\n Bundle Python standard libraries into a zip file.\n\n The basic idea of this function is similar to the standard library's\n {ref}`zipfile.PyZipFile` class.\n\n However, we need some additional functionality for Pyodide. For example:\n\n - We need to remove some unvendored modules, e.g. `sqlite3`\n - We need an option to \"not\" compile the files in the zip file\n\n hence this function.\n\n Parameters\n ----------\n libdirs\n List of paths to the directory containing the Python standard library or extra packages.\n\n output\n Path to the output zip file. Defaults to python.zip.\n\n pycompile\n Whether to compile the .py files into .pyc, by default False\n\n filterfunc\n A function that filters the files to be included in the zip file.\n This function will be passed to {ref}`shutil.copytree` 's ignore argument.\n By default, Pyodide's default filter function is used.\n\n compression_level\n Level of zip compression to apply. 0 means no compression. If a strictly\n positive integer is provided, ZIP_DEFLATED option is used.\n\n Returns\n -------\n BytesIO\n A BytesIO object containing the zip file.\n \"\"\"\n\n archive = Path(output)\n\n with TemporaryDirectory() as temp_dir_str:\n temp_dir = Path(temp_dir_str)\n\n for libdir in libdirs:\n libdir = Path(libdir)\n\n if filterfunc is None:\n _filterfunc = default_filterfunc(libdir)\n\n shutil.copytree(libdir, temp_dir, ignore=_filterfunc, dirs_exist_ok=True)\n\n make_zip_archive(\n archive,\n temp_dir,\n compression_level=compression_level,\n )\n\n if pycompile:\n _compile(\n archive,\n archive,\n verbose=False,\n keep=False,\n compression_level=compression_level,\n )\n", "path": "pyodide-build/pyodide_build/pyzip.py"}]} | 2,384 | 122 |
gh_patches_debug_17884 | rasdani/github-patches | git_diff | deis__deis-1517 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`deis run` generates 500 error
[Integration tests](http://ci.deis.io/view/example-apps/job/test-integration-clojure-ring/47/console) against master found an error in `deis run`:
```
=== appssample Domains
No domains
ok
/home/jenkins/workspace/test-integration-clojure-ring/src/github.com/deis/deis/tests/example-clojure-ring
apps:run echo hello
500 INTERNAL SERVER ERROR
<h1>Server Error (500)</h1>
error at command wait
--- FAIL: TestApps (76.15 seconds)
itutils.go:199: Failed:
exit status 1
FAIL
exit status 1
```
</issue>
<code>
[start of controller/api/tasks.py]
1 """
2 Long-running tasks for the Deis Controller API
3
4 This module orchestrates the real "heavy lifting" of Deis, and as such these
5 functions are decorated to run as asynchronous celery tasks.
6 """
7
8 from __future__ import unicode_literals
9
10 import requests
11 import threading
12
13 from celery import task
14 from django.conf import settings
15
16
17 @task
18 def create_cluster(cluster):
19 cluster._scheduler.setUp()
20
21
22 @task
23 def destroy_cluster(cluster):
24 for app in cluster.app_set.all():
25 app.destroy()
26 cluster._scheduler.tearDown()
27
28
29 @task
30 def deploy_release(app, release):
31 containers = app.container_set.all()
32 threads = []
33 for c in containers:
34 threads.append(threading.Thread(target=c.deploy, args=(release,)))
35 [t.start() for t in threads]
36 [t.join() for t in threads]
37
38
39 @task
40 def import_repository(source, target_repository):
41 """Imports an image from a remote registry into our own private registry"""
42 data = {
43 'src': source,
44 }
45 requests.post(
46 '{}/v1/repositories/{}/tags'.format(settings.REGISTRY_URL,
47 target_repository),
48 data=data,
49 )
50
51
52 @task
53 def start_containers(containers):
54 create_threads = []
55 start_threads = []
56 for c in containers:
57 create_threads.append(threading.Thread(target=c.create))
58 start_threads.append(threading.Thread(target=c.start))
59 [t.start() for t in create_threads]
60 [t.join() for t in create_threads]
61 [t.start() for t in start_threads]
62 [t.join() for t in start_threads]
63
64
65 @task
66 def stop_containers(containers):
67 destroy_threads = []
68 delete_threads = []
69 for c in containers:
70 destroy_threads.append(threading.Thread(target=c.destroy))
71 delete_threads.append(threading.Thread(target=c.delete))
72 [t.start() for t in destroy_threads]
73 [t.join() for t in destroy_threads]
74 [t.start() for t in delete_threads]
75 [t.join() for t in delete_threads]
76
77
78 @task
79 def run_command(c, command):
80 release = c.release
81 version = release.version
82 image = release.image
83 try:
84 # pull the image first
85 rc, pull_output = c.run("docker pull {image}".format(**locals()))
86 if rc != 0:
87 raise EnvironmentError('Could not pull image: {pull_image}'.format(**locals()))
88 # run the command
89 docker_args = ' '.join(['--entrypoint=/bin/sh',
90 '-a', 'stdout', '-a', 'stderr', '--rm', image])
91 escaped_command = command.replace("'", "'\\''")
92 command = r"docker run {docker_args} -c \'{escaped_command}\'".format(**locals())
93 return c.run(command)
94 finally:
95 c.delete()
96
[end of controller/api/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/controller/api/tasks.py b/controller/api/tasks.py
--- a/controller/api/tasks.py
+++ b/controller/api/tasks.py
@@ -79,12 +79,14 @@
def run_command(c, command):
release = c.release
version = release.version
- image = release.image
+ image = '{}:{}/{}'.format(settings.REGISTRY_HOST,
+ settings.REGISTRY_PORT,
+ release.image)
try:
# pull the image first
rc, pull_output = c.run("docker pull {image}".format(**locals()))
if rc != 0:
- raise EnvironmentError('Could not pull image: {pull_image}'.format(**locals()))
+ raise EnvironmentError('Could not pull image: {image}'.format(**locals()))
# run the command
docker_args = ' '.join(['--entrypoint=/bin/sh',
'-a', 'stdout', '-a', 'stderr', '--rm', image])
| {"golden_diff": "diff --git a/controller/api/tasks.py b/controller/api/tasks.py\n--- a/controller/api/tasks.py\n+++ b/controller/api/tasks.py\n@@ -79,12 +79,14 @@\n def run_command(c, command):\n release = c.release\n version = release.version\n- image = release.image\n+ image = '{}:{}/{}'.format(settings.REGISTRY_HOST,\n+ settings.REGISTRY_PORT,\n+ release.image)\n try:\n # pull the image first\n rc, pull_output = c.run(\"docker pull {image}\".format(**locals()))\n if rc != 0:\n- raise EnvironmentError('Could not pull image: {pull_image}'.format(**locals()))\n+ raise EnvironmentError('Could not pull image: {image}'.format(**locals()))\n # run the command\n docker_args = ' '.join(['--entrypoint=/bin/sh',\n '-a', 'stdout', '-a', 'stderr', '--rm', image])\n", "issue": "`deis run` generates 500 error\n[Integration tests](http://ci.deis.io/view/example-apps/job/test-integration-clojure-ring/47/console) against master found an error in `deis run`:\n\n```\n=== appssample Domains\nNo domains\n\n\nok\n/home/jenkins/workspace/test-integration-clojure-ring/src/github.com/deis/deis/tests/example-clojure-ring\napps:run echo hello\n\n500 INTERNAL SERVER ERROR\n<h1>Server Error (500)</h1>\n\nerror at command wait\n--- FAIL: TestApps (76.15 seconds)\n itutils.go:199: Failed:\n exit status 1\nFAIL\nexit status 1\n```\n\n", "before_files": [{"content": "\"\"\"\nLong-running tasks for the Deis Controller API\n\nThis module orchestrates the real \"heavy lifting\" of Deis, and as such these\nfunctions are decorated to run as asynchronous celery tasks.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport requests\nimport threading\n\nfrom celery import task\nfrom django.conf import settings\n\n\n@task\ndef create_cluster(cluster):\n cluster._scheduler.setUp()\n\n\n@task\ndef destroy_cluster(cluster):\n for app in cluster.app_set.all():\n app.destroy()\n cluster._scheduler.tearDown()\n\n\n@task\ndef deploy_release(app, release):\n containers = app.container_set.all()\n threads = []\n for c in containers:\n threads.append(threading.Thread(target=c.deploy, args=(release,)))\n [t.start() for t in threads]\n [t.join() for t in threads]\n\n\n@task\ndef import_repository(source, target_repository):\n \"\"\"Imports an image from a remote registry into our own private registry\"\"\"\n data = {\n 'src': source,\n }\n requests.post(\n '{}/v1/repositories/{}/tags'.format(settings.REGISTRY_URL,\n target_repository),\n data=data,\n )\n\n\n@task\ndef start_containers(containers):\n create_threads = []\n start_threads = []\n for c in containers:\n create_threads.append(threading.Thread(target=c.create))\n start_threads.append(threading.Thread(target=c.start))\n [t.start() for t in create_threads]\n [t.join() for t in create_threads]\n [t.start() for t in start_threads]\n [t.join() for t in start_threads]\n\n\n@task\ndef stop_containers(containers):\n destroy_threads = []\n delete_threads = []\n for c in containers:\n destroy_threads.append(threading.Thread(target=c.destroy))\n delete_threads.append(threading.Thread(target=c.delete))\n [t.start() for t in destroy_threads]\n [t.join() for t in destroy_threads]\n [t.start() for t in delete_threads]\n [t.join() for t in delete_threads]\n\n\n@task\ndef run_command(c, command):\n release = c.release\n version = release.version\n image = release.image\n try:\n # pull the image first\n rc, pull_output = c.run(\"docker pull {image}\".format(**locals()))\n if rc != 0:\n raise EnvironmentError('Could not pull image: {pull_image}'.format(**locals()))\n # run the command\n 
docker_args = ' '.join(['--entrypoint=/bin/sh',\n '-a', 'stdout', '-a', 'stderr', '--rm', image])\n escaped_command = command.replace(\"'\", \"'\\\\''\")\n command = r\"docker run {docker_args} -c \\'{escaped_command}\\'\".format(**locals())\n return c.run(command)\n finally:\n c.delete()\n", "path": "controller/api/tasks.py"}]} | 1,485 | 205 |
gh_patches_debug_38869 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3133 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement 'Shares'
## Issues
- [x] https://github.com/centerofci/mathesar/issues/3033
- [x] https://github.com/centerofci/mathesar/issues/3034
- [x] https://github.com/centerofci/mathesar/issues/3035
- [x] https://github.com/centerofci/mathesar/issues/3036
## Tasks:
- [ ] Add regenerate slug endpoints
### https://github.com/centerofci/mathesar/pull/3093#pullrequestreview-1546069582
- [ ] Address the following in shared table consumer page
- [ ] Disable re-reordering of columns
- [ ] Don't show the icon hyperlink to the record page within the PK cell
- [ ] Remove the following entries in the cell context menu:
- "Set to NULL"
- "Go to Record Page"
- "Go to Linked Record" (shown only for FK columns)
- [ ] Remove the "Go to Record Page" entry from the row header context menu
- [ ] Disable record selector in filtering for FK columns
- [ ] Come up with a better term for 'ShareConsumer'. Some suggestions:
- ShareAccessInfo
- SharedLink
- ConsumableShare
## Related:
* [Product spec](https://wiki.mathesar.org/en/product/specs/publicly-shareable-links)
</issue>
<code>
[start of mathesar/api/ui/viewsets/shares.py]
1 from rest_framework import viewsets
2 from rest_access_policy import AccessViewSetMixin
3
4 from mathesar.api.pagination import DefaultLimitOffsetPagination
5 from mathesar.api.ui.serializers.shares import SharedTableSerializer, SharedQuerySerializer
6 from mathesar.api.ui.permissions.shares import SharedTableAccessPolicy, SharedQueryAccessPolicy
7 from mathesar.models.shares import SharedTable, SharedQuery
8
9
10 class SharedTableViewSet(AccessViewSetMixin, viewsets.ModelViewSet):
11 pagination_class = DefaultLimitOffsetPagination
12 serializer_class = SharedTableSerializer
13 access_policy = SharedTableAccessPolicy
14
15 def get_queryset(self):
16 return SharedTable.objects.filter(table_id=self.kwargs['table_pk']).order_by('-created_at')
17
18 def perform_create(self, serializer):
19 serializer.save(table_id=self.kwargs['table_pk'])
20
21
22 class SharedQueryViewSet(AccessViewSetMixin, viewsets.ModelViewSet):
23 pagination_class = DefaultLimitOffsetPagination
24 serializer_class = SharedQuerySerializer
25 access_policy = SharedQueryAccessPolicy
26
27 def get_queryset(self):
28 return SharedQuery.objects.filter(query_id=self.kwargs['query_pk']).order_by('-created_at')
29
30 def perform_create(self, serializer):
31 serializer.save(query_id=self.kwargs['query_pk'])
32
[end of mathesar/api/ui/viewsets/shares.py]
[start of mathesar/api/ui/permissions/shares.py]
1 from rest_access_policy import AccessPolicy
2
3 from mathesar.api.utils import get_query_or_404
4 from mathesar.api.permission_utils import QueryAccessInspector
5
6
7 class SharedTableAccessPolicy(AccessPolicy):
8 statements = [
9 {
10 'action': ['list', 'retrieve'],
11 'principal': 'authenticated',
12 'effect': 'allow',
13 'condition_expression': 'is_atleast_viewer_nested_table_resource'
14 },
15 {
16 'action': ['create', 'destroy', 'update', 'partial_update'],
17 'principal': 'authenticated',
18 'effect': 'allow',
19 'condition_expression': 'is_atleast_editor_nested_table_resource'
20 },
21 ]
22
23
24 class SharedQueryAccessPolicy(AccessPolicy):
25 statements = [
26 {
27 'action': ['list', 'retrieve'],
28 'principal': 'authenticated',
29 'effect': 'allow',
30 'condition_expression': 'is_atleast_query_viewer'
31 },
32 {
33 'action': ['create', 'destroy', 'update', 'partial_update'],
34 'principal': 'authenticated',
35 'effect': 'allow',
36 'condition_expression': 'is_atleast_query_editor'
37 },
38 ]
39
40 def is_atleast_query_viewer(self, request, view, action):
41 query = get_query_or_404(view.kwargs['query_pk'])
42 return QueryAccessInspector(request.user, query).is_atleast_viewer()
43
44 def is_atleast_query_editor(self, request, view, action):
45 query = get_query_or_404(view.kwargs['query_pk'])
46 return QueryAccessInspector(request.user, query).is_atleast_editor()
47
[end of mathesar/api/ui/permissions/shares.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mathesar/api/ui/permissions/shares.py b/mathesar/api/ui/permissions/shares.py
--- a/mathesar/api/ui/permissions/shares.py
+++ b/mathesar/api/ui/permissions/shares.py
@@ -13,7 +13,7 @@
'condition_expression': 'is_atleast_viewer_nested_table_resource'
},
{
- 'action': ['create', 'destroy', 'update', 'partial_update'],
+ 'action': ['create', 'destroy', 'update', 'partial_update', 'regenerate'],
'principal': 'authenticated',
'effect': 'allow',
'condition_expression': 'is_atleast_editor_nested_table_resource'
@@ -30,7 +30,7 @@
'condition_expression': 'is_atleast_query_viewer'
},
{
- 'action': ['create', 'destroy', 'update', 'partial_update'],
+ 'action': ['create', 'destroy', 'update', 'partial_update', 'regenerate'],
'principal': 'authenticated',
'effect': 'allow',
'condition_expression': 'is_atleast_query_editor'
diff --git a/mathesar/api/ui/viewsets/shares.py b/mathesar/api/ui/viewsets/shares.py
--- a/mathesar/api/ui/viewsets/shares.py
+++ b/mathesar/api/ui/viewsets/shares.py
@@ -1,5 +1,8 @@
+import uuid
from rest_framework import viewsets
from rest_access_policy import AccessViewSetMixin
+from rest_framework.decorators import action
+from rest_framework.response import Response
from mathesar.api.pagination import DefaultLimitOffsetPagination
from mathesar.api.ui.serializers.shares import SharedTableSerializer, SharedQuerySerializer
@@ -7,7 +10,17 @@
from mathesar.models.shares import SharedTable, SharedQuery
-class SharedTableViewSet(AccessViewSetMixin, viewsets.ModelViewSet):
+class RegenerateSlugMixin(viewsets.GenericViewSet):
+ @action(methods=['post'], detail=True)
+ def regenerate(self, *args, **kwargs):
+ share = self.get_object()
+ share.slug = uuid.uuid4()
+ share.save()
+ serializer = self.get_serializer(share)
+ return Response(serializer.data)
+
+
+class SharedTableViewSet(AccessViewSetMixin, viewsets.ModelViewSet, RegenerateSlugMixin):
pagination_class = DefaultLimitOffsetPagination
serializer_class = SharedTableSerializer
access_policy = SharedTableAccessPolicy
@@ -19,7 +32,7 @@
serializer.save(table_id=self.kwargs['table_pk'])
-class SharedQueryViewSet(AccessViewSetMixin, viewsets.ModelViewSet):
+class SharedQueryViewSet(AccessViewSetMixin, viewsets.ModelViewSet, RegenerateSlugMixin):
pagination_class = DefaultLimitOffsetPagination
serializer_class = SharedQuerySerializer
access_policy = SharedQueryAccessPolicy
| {"golden_diff": "diff --git a/mathesar/api/ui/permissions/shares.py b/mathesar/api/ui/permissions/shares.py\n--- a/mathesar/api/ui/permissions/shares.py\n+++ b/mathesar/api/ui/permissions/shares.py\n@@ -13,7 +13,7 @@\n 'condition_expression': 'is_atleast_viewer_nested_table_resource'\n },\n {\n- 'action': ['create', 'destroy', 'update', 'partial_update'],\n+ 'action': ['create', 'destroy', 'update', 'partial_update', 'regenerate'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_editor_nested_table_resource'\n@@ -30,7 +30,7 @@\n 'condition_expression': 'is_atleast_query_viewer'\n },\n {\n- 'action': ['create', 'destroy', 'update', 'partial_update'],\n+ 'action': ['create', 'destroy', 'update', 'partial_update', 'regenerate'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_editor'\ndiff --git a/mathesar/api/ui/viewsets/shares.py b/mathesar/api/ui/viewsets/shares.py\n--- a/mathesar/api/ui/viewsets/shares.py\n+++ b/mathesar/api/ui/viewsets/shares.py\n@@ -1,5 +1,8 @@\n+import uuid\n from rest_framework import viewsets\n from rest_access_policy import AccessViewSetMixin\n+from rest_framework.decorators import action\n+from rest_framework.response import Response\n \n from mathesar.api.pagination import DefaultLimitOffsetPagination\n from mathesar.api.ui.serializers.shares import SharedTableSerializer, SharedQuerySerializer\n@@ -7,7 +10,17 @@\n from mathesar.models.shares import SharedTable, SharedQuery\n \n \n-class SharedTableViewSet(AccessViewSetMixin, viewsets.ModelViewSet):\n+class RegenerateSlugMixin(viewsets.GenericViewSet):\n+ @action(methods=['post'], detail=True)\n+ def regenerate(self, *args, **kwargs):\n+ share = self.get_object()\n+ share.slug = uuid.uuid4()\n+ share.save()\n+ serializer = self.get_serializer(share)\n+ return Response(serializer.data)\n+\n+\n+class SharedTableViewSet(AccessViewSetMixin, viewsets.ModelViewSet, RegenerateSlugMixin):\n pagination_class = DefaultLimitOffsetPagination\n serializer_class = SharedTableSerializer\n access_policy = SharedTableAccessPolicy\n@@ -19,7 +32,7 @@\n serializer.save(table_id=self.kwargs['table_pk'])\n \n \n-class SharedQueryViewSet(AccessViewSetMixin, viewsets.ModelViewSet):\n+class SharedQueryViewSet(AccessViewSetMixin, viewsets.ModelViewSet, RegenerateSlugMixin):\n pagination_class = DefaultLimitOffsetPagination\n serializer_class = SharedQuerySerializer\n access_policy = SharedQueryAccessPolicy\n", "issue": "Implement 'Shares'\n## Issues\r\n- [x] https://github.com/centerofci/mathesar/issues/3033\r\n- [x] https://github.com/centerofci/mathesar/issues/3034\r\n- [x] https://github.com/centerofci/mathesar/issues/3035\r\n- [x] https://github.com/centerofci/mathesar/issues/3036\r\n\r\n## Tasks:\r\n- [ ] Add regenerate slug endpoints\r\n\r\n### https://github.com/centerofci/mathesar/pull/3093#pullrequestreview-1546069582\r\n- [ ] Address the following in shared table consumer page\r\n - [ ] Disable re-reordering of columns\r\n - [ ] Don't show the icon hyperlink to the record page within the PK cell\r\n - [ ] Remove the following entries in the cell context menu:\r\n - \"Set to NULL\"\r\n - \"Go to Record Page\"\r\n - \"Go to Linked Record\" (shown only for FK columns)\r\n - [ ] Remove the \"Go to Record Page\" entry from the row header context menu\r\n - [ ] Disable record selector in filtering for FK columns\r\n- [ ] Come up with a better term for 'ShareConsumer'. 
Some suggestions:\r\n - ShareAccessInfo\r\n - SharedLink\r\n - ConsumableShare\r\n\r\n## Related:\r\n* [Product spec](https://wiki.mathesar.org/en/product/specs/publicly-shareable-links)\n", "before_files": [{"content": "from rest_framework import viewsets\nfrom rest_access_policy import AccessViewSetMixin\n\nfrom mathesar.api.pagination import DefaultLimitOffsetPagination\nfrom mathesar.api.ui.serializers.shares import SharedTableSerializer, SharedQuerySerializer\nfrom mathesar.api.ui.permissions.shares import SharedTableAccessPolicy, SharedQueryAccessPolicy\nfrom mathesar.models.shares import SharedTable, SharedQuery\n\n\nclass SharedTableViewSet(AccessViewSetMixin, viewsets.ModelViewSet):\n pagination_class = DefaultLimitOffsetPagination\n serializer_class = SharedTableSerializer\n access_policy = SharedTableAccessPolicy\n\n def get_queryset(self):\n return SharedTable.objects.filter(table_id=self.kwargs['table_pk']).order_by('-created_at')\n\n def perform_create(self, serializer):\n serializer.save(table_id=self.kwargs['table_pk'])\n\n\nclass SharedQueryViewSet(AccessViewSetMixin, viewsets.ModelViewSet):\n pagination_class = DefaultLimitOffsetPagination\n serializer_class = SharedQuerySerializer\n access_policy = SharedQueryAccessPolicy\n\n def get_queryset(self):\n return SharedQuery.objects.filter(query_id=self.kwargs['query_pk']).order_by('-created_at')\n\n def perform_create(self, serializer):\n serializer.save(query_id=self.kwargs['query_pk'])\n", "path": "mathesar/api/ui/viewsets/shares.py"}, {"content": "from rest_access_policy import AccessPolicy\n\nfrom mathesar.api.utils import get_query_or_404\nfrom mathesar.api.permission_utils import QueryAccessInspector\n\n\nclass SharedTableAccessPolicy(AccessPolicy):\n statements = [\n {\n 'action': ['list', 'retrieve'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_viewer_nested_table_resource'\n },\n {\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_editor_nested_table_resource'\n },\n ]\n\n\nclass SharedQueryAccessPolicy(AccessPolicy):\n statements = [\n {\n 'action': ['list', 'retrieve'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_viewer'\n },\n {\n 'action': ['create', 'destroy', 'update', 'partial_update'],\n 'principal': 'authenticated',\n 'effect': 'allow',\n 'condition_expression': 'is_atleast_query_editor'\n },\n ]\n\n def is_atleast_query_viewer(self, request, view, action):\n query = get_query_or_404(view.kwargs['query_pk'])\n return QueryAccessInspector(request.user, query).is_atleast_viewer()\n\n def is_atleast_query_editor(self, request, view, action):\n query = get_query_or_404(view.kwargs['query_pk'])\n return QueryAccessInspector(request.user, query).is_atleast_editor()\n", "path": "mathesar/api/ui/permissions/shares.py"}]} | 1,617 | 609 |
gh_patches_debug_39187 | rasdani/github-patches | git_diff | deepset-ai__haystack-7205 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docstrings - `haystack.components.samplers`
</issue>
<code>
[start of haystack/components/samplers/top_p.py]
1 import logging
2 from typing import List, Optional
3
4 from haystack import ComponentError, Document, component
5 from haystack.lazy_imports import LazyImport
6
7 logger = logging.getLogger(__name__)
8
9
10 with LazyImport(message="Run 'pip install \"torch>=1.13\"'") as torch_import:
11 import torch
12
13
14 @component
15 class TopPSampler:
16 """
17 Implements top-p (nucleus) sampling for document filtering based on cumulative probability scores.
18
19 This class provides functionality to filter a list of documents by selecting those whose scores fall
20 within the top 'p' percent of the cumulative distribution. The method is useful for focusing on high-probability
21 documents while filtering out less relevant ones based on their assigned scores.
22
23 Usage example:
24
25 ```python
26 from haystack import Document
27 from haystack.components.samplers import TopPSampler
28
29 sampler = TopPSampler(top_p=0.95, score_field="similarity_score")
30 docs = [
31 Document(text="Berlin", meta={"similarity_score": -10.6}),
32 Document(text="Belgrade", meta={"similarity_score": -8.9}),
33 Document(text="Sarajevo", meta={"similarity_score": -4.6}),
34 ]
35 output = sampler.run(documents=docs)
36 docs = output["documents"]
37 assert len(docs) == 1
38 assert docs[0].content == "Sarajevo"
39 ```
40 """
41
42 def __init__(self, top_p: float = 1.0, score_field: Optional[str] = None):
43 """
44 Creates an instance of TopPSampler.
45
46 :param top_p: Float between 0 and 1 representing the cumulative probability threshold for document selection.
47 Defaults to 1.0, indicating no filtering (all documents are retained).
48 :param score_field: Name of the field in each document's metadata that contains the score. If None, the default
49 document score field is used.
50 """
51 torch_import.check()
52
53 self.top_p = top_p
54 self.score_field = score_field
55
56 @component.output_types(documents=List[Document])
57 def run(self, documents: List[Document], top_p: Optional[float] = None):
58 """
59 Filters documents using top-p sampling based on their scores.
60
61 :param documents: List of Document objects to be filtered.
62 :param top_p: Optional. A float to override the cumulative probability threshold set during initialization.
63 If None, the class's top_p value is used.
64 :return: A dictionary with a key 'documents' containing the list of filtered Document objects.
65
66 This method applies top-p sampling to filter out documents. It selects those documents whose similarity scores
67 are within the top 'p' percent of the cumulative distribution, based on the specified or default top_p value.
68
69 If the specified top_p results in no documents being selected (especially in cases of a low top_p value), the
70 method defaults to returning the document with the highest similarity score.
71
72 :raises ValueError: If the top_p value is not within the range [0, 1].
73 """
74 if not documents:
75 return {"documents": []}
76
77 top_p = top_p or self.top_p or 1.0 # default to 1.0 if both are None
78
79 if not 0 <= top_p <= 1:
80 raise ValueError(f"top_p must be between 0 and 1. Got {top_p}.")
81
82 similarity_scores = torch.tensor(self._collect_scores(documents), dtype=torch.float32)
83
84 # Apply softmax normalization to the similarity scores
85 probs = torch.nn.functional.softmax(similarity_scores, dim=-1)
86
87 # Sort the probabilities and calculate their cumulative sum
88 sorted_probs, sorted_indices = torch.sort(probs, descending=True)
89 cumulative_probs = torch.cumsum(sorted_probs, dim=-1)
90
91 # Check if the cumulative probabilities are close to top_p with a 1e-6 tolerance
92 close_to_top_p = torch.isclose(cumulative_probs, torch.tensor(top_p, device=cumulative_probs.device), atol=1e-6)
93
94 # Combine the close_to_top_p with original condition using logical OR
95 condition = (cumulative_probs <= top_p) | close_to_top_p
96
97 # Find the indices with cumulative probabilities that exceed top_p
98 top_p_indices = torch.where(torch.BoolTensor(condition))[0]
99
100 # Map the selected indices back to their original indices
101 original_indices = sorted_indices[top_p_indices]
102 selected_docs = [documents[i.item()] for i in original_indices]
103
104 # If low p resulted in no documents being selected, then
105 # return at least one document
106 if not selected_docs:
107 logger.warning(
108 "Top-p sampling with p=%s resulted in no documents being selected. "
109 "Returning the document with the highest similarity score.",
110 top_p,
111 )
112 highest_prob_indices = torch.argsort(probs, descending=True)
113 selected_docs = [documents[int(highest_prob_indices[0].item())]]
114
115 return {"documents": selected_docs}
116
117 def _collect_scores(self, documents: List[Document]) -> List[float]:
118 """
119 Collect the scores from the documents' metadata.
120 :param documents: List of Documents.
121 :return: List of scores.
122 """
123 if self.score_field:
124 missing_scores_docs = [d for d in documents if self.score_field not in d.meta]
125 if missing_scores_docs:
126 missing_scores_docs_ids = [d.id for d in missing_scores_docs if d.id]
127 raise ComponentError(
128 f"Score field '{self.score_field}' not found in metadata of documents "
129 f"with IDs: {missing_scores_docs_ids}."
130 f"Make sure that all documents have a score field '{self.score_field}' in their metadata."
131 )
132 return [d.meta[self.score_field] for d in documents]
133 else:
134 missing_scores_docs = [d for d in documents if d.score is None]
135 if missing_scores_docs:
136 missing_scores_docs_ids = [d.id for d in missing_scores_docs if d.id]
137 raise ComponentError(
138 f"Ensure all documents have a valid score value. These docs {missing_scores_docs_ids} don't."
139 )
140 return [d.score for d in documents] # type: ignore ## because Document score is Optional
141
[end of haystack/components/samplers/top_p.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/haystack/components/samplers/top_p.py b/haystack/components/samplers/top_p.py
--- a/haystack/components/samplers/top_p.py
+++ b/haystack/components/samplers/top_p.py
@@ -16,8 +16,8 @@
"""
Implements top-p (nucleus) sampling for document filtering based on cumulative probability scores.
- This class provides functionality to filter a list of documents by selecting those whose scores fall
- within the top 'p' percent of the cumulative distribution. The method is useful for focusing on high-probability
+ This component provides functionality to filter a list of documents by selecting those whose scores fall
+ within the top 'p' percent of the cumulative distribution. It is useful for focusing on high-probability
documents while filtering out less relevant ones based on their assigned scores.
Usage example:
@@ -44,9 +44,9 @@
Creates an instance of TopPSampler.
:param top_p: Float between 0 and 1 representing the cumulative probability threshold for document selection.
- Defaults to 1.0, indicating no filtering (all documents are retained).
+ A value of 1.0 indicates no filtering (all documents are retained).
:param score_field: Name of the field in each document's metadata that contains the score. If None, the default
- document score field is used.
+ document score field is used.
"""
torch_import.check()
@@ -57,17 +57,14 @@
def run(self, documents: List[Document], top_p: Optional[float] = None):
"""
Filters documents using top-p sampling based on their scores.
+ If the specified top_p results in no documents being selected (especially in cases of a low top_p value), the
+ method returns the document with the highest similarity score.
:param documents: List of Document objects to be filtered.
:param top_p: Optional. A float to override the cumulative probability threshold set during initialization.
- If None, the class's top_p value is used.
- :return: A dictionary with a key 'documents' containing the list of filtered Document objects.
-
- This method applies top-p sampling to filter out documents. It selects those documents whose similarity scores
- are within the top 'p' percent of the cumulative distribution, based on the specified or default top_p value.
- If the specified top_p results in no documents being selected (especially in cases of a low top_p value), the
- method defaults to returning the document with the highest similarity score.
+ :returns: A dictionary with the following key:
+ - `documents`: List of Document objects that have been selected based on the top-p sampling.
:raises ValueError: If the top_p value is not within the range [0, 1].
"""
| {"golden_diff": "diff --git a/haystack/components/samplers/top_p.py b/haystack/components/samplers/top_p.py\n--- a/haystack/components/samplers/top_p.py\n+++ b/haystack/components/samplers/top_p.py\n@@ -16,8 +16,8 @@\n \"\"\"\n Implements top-p (nucleus) sampling for document filtering based on cumulative probability scores.\n \n- This class provides functionality to filter a list of documents by selecting those whose scores fall\n- within the top 'p' percent of the cumulative distribution. The method is useful for focusing on high-probability\n+ This component provides functionality to filter a list of documents by selecting those whose scores fall\n+ within the top 'p' percent of the cumulative distribution. It is useful for focusing on high-probability\n documents while filtering out less relevant ones based on their assigned scores.\n \n Usage example:\n@@ -44,9 +44,9 @@\n Creates an instance of TopPSampler.\n \n :param top_p: Float between 0 and 1 representing the cumulative probability threshold for document selection.\n- Defaults to 1.0, indicating no filtering (all documents are retained).\n+ A value of 1.0 indicates no filtering (all documents are retained).\n :param score_field: Name of the field in each document's metadata that contains the score. If None, the default\n- document score field is used.\n+ document score field is used.\n \"\"\"\n torch_import.check()\n \n@@ -57,17 +57,14 @@\n def run(self, documents: List[Document], top_p: Optional[float] = None):\n \"\"\"\n Filters documents using top-p sampling based on their scores.\n+ If the specified top_p results in no documents being selected (especially in cases of a low top_p value), the\n+ method returns the document with the highest similarity score.\n \n :param documents: List of Document objects to be filtered.\n :param top_p: Optional. A float to override the cumulative probability threshold set during initialization.\n- If None, the class's top_p value is used.\n- :return: A dictionary with a key 'documents' containing the list of filtered Document objects.\n-\n- This method applies top-p sampling to filter out documents. It selects those documents whose similarity scores\n- are within the top 'p' percent of the cumulative distribution, based on the specified or default top_p value.\n \n- If the specified top_p results in no documents being selected (especially in cases of a low top_p value), the\n- method defaults to returning the document with the highest similarity score.\n+ :returns: A dictionary with the following key:\n+ - `documents`: List of Document objects that have been selected based on the top-p sampling.\n \n :raises ValueError: If the top_p value is not within the range [0, 1].\n \"\"\"\n", "issue": "Docstrings - `haystack.components.samplers`\n\n", "before_files": [{"content": "import logging\nfrom typing import List, Optional\n\nfrom haystack import ComponentError, Document, component\nfrom haystack.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\n\nwith LazyImport(message=\"Run 'pip install \\\"torch>=1.13\\\"'\") as torch_import:\n import torch\n\n\n@component\nclass TopPSampler:\n \"\"\"\n Implements top-p (nucleus) sampling for document filtering based on cumulative probability scores.\n\n This class provides functionality to filter a list of documents by selecting those whose scores fall\n within the top 'p' percent of the cumulative distribution. 
The method is useful for focusing on high-probability\n documents while filtering out less relevant ones based on their assigned scores.\n\n Usage example:\n\n ```python\n from haystack import Document\n from haystack.components.samplers import TopPSampler\n\n sampler = TopPSampler(top_p=0.95, score_field=\"similarity_score\")\n docs = [\n Document(text=\"Berlin\", meta={\"similarity_score\": -10.6}),\n Document(text=\"Belgrade\", meta={\"similarity_score\": -8.9}),\n Document(text=\"Sarajevo\", meta={\"similarity_score\": -4.6}),\n ]\n output = sampler.run(documents=docs)\n docs = output[\"documents\"]\n assert len(docs) == 1\n assert docs[0].content == \"Sarajevo\"\n ```\n \"\"\"\n\n def __init__(self, top_p: float = 1.0, score_field: Optional[str] = None):\n \"\"\"\n Creates an instance of TopPSampler.\n\n :param top_p: Float between 0 and 1 representing the cumulative probability threshold for document selection.\n Defaults to 1.0, indicating no filtering (all documents are retained).\n :param score_field: Name of the field in each document's metadata that contains the score. If None, the default\n document score field is used.\n \"\"\"\n torch_import.check()\n\n self.top_p = top_p\n self.score_field = score_field\n\n @component.output_types(documents=List[Document])\n def run(self, documents: List[Document], top_p: Optional[float] = None):\n \"\"\"\n Filters documents using top-p sampling based on their scores.\n\n :param documents: List of Document objects to be filtered.\n :param top_p: Optional. A float to override the cumulative probability threshold set during initialization.\n If None, the class's top_p value is used.\n :return: A dictionary with a key 'documents' containing the list of filtered Document objects.\n\n This method applies top-p sampling to filter out documents. It selects those documents whose similarity scores\n are within the top 'p' percent of the cumulative distribution, based on the specified or default top_p value.\n\n If the specified top_p results in no documents being selected (especially in cases of a low top_p value), the\n method defaults to returning the document with the highest similarity score.\n\n :raises ValueError: If the top_p value is not within the range [0, 1].\n \"\"\"\n if not documents:\n return {\"documents\": []}\n\n top_p = top_p or self.top_p or 1.0 # default to 1.0 if both are None\n\n if not 0 <= top_p <= 1:\n raise ValueError(f\"top_p must be between 0 and 1. 
Got {top_p}.\")\n\n similarity_scores = torch.tensor(self._collect_scores(documents), dtype=torch.float32)\n\n # Apply softmax normalization to the similarity scores\n probs = torch.nn.functional.softmax(similarity_scores, dim=-1)\n\n # Sort the probabilities and calculate their cumulative sum\n sorted_probs, sorted_indices = torch.sort(probs, descending=True)\n cumulative_probs = torch.cumsum(sorted_probs, dim=-1)\n\n # Check if the cumulative probabilities are close to top_p with a 1e-6 tolerance\n close_to_top_p = torch.isclose(cumulative_probs, torch.tensor(top_p, device=cumulative_probs.device), atol=1e-6)\n\n # Combine the close_to_top_p with original condition using logical OR\n condition = (cumulative_probs <= top_p) | close_to_top_p\n\n # Find the indices with cumulative probabilities that exceed top_p\n top_p_indices = torch.where(torch.BoolTensor(condition))[0]\n\n # Map the selected indices back to their original indices\n original_indices = sorted_indices[top_p_indices]\n selected_docs = [documents[i.item()] for i in original_indices]\n\n # If low p resulted in no documents being selected, then\n # return at least one document\n if not selected_docs:\n logger.warning(\n \"Top-p sampling with p=%s resulted in no documents being selected. \"\n \"Returning the document with the highest similarity score.\",\n top_p,\n )\n highest_prob_indices = torch.argsort(probs, descending=True)\n selected_docs = [documents[int(highest_prob_indices[0].item())]]\n\n return {\"documents\": selected_docs}\n\n def _collect_scores(self, documents: List[Document]) -> List[float]:\n \"\"\"\n Collect the scores from the documents' metadata.\n :param documents: List of Documents.\n :return: List of scores.\n \"\"\"\n if self.score_field:\n missing_scores_docs = [d for d in documents if self.score_field not in d.meta]\n if missing_scores_docs:\n missing_scores_docs_ids = [d.id for d in missing_scores_docs if d.id]\n raise ComponentError(\n f\"Score field '{self.score_field}' not found in metadata of documents \"\n f\"with IDs: {missing_scores_docs_ids}.\"\n f\"Make sure that all documents have a score field '{self.score_field}' in their metadata.\"\n )\n return [d.meta[self.score_field] for d in documents]\n else:\n missing_scores_docs = [d for d in documents if d.score is None]\n if missing_scores_docs:\n missing_scores_docs_ids = [d.id for d in missing_scores_docs if d.id]\n raise ComponentError(\n f\"Ensure all documents have a valid score value. These docs {missing_scores_docs_ids} don't.\"\n )\n return [d.score for d in documents] # type: ignore ## because Document score is Optional\n", "path": "haystack/components/samplers/top_p.py"}]} | 2,228 | 624 |
gh_patches_debug_40586 | rasdani/github-patches | git_diff | ansible-collections__community.general-1479 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xfconf not setting return values as stated in documentation
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
The xfconf documentation states that the module will return several values related to the operation (channel, previous_name, property, value, value_type). None of these are ever getting set.
This looks like it might have got missed during the recent refactor in #1322 by @russoz. There is a method named `update_xfconf_output()` in the `XFConfProperty` class that looks like it would be the right thing to use for this, it just isn't called anywhere.
I think I see what needs to be fixed for this and will take a stab at a PR.
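
For illustration, a minimal sketch (not necessarily the exact fix) of the kind of call that would populate the documented return values from one of the state methods, assuming `update_xfconf_output()` is the intended hook:

```python
def state_get(self):
    self.vars.value = self.vars.previous_value
    # surface the documented return values instead of only relying on stdout
    self.update_xfconf_output(value=self.vars.value)
```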
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
plugins/modules/system/xfconf.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.3
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
Not relevant
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Not relevant
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Attempting to get or set any value doesn't set the correct return values. In the case below, you would expect the return to contain a value named `value` with the current value of `/panels/panel-0/plugin-ids`, but it is missing.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: Get current panel layout
xfconf:
channel: xfce4-panel
property: /panels/panel-0/plugin-ids
state: get
register: panel_order
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
Expect to see the values set per the documentation in the return.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
None of the expected values are there in the return. Note, the values do appear in the stdout from running the command, but this is a side effect and not following the docs.
<!--- Paste verbatim command output between quotes -->
```paste below
ok: [localhost] => {
"changed": false,
"force_lang": "C",
"invocation": {
"module_args": {
"channel": "xfce4-panel",
"force_array": false,
"property": "/panels/panel-0/plugin-ids",
"state": "get",
"value": null,
"value_type": null
}
},
"rc": 0,
"stderr": "",
"stderr_lines": [],
"stdout": "Value is an array with 13 items:\n\n1\n2\n3\n4\n15\n5\n6\n7\n8\n9\n10\n11\n12\n",
"stdout_lines": [
"Value is an array with 13 items:",
"",
"1",
"2",
"3",
"4",
"15",
"5",
"6",
"7",
"8",
"9",
"10",
"11",
"12"
]
}
```
</issue>
<code>
[start of plugins/modules/system/xfconf.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3 # (c) 2017, Joseph Benden <[email protected]>
4 #
5 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
6
7 from __future__ import absolute_import, division, print_function
8 __metaclass__ = type
9
10 DOCUMENTATION = '''
11 module: xfconf
12 author:
13 - "Joseph Benden (@jbenden)"
14 - "Alexei Znamensky (@russoz)"
15 short_description: Edit XFCE4 Configurations
16 description:
17 - This module allows for the manipulation of Xfce 4 Configuration via
18 xfconf-query. Please see the xfconf-query(1) man pages for more details.
19 options:
20 channel:
21 description:
22 - A Xfconf preference channel is a top-level tree key, inside of the
23 Xfconf repository that corresponds to the location for which all
24 application properties/keys are stored. See man xfconf-query(1)
25 required: yes
26 type: str
27 property:
28 description:
29 - A Xfce preference key is an element in the Xfconf repository
30 that corresponds to an application preference. See man xfconf-query(1)
31 required: yes
32 type: str
33 value:
34 description:
35 - Preference properties typically have simple values such as strings,
36 integers, or lists of strings and integers. This is ignored if the state
37 is "get". For array mode, use a list of values. See man xfconf-query(1)
38 type: list
39 elements: raw
40 value_type:
41 description:
42 - The type of value being set. This is ignored if the state is "get".
43 For array mode, use a list of types.
44 type: list
45 elements: str
46 choices: [ int, uint, bool, float, double, string ]
47 state:
48 type: str
49 description:
50 - The action to take upon the property/value.
51 choices: [ get, present, absent ]
52 default: "present"
53 force_array:
54 description:
55 - Force array even if only one element
56 type: bool
57 default: 'no'
58 aliases: ['array']
59 version_added: 1.0.0
60 '''
61
62 EXAMPLES = """
63 - name: Change the DPI to "192"
64 xfconf:
65 channel: "xsettings"
66 property: "/Xft/DPI"
67 value_type: "int"
68 value: "192"
69
70 - name: Set workspace names (4)
71 xfconf:
72 channel: xfwm4
73 property: /general/workspace_names
74 value_type: string
75 value: ['Main', 'Work1', 'Work2', 'Tmp']
76
77 - name: Set workspace names (1)
78 xfconf:
79 channel: xfwm4
80 property: /general/workspace_names
81 value_type: string
82 value: ['Main']
83 force_array: yes
84 """
85
86 RETURN = '''
87 channel:
88 description: The channel specified in the module parameters
89 returned: success
90 type: str
91 sample: "xsettings"
92 property:
93 description: The property specified in the module parameters
94 returned: success
95 type: str
96 sample: "/Xft/DPI"
97 value_type:
98 description: The type of the value that was changed
99 returned: success
100 type: str
101 sample: "int"
102 value:
103 description: The value of the preference key after executing the module
104 returned: success
105 type: str
106 sample: "192"
107 previous_value:
108 description: The value of the preference key before executing the module (None for "get" state).
109 returned: success
110 type: str
111 sample: "96"
112 '''
113
114 from ansible_collections.community.general.plugins.module_utils.module_helper import (
115 ModuleHelper, CmdMixin, StateMixin, ArgFormat
116 )
117
118
119 def fix_bool(value):
120 vl = value.lower()
121 return vl if vl in ("true", "false") else value
122
123
124 @ArgFormat.stars_deco(1)
125 def values_fmt(values, value_types):
126 result = []
127 for value, value_type in zip(values, value_types):
128 if value_type == 'bool':
129 value = fix_bool(value)
130 result.append('--type')
131 result.append('{0}'.format(value_type))
132 result.append('--set')
133 result.append('{0}'.format(value))
134 return result
135
136
137 class XFConfException(Exception):
138 pass
139
140
141 class XFConfProperty(CmdMixin, StateMixin, ModuleHelper):
142 module = dict(
143 argument_spec=dict(
144 state=dict(default="present",
145 choices=("present", "get", "absent"),
146 type='str'),
147 channel=dict(required=True, type='str'),
148 property=dict(required=True, type='str'),
149 value_type=dict(required=False, type='list',
150 elements='str', choices=('int', 'uint', 'bool', 'float', 'double', 'string')),
151 value=dict(required=False, type='list', elements='raw'),
152 force_array=dict(default=False, type='bool', aliases=['array']),
153 ),
154 required_if=[('state', 'present', ['value', 'value_type'])],
155 required_together=[('value', 'value_type')],
156 supports_check_mode=True,
157 )
158
159 facts_name = "xfconf"
160 default_state = 'present'
161 command = 'xfconf-query'
162 command_args_formats = dict(
163 channel=dict(fmt=('--channel', '{0}'),),
164 property=dict(fmt=('--property', '{0}'),),
165 is_array=dict(fmt="--force-array", style=ArgFormat.BOOLEAN),
166 reset=dict(fmt="--reset", style=ArgFormat.BOOLEAN),
167 create=dict(fmt="--create", style=ArgFormat.BOOLEAN),
168 values_and_types=dict(fmt=values_fmt)
169 )
170
171 def update_xfconf_output(self, **kwargs):
172 self.update_output(**kwargs)
173 self.update_facts(**kwargs)
174
175 def __init_module__(self):
176 self.does_not = 'Property "{0}" does not exist on channel "{1}".'.format(self.module.params['property'],
177 self.module.params['channel'])
178 self.vars.previous_value = self._get()
179
180 def process_command_output(self, rc, out, err):
181 if err.rstrip() == self.does_not:
182 return None
183 if rc or len(err):
184 raise XFConfException('xfconf-query failed with error (rc={0}): {1}'.format(rc, err))
185
186 result = out.rstrip()
187 if "Value is an array with" in result:
188 result = result.split("\n")
189 result.pop(0)
190 result.pop(0)
191
192 return result
193
194 @property
195 def changed(self):
196 if self.vars.previous_value is None:
197 return self.vars.value is not None
198 elif self.vars.value is None:
199 return self.vars.previous_value is not None
200 else:
201 return set(self.vars.previous_value) != set(self.vars.value)
202
203 def _get(self):
204 return self.run_command(params=('channel', 'property'))
205
206 def state_get(self):
207 self.vars.value = self.vars.previous_value
208
209 def state_absent(self):
210 self.vars.value = None
211 self.run_command(params=('channel', 'property', 'reset'), extra_params={"reset": True})
212
213 def state_present(self):
214 # stringify all values - in the CLI they will all be happy strings anyway
215 # and by doing this here the rest of the code can be agnostic to it
216 self.vars.value = [str(v) for v in self.module.params['value']]
217 value_type = self.module.params['value_type']
218
219 values_len = len(self.vars.value)
220 types_len = len(value_type)
221
222 if types_len == 1:
223 # use one single type for the entire list
224 value_type = value_type * values_len
225 elif types_len != values_len:
226 # or complain if lists' lengths are different
227 raise XFConfException('Number of elements in "value" and "value_type" must be the same')
228
229 # fix boolean values
230 self.vars.value = [fix_bool(v[0]) if v[1] == 'bool' else v[0] for v in zip(self.vars.value, value_type)]
231
232 # calculates if it is an array
233 self.vars.is_array = \
234 bool(self.module.params['force_array']) or \
235 isinstance(self.vars.previous_value, list) or \
236 values_len > 1
237
238 params = ['channel', 'property', 'create']
239 if self.vars.is_array:
240 params.append('is_array')
241 params.append('values_and_types')
242
243 extra_params = dict(values_and_types=(self.vars.value, value_type))
244 extra_params['create'] = True
245 extra_params['is_array'] = self.vars.is_array
246
247 if not self.module.check_mode:
248 self.run_command(params=params, extra_params=extra_params)
249
250 if not self.vars.is_array:
251 self.vars.value = self.vars.value[0]
252
253
254 def main():
255 xfconf = XFConfProperty()
256 xfconf.run()
257
258
259 if __name__ == '__main__':
260 main()
261
[end of plugins/modules/system/xfconf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/modules/system/xfconf.py b/plugins/modules/system/xfconf.py
--- a/plugins/modules/system/xfconf.py
+++ b/plugins/modules/system/xfconf.py
@@ -95,20 +95,28 @@
type: str
sample: "/Xft/DPI"
value_type:
- description: The type of the value that was changed
+ description:
+ - The type of the value that was changed (C(none) for C(get) and C(reset)
+ state). Either a single string value or a list of strings for array
+ types.
returned: success
- type: str
- sample: "int"
+ type: string or list of strings
+ sample: '"int" or ["str", "str", "str"]'
value:
- description: The value of the preference key after executing the module
+ description:
+ - The value of the preference key after executing the module. Either a
+ single string value or a list of strings for array types.
returned: success
- type: str
- sample: "192"
+ type: string or list of strings
+ sample: '"192" or ["orange", "yellow", "violet"]'
previous_value:
- description: The value of the preference key before executing the module (None for "get" state).
+ description:
+ - The value of the preference key before executing the module (C(none) for
+ C(get) state). Either a single string value or a list of strings for array
+ types.
returned: success
- type: str
- sample: "96"
+ type: string or list of strings
+ sample: '"96" or ["red", "blue", "green"]'
'''
from ansible_collections.community.general.plugins.module_utils.module_helper import (
@@ -176,6 +184,9 @@
self.does_not = 'Property "{0}" does not exist on channel "{1}".'.format(self.module.params['property'],
self.module.params['channel'])
self.vars.previous_value = self._get()
+ self.update_xfconf_output(property=self.module.params['property'],
+ channel=self.module.params['channel'],
+ previous_value=None)
def process_command_output(self, rc, out, err):
if err.rstrip() == self.does_not:
@@ -205,10 +216,13 @@
def state_get(self):
self.vars.value = self.vars.previous_value
+ self.update_xfconf_output(value=self.vars.value)
def state_absent(self):
self.vars.value = None
self.run_command(params=('channel', 'property', 'reset'), extra_params={"reset": True})
+ self.update_xfconf_output(previous_value=self.vars.previous_value,
+ value=None)
def state_present(self):
# stringify all values - in the CLI they will all be happy strings anyway
@@ -249,6 +263,11 @@
if not self.vars.is_array:
self.vars.value = self.vars.value[0]
+ value_type = value_type[0]
+
+ self.update_xfconf_output(previous_value=self.vars.previous_value,
+ value=self.vars.value,
+ type=value_type)
def main():
| {"golden_diff": "diff --git a/plugins/modules/system/xfconf.py b/plugins/modules/system/xfconf.py\n--- a/plugins/modules/system/xfconf.py\n+++ b/plugins/modules/system/xfconf.py\n@@ -95,20 +95,28 @@\n type: str\n sample: \"/Xft/DPI\"\n value_type:\n- description: The type of the value that was changed\n+ description:\n+ - The type of the value that was changed (C(none) for C(get) and C(reset)\n+ state). Either a single string value or a list of strings for array\n+ types.\n returned: success\n- type: str\n- sample: \"int\"\n+ type: string or list of strings\n+ sample: '\"int\" or [\"str\", \"str\", \"str\"]'\n value:\n- description: The value of the preference key after executing the module\n+ description:\n+ - The value of the preference key after executing the module. Either a\n+ single string value or a list of strings for array types.\n returned: success\n- type: str\n- sample: \"192\"\n+ type: string or list of strings\n+ sample: '\"192\" or [\"orange\", \"yellow\", \"violet\"]'\n previous_value:\n- description: The value of the preference key before executing the module (None for \"get\" state).\n+ description:\n+ - The value of the preference key before executing the module (C(none) for\n+ C(get) state). Either a single string value or a list of strings for array\n+ types.\n returned: success\n- type: str\n- sample: \"96\"\n+ type: string or list of strings\n+ sample: '\"96\" or [\"red\", \"blue\", \"green\"]'\n '''\n \n from ansible_collections.community.general.plugins.module_utils.module_helper import (\n@@ -176,6 +184,9 @@\n self.does_not = 'Property \"{0}\" does not exist on channel \"{1}\".'.format(self.module.params['property'],\n self.module.params['channel'])\n self.vars.previous_value = self._get()\n+ self.update_xfconf_output(property=self.module.params['property'],\n+ channel=self.module.params['channel'],\n+ previous_value=None)\n \n def process_command_output(self, rc, out, err):\n if err.rstrip() == self.does_not:\n@@ -205,10 +216,13 @@\n \n def state_get(self):\n self.vars.value = self.vars.previous_value\n+ self.update_xfconf_output(value=self.vars.value)\n \n def state_absent(self):\n self.vars.value = None\n self.run_command(params=('channel', 'property', 'reset'), extra_params={\"reset\": True})\n+ self.update_xfconf_output(previous_value=self.vars.previous_value,\n+ value=None)\n \n def state_present(self):\n # stringify all values - in the CLI they will all be happy strings anyway\n@@ -249,6 +263,11 @@\n \n if not self.vars.is_array:\n self.vars.value = self.vars.value[0]\n+ value_type = value_type[0]\n+\n+ self.update_xfconf_output(previous_value=self.vars.previous_value,\n+ value=self.vars.value,\n+ type=value_type)\n \n \n def main():\n", "issue": "xfconf not setting return values as stated in documentation\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and devel branch are affected too -->\r\n<!--- Complete *all* sections as described, this form is processed automatically -->\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly below -->\r\nThe xfconf documentations that it will return several values related to the operation (channel, previous_name, property, value, value_type). None of these are getting set ever.\r\n\r\nThis looks like it might have got missed during the recent refactor in #1322 by @russoz. 
There is a method named `update_xfconf_output()` in the `XFConfProperty` class that looks like it would be the right thing to use for this, it just isn't called anywhere.\r\n\r\nI think I see what needs to be fixed for this and will take a stab at a PR.\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->\r\nplugins/modules/system/xfconf.py\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes -->\r\n```paste below\r\nansible 2.10.3\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\nNot relevant\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->\r\nNot relevant\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\nAttempting to get or set any value doesn't set the correct return values. In the case below, you would expect the return to contain a value named `value` with the current value of `/panels/panel-0/plugin-ids`, but it is missing.\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: Get current panel layout\r\n xfconf:\r\n channel: xfce4-panel\r\n property: /panels/panel-0/plugin-ids\r\n state: get\r\n register: panel_order\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- Describe what you expected to happen when running the steps above -->\r\nExpect to see the values set per the documentation in the return.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->\r\nNone of the expected values are there in the return. Note, the values do appear in the stdout from running the command, but this is a side effect and not following the docs.\r\n\r\n<!--- Paste verbatim command output between quotes -->\r\n```paste below\r\nok: [localhost] => {\r\n \"changed\": false,\r\n \"force_lang\": \"C\",\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"channel\": \"xfce4-panel\",\r\n \"force_array\": false,\r\n \"property\": \"/panels/panel-0/plugin-ids\",\r\n \"state\": \"get\",\r\n \"value\": null,\r\n \"value_type\": null\r\n }\r\n },\r\n \"rc\": 0,\r\n \"stderr\": \"\",\r\n \"stderr_lines\": [],\r\n \"stdout\": \"Value is an array with 13 items:\\n\\n1\\n2\\n3\\n4\\n15\\n5\\n6\\n7\\n8\\n9\\n10\\n11\\n12\\n\",\r\n \"stdout_lines\": [\r\n \"Value is an array with 13 items:\",\r\n \"\",\r\n \"1\",\r\n \"2\",\r\n \"3\",\r\n \"4\",\r\n \"15\",\r\n \"5\",\r\n \"6\",\r\n \"7\",\r\n \"8\",\r\n \"9\",\r\n \"10\",\r\n \"11\",\r\n \"12\"\r\n ]\r\n}\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n# (c) 2017, Joseph Benden <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nDOCUMENTATION = '''\nmodule: xfconf\nauthor:\n - \"Joseph Benden (@jbenden)\"\n - \"Alexei Znamensky (@russoz)\"\nshort_description: Edit XFCE4 Configurations\ndescription:\n - This module allows for the manipulation of Xfce 4 Configuration via\n xfconf-query. 
Please see the xfconf-query(1) man pages for more details.\noptions:\n channel:\n description:\n - A Xfconf preference channel is a top-level tree key, inside of the\n Xfconf repository that corresponds to the location for which all\n application properties/keys are stored. See man xfconf-query(1)\n required: yes\n type: str\n property:\n description:\n - A Xfce preference key is an element in the Xfconf repository\n that corresponds to an application preference. See man xfconf-query(1)\n required: yes\n type: str\n value:\n description:\n - Preference properties typically have simple values such as strings,\n integers, or lists of strings and integers. This is ignored if the state\n is \"get\". For array mode, use a list of values. See man xfconf-query(1)\n type: list\n elements: raw\n value_type:\n description:\n - The type of value being set. This is ignored if the state is \"get\".\n For array mode, use a list of types.\n type: list\n elements: str\n choices: [ int, uint, bool, float, double, string ]\n state:\n type: str\n description:\n - The action to take upon the property/value.\n choices: [ get, present, absent ]\n default: \"present\"\n force_array:\n description:\n - Force array even if only one element\n type: bool\n default: 'no'\n aliases: ['array']\n version_added: 1.0.0\n'''\n\nEXAMPLES = \"\"\"\n- name: Change the DPI to \"192\"\n xfconf:\n channel: \"xsettings\"\n property: \"/Xft/DPI\"\n value_type: \"int\"\n value: \"192\"\n\n- name: Set workspace names (4)\n xfconf:\n channel: xfwm4\n property: /general/workspace_names\n value_type: string\n value: ['Main', 'Work1', 'Work2', 'Tmp']\n\n- name: Set workspace names (1)\n xfconf:\n channel: xfwm4\n property: /general/workspace_names\n value_type: string\n value: ['Main']\n force_array: yes\n\"\"\"\n\nRETURN = '''\n channel:\n description: The channel specified in the module parameters\n returned: success\n type: str\n sample: \"xsettings\"\n property:\n description: The property specified in the module parameters\n returned: success\n type: str\n sample: \"/Xft/DPI\"\n value_type:\n description: The type of the value that was changed\n returned: success\n type: str\n sample: \"int\"\n value:\n description: The value of the preference key after executing the module\n returned: success\n type: str\n sample: \"192\"\n previous_value:\n description: The value of the preference key before executing the module (None for \"get\" state).\n returned: success\n type: str\n sample: \"96\"\n'''\n\nfrom ansible_collections.community.general.plugins.module_utils.module_helper import (\n ModuleHelper, CmdMixin, StateMixin, ArgFormat\n)\n\n\ndef fix_bool(value):\n vl = value.lower()\n return vl if vl in (\"true\", \"false\") else value\n\n\[email protected]_deco(1)\ndef values_fmt(values, value_types):\n result = []\n for value, value_type in zip(values, value_types):\n if value_type == 'bool':\n value = fix_bool(value)\n result.append('--type')\n result.append('{0}'.format(value_type))\n result.append('--set')\n result.append('{0}'.format(value))\n return result\n\n\nclass XFConfException(Exception):\n pass\n\n\nclass XFConfProperty(CmdMixin, StateMixin, ModuleHelper):\n module = dict(\n argument_spec=dict(\n state=dict(default=\"present\",\n choices=(\"present\", \"get\", \"absent\"),\n type='str'),\n channel=dict(required=True, type='str'),\n property=dict(required=True, type='str'),\n value_type=dict(required=False, type='list',\n elements='str', choices=('int', 'uint', 'bool', 'float', 'double', 'string')),\n 
value=dict(required=False, type='list', elements='raw'),\n force_array=dict(default=False, type='bool', aliases=['array']),\n ),\n required_if=[('state', 'present', ['value', 'value_type'])],\n required_together=[('value', 'value_type')],\n supports_check_mode=True,\n )\n\n facts_name = \"xfconf\"\n default_state = 'present'\n command = 'xfconf-query'\n command_args_formats = dict(\n channel=dict(fmt=('--channel', '{0}'),),\n property=dict(fmt=('--property', '{0}'),),\n is_array=dict(fmt=\"--force-array\", style=ArgFormat.BOOLEAN),\n reset=dict(fmt=\"--reset\", style=ArgFormat.BOOLEAN),\n create=dict(fmt=\"--create\", style=ArgFormat.BOOLEAN),\n values_and_types=dict(fmt=values_fmt)\n )\n\n def update_xfconf_output(self, **kwargs):\n self.update_output(**kwargs)\n self.update_facts(**kwargs)\n\n def __init_module__(self):\n self.does_not = 'Property \"{0}\" does not exist on channel \"{1}\".'.format(self.module.params['property'],\n self.module.params['channel'])\n self.vars.previous_value = self._get()\n\n def process_command_output(self, rc, out, err):\n if err.rstrip() == self.does_not:\n return None\n if rc or len(err):\n raise XFConfException('xfconf-query failed with error (rc={0}): {1}'.format(rc, err))\n\n result = out.rstrip()\n if \"Value is an array with\" in result:\n result = result.split(\"\\n\")\n result.pop(0)\n result.pop(0)\n\n return result\n\n @property\n def changed(self):\n if self.vars.previous_value is None:\n return self.vars.value is not None\n elif self.vars.value is None:\n return self.vars.previous_value is not None\n else:\n return set(self.vars.previous_value) != set(self.vars.value)\n\n def _get(self):\n return self.run_command(params=('channel', 'property'))\n\n def state_get(self):\n self.vars.value = self.vars.previous_value\n\n def state_absent(self):\n self.vars.value = None\n self.run_command(params=('channel', 'property', 'reset'), extra_params={\"reset\": True})\n\n def state_present(self):\n # stringify all values - in the CLI they will all be happy strings anyway\n # and by doing this here the rest of the code can be agnostic to it\n self.vars.value = [str(v) for v in self.module.params['value']]\n value_type = self.module.params['value_type']\n\n values_len = len(self.vars.value)\n types_len = len(value_type)\n\n if types_len == 1:\n # use one single type for the entire list\n value_type = value_type * values_len\n elif types_len != values_len:\n # or complain if lists' lengths are different\n raise XFConfException('Number of elements in \"value\" and \"value_type\" must be the same')\n\n # fix boolean values\n self.vars.value = [fix_bool(v[0]) if v[1] == 'bool' else v[0] for v in zip(self.vars.value, value_type)]\n\n # calculates if it is an array\n self.vars.is_array = \\\n bool(self.module.params['force_array']) or \\\n isinstance(self.vars.previous_value, list) or \\\n values_len > 1\n\n params = ['channel', 'property', 'create']\n if self.vars.is_array:\n params.append('is_array')\n params.append('values_and_types')\n\n extra_params = dict(values_and_types=(self.vars.value, value_type))\n extra_params['create'] = True\n extra_params['is_array'] = self.vars.is_array\n\n if not self.module.check_mode:\n self.run_command(params=params, extra_params=extra_params)\n\n if not self.vars.is_array:\n self.vars.value = self.vars.value[0]\n\n\ndef main():\n xfconf = XFConfProperty()\n xfconf.run()\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/system/xfconf.py"}]} | 4,034 | 738 |
gh_patches_debug_29257 | rasdani/github-patches | git_diff | yt-project__yt-3831 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Convert to using tomli
The library currently depends on the dead "toml" library, instead of the "tomli" library (which was just accepted into the stdlib as "tomllib" for Python 3.11). It should be a really easy swap; it's almost the same API as toml, it just expects binary files instead of unicode ones if using `toml{i,lib}.load`. `loads` is the same. Writing is a separate library.
Working on pyodide at https://github.com/pyodide/pyodide/pull/2234 and don't want to have to port a dead library.
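
As a rough sketch of the read-side difference (writing would need a separate package such as `tomli-w`):

```python
# old: the "toml" package accepts a path or a text-mode file
import toml
config = toml.load("yt.toml")

# new: tomli / stdlib tomllib (3.11+) expects a binary file object
try:
    import tomllib  # Python 3.11+
except ModuleNotFoundError:
    import tomli as tomllib

with open("yt.toml", "rb") as fh:
    config = tomllib.load(fh)

# tomllib.loads() still takes a str, same as toml.loads()
```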
</issue>
<code>
[start of yt/config.py]
1 import os
2 import warnings
3
4 import toml
5 from more_itertools import always_iterable
6
7 from yt.utilities.configuration_tree import ConfigLeaf, ConfigNode
8
9 ytcfg_defaults = {}
10
11 ytcfg_defaults["yt"] = dict(
12 serialize=False,
13 only_deserialize=False,
14 time_functions=False,
15 colored_logs=False,
16 suppress_stream_logging=False,
17 stdout_stream_logging=False,
18 log_level=20,
19 inline=False,
20 num_threads=-1,
21 store_parameter_files=False,
22 parameter_file_store="parameter_files.csv",
23 maximum_stored_datasets=500,
24 skip_dataset_cache=True,
25 load_field_plugins=False,
26 plugin_filename="my_plugins.py",
27 parallel_traceback=False,
28 pasteboard_repo="",
29 reconstruct_index=True,
30 test_storage_dir="/does/not/exist",
31 test_data_dir="/does/not/exist",
32 enzo_db="",
33 notebook_password="",
34 answer_testing_tolerance=3,
35 answer_testing_bitwise=False,
36 gold_standard_filename="gold311",
37 local_standard_filename="local001",
38 answer_tests_url="http://answers.yt-project.org/{1}_{2}",
39 sketchfab_api_key="None",
40 imagebin_api_key="e1977d9195fe39e",
41 imagebin_upload_url="https://api.imgur.com/3/image",
42 imagebin_delete_url="https://api.imgur.com/3/image/{delete_hash}",
43 curldrop_upload_url="http://use.yt/upload",
44 thread_field_detection=False,
45 ignore_invalid_unit_operation_errors=False,
46 chunk_size=1000,
47 xray_data_dir="/does/not/exist",
48 supp_data_dir="/does/not/exist",
49 default_colormap="cmyt.arbre",
50 ray_tracing_engine="embree",
51 internals=dict(
52 within_testing=False,
53 within_pytest=False,
54 parallel=False,
55 strict_requires=False,
56 global_parallel_rank=0,
57 global_parallel_size=1,
58 topcomm_parallel_rank=0,
59 topcomm_parallel_size=1,
60 command_line=False,
61 ),
62 )
63
64
65 def config_dir():
66 config_root = os.environ.get(
67 "XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config")
68 )
69 conf_dir = os.path.join(config_root, "yt")
70
71 if not os.path.exists(conf_dir):
72 try:
73 os.makedirs(conf_dir)
74 except OSError:
75 warnings.warn("unable to create yt config directory")
76 return conf_dir
77
78
79 # For backward compatibility, do not use these vars internally in yt
80 CONFIG_DIR = config_dir()
81
82
83 class YTConfig:
84 def __init__(self, defaults=None):
85 if defaults is None:
86 defaults = {}
87 self.config_root = ConfigNode(None)
88
89 def get(self, section, *keys, callback=None):
90 node_or_leaf = self.config_root.get(section, *keys)
91 if isinstance(node_or_leaf, ConfigLeaf):
92 if callback is not None:
93 return callback(node_or_leaf)
94 return node_or_leaf.value
95 return node_or_leaf
96
97 def get_most_specific(self, section, *keys, **kwargs):
98 use_fallback = "fallback" in kwargs
99 fallback = kwargs.pop("fallback", None)
100 try:
101 return self.config_root.get_deepest_leaf(section, *keys)
102 except KeyError as err:
103 if use_fallback:
104 return fallback
105 else:
106 raise err
107
108 def update(self, new_values, metadata=None):
109 if metadata is None:
110 metadata = {}
111 self.config_root.update(new_values, metadata)
112
113 def has_section(self, section):
114 try:
115 self.config_root.get_child(section)
116 return True
117 except KeyError:
118 return False
119
120 def add_section(self, section):
121 self.config_root.add_child(section)
122
123 def remove_section(self, section):
124 if self.has_section(section):
125 self.config_root.remove_child(section)
126 return True
127 else:
128 return False
129
130 def set(self, *args, metadata=None):
131 section, *keys, value = args
132 if metadata is None:
133 metadata = {"source": "runtime"}
134 self.config_root.upsert_from_list(
135 [section] + list(keys), value, extra_data=metadata
136 )
137
138 def remove(self, *args):
139 self.config_root.pop_leaf(args)
140
141 def read(self, file_names):
142 file_names_read = []
143 for fname in always_iterable(file_names):
144 if not os.path.exists(fname):
145 continue
146 metadata = {"source": f"file: {fname}"}
147 self.update(toml.load(fname), metadata=metadata)
148 file_names_read.append(fname)
149
150 return file_names_read
151
152 def write(self, file_handler):
153 value = self.config_root.as_dict()
154 config_as_str = toml.dumps(value)
155
156 try:
157 # Assuming file_handler has a write attribute
158 file_handler.write(config_as_str)
159 except AttributeError:
160 # Otherwise we expect a path to a file
161 with open(file_handler, mode="w") as fh:
162 fh.write(config_as_str)
163
164 @staticmethod
165 def get_global_config_file():
166 return os.path.join(config_dir(), "yt.toml")
167
168 @staticmethod
169 def get_local_config_file():
170 return os.path.join(os.path.abspath(os.curdir), "yt.toml")
171
172 def __setitem__(self, args, value):
173 section, *keys = always_iterable(args)
174 self.set(section, *keys, value, metadata=None)
175
176 def __getitem__(self, key):
177 section, *keys = always_iterable(key)
178 return self.get(section, *keys)
179
180 def __contains__(self, item):
181 return item in self.config_root
182
183 # Add support for IPython rich display
184 # see https://ipython.readthedocs.io/en/stable/config/integrating.html
185 def _repr_json_(self):
186 return self.config_root._repr_json_()
187
188
189 _global_config_file = YTConfig.get_global_config_file()
190 _local_config_file = YTConfig.get_local_config_file()
191
192 if not os.path.exists(_global_config_file):
193 cfg = {"yt": {}} # type: ignore
194 try:
195 with open(_global_config_file, mode="w") as fd:
196 toml.dump(cfg, fd)
197 except OSError:
198 warnings.warn("unable to write new config file")
199
200
201 # Load the config
202 ytcfg = YTConfig()
203 ytcfg.update(ytcfg_defaults, metadata={"source": "defaults"})
204
205 # Try loading the local config first, otherwise fall back to global config
206 if os.path.exists(_local_config_file):
207 ytcfg.read(_local_config_file)
208 elif os.path.exists(_global_config_file):
209 ytcfg.read(_global_config_file)
210
[end of yt/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yt/config.py b/yt/config.py
--- a/yt/config.py
+++ b/yt/config.py
@@ -1,7 +1,9 @@
import os
import warnings
-import toml
+# TODO: import tomllib from the standard library instead in Python >= 3.11
+import tomli as tomllib
+import tomli_w
from more_itertools import always_iterable
from yt.utilities.configuration_tree import ConfigLeaf, ConfigNode
@@ -144,14 +146,16 @@
if not os.path.exists(fname):
continue
metadata = {"source": f"file: {fname}"}
- self.update(toml.load(fname), metadata=metadata)
+ with open(fname, "rb") as fh:
+ data = tomllib.load(fh)
+ self.update(data, metadata=metadata)
file_names_read.append(fname)
return file_names_read
def write(self, file_handler):
value = self.config_root.as_dict()
- config_as_str = toml.dumps(value)
+ config_as_str = tomli_w.dumps(value)
try:
# Assuming file_handler has a write attribute
@@ -192,8 +196,8 @@
if not os.path.exists(_global_config_file):
cfg = {"yt": {}} # type: ignore
try:
- with open(_global_config_file, mode="w") as fd:
- toml.dump(cfg, fd)
+ with open(_global_config_file, mode="wb") as fd:
+ tomli_w.dump(cfg, fd)
except OSError:
warnings.warn("unable to write new config file")
| {"golden_diff": "diff --git a/yt/config.py b/yt/config.py\n--- a/yt/config.py\n+++ b/yt/config.py\n@@ -1,7 +1,9 @@\n import os\n import warnings\n \n-import toml\n+# TODO: import tomllib from the standard library instead in Python >= 3.11\n+import tomli as tomllib\n+import tomli_w\n from more_itertools import always_iterable\n \n from yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n@@ -144,14 +146,16 @@\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n- self.update(toml.load(fname), metadata=metadata)\n+ with open(fname, \"rb\") as fh:\n+ data = tomllib.load(fh)\n+ self.update(data, metadata=metadata)\n file_names_read.append(fname)\n \n return file_names_read\n \n def write(self, file_handler):\n value = self.config_root.as_dict()\n- config_as_str = toml.dumps(value)\n+ config_as_str = tomli_w.dumps(value)\n \n try:\n # Assuming file_handler has a write attribute\n@@ -192,8 +196,8 @@\n if not os.path.exists(_global_config_file):\n cfg = {\"yt\": {}} # type: ignore\n try:\n- with open(_global_config_file, mode=\"w\") as fd:\n- toml.dump(cfg, fd)\n+ with open(_global_config_file, mode=\"wb\") as fd:\n+ tomli_w.dump(cfg, fd)\n except OSError:\n warnings.warn(\"unable to write new config file\")\n", "issue": "Convert to using tomli\nThe library currently depends on the dead \"toml\" library, instead of the \"tomli\" library (which was just accepted into the stdlib as \"tomllib\" for Python 3.11. It should be a really easy swap; it's almost the same API as toml, it just expects binary files instead of unicode ones if using `toml{i,lib}.load`. `loads` is the same. Writing is a separate library.\r\n\r\nWorking on pyodide at https://github.com/pyodide/pyodide/pull/2234 and don't want to have to port a dead library.\r\n\n", "before_files": [{"content": "import os\nimport warnings\n\nimport toml\nfrom more_itertools import always_iterable\n\nfrom yt.utilities.configuration_tree import ConfigLeaf, ConfigNode\n\nytcfg_defaults = {}\n\nytcfg_defaults[\"yt\"] = dict(\n serialize=False,\n only_deserialize=False,\n time_functions=False,\n colored_logs=False,\n suppress_stream_logging=False,\n stdout_stream_logging=False,\n log_level=20,\n inline=False,\n num_threads=-1,\n store_parameter_files=False,\n parameter_file_store=\"parameter_files.csv\",\n maximum_stored_datasets=500,\n skip_dataset_cache=True,\n load_field_plugins=False,\n plugin_filename=\"my_plugins.py\",\n parallel_traceback=False,\n pasteboard_repo=\"\",\n reconstruct_index=True,\n test_storage_dir=\"/does/not/exist\",\n test_data_dir=\"/does/not/exist\",\n enzo_db=\"\",\n notebook_password=\"\",\n answer_testing_tolerance=3,\n answer_testing_bitwise=False,\n gold_standard_filename=\"gold311\",\n local_standard_filename=\"local001\",\n answer_tests_url=\"http://answers.yt-project.org/{1}_{2}\",\n sketchfab_api_key=\"None\",\n imagebin_api_key=\"e1977d9195fe39e\",\n imagebin_upload_url=\"https://api.imgur.com/3/image\",\n imagebin_delete_url=\"https://api.imgur.com/3/image/{delete_hash}\",\n curldrop_upload_url=\"http://use.yt/upload\",\n thread_field_detection=False,\n ignore_invalid_unit_operation_errors=False,\n chunk_size=1000,\n xray_data_dir=\"/does/not/exist\",\n supp_data_dir=\"/does/not/exist\",\n default_colormap=\"cmyt.arbre\",\n ray_tracing_engine=\"embree\",\n internals=dict(\n within_testing=False,\n within_pytest=False,\n parallel=False,\n strict_requires=False,\n global_parallel_rank=0,\n global_parallel_size=1,\n topcomm_parallel_rank=0,\n topcomm_parallel_size=1,\n 
command_line=False,\n ),\n)\n\n\ndef config_dir():\n config_root = os.environ.get(\n \"XDG_CONFIG_HOME\", os.path.join(os.path.expanduser(\"~\"), \".config\")\n )\n conf_dir = os.path.join(config_root, \"yt\")\n\n if not os.path.exists(conf_dir):\n try:\n os.makedirs(conf_dir)\n except OSError:\n warnings.warn(\"unable to create yt config directory\")\n return conf_dir\n\n\n# For backward compatibility, do not use these vars internally in yt\nCONFIG_DIR = config_dir()\n\n\nclass YTConfig:\n def __init__(self, defaults=None):\n if defaults is None:\n defaults = {}\n self.config_root = ConfigNode(None)\n\n def get(self, section, *keys, callback=None):\n node_or_leaf = self.config_root.get(section, *keys)\n if isinstance(node_or_leaf, ConfigLeaf):\n if callback is not None:\n return callback(node_or_leaf)\n return node_or_leaf.value\n return node_or_leaf\n\n def get_most_specific(self, section, *keys, **kwargs):\n use_fallback = \"fallback\" in kwargs\n fallback = kwargs.pop(\"fallback\", None)\n try:\n return self.config_root.get_deepest_leaf(section, *keys)\n except KeyError as err:\n if use_fallback:\n return fallback\n else:\n raise err\n\n def update(self, new_values, metadata=None):\n if metadata is None:\n metadata = {}\n self.config_root.update(new_values, metadata)\n\n def has_section(self, section):\n try:\n self.config_root.get_child(section)\n return True\n except KeyError:\n return False\n\n def add_section(self, section):\n self.config_root.add_child(section)\n\n def remove_section(self, section):\n if self.has_section(section):\n self.config_root.remove_child(section)\n return True\n else:\n return False\n\n def set(self, *args, metadata=None):\n section, *keys, value = args\n if metadata is None:\n metadata = {\"source\": \"runtime\"}\n self.config_root.upsert_from_list(\n [section] + list(keys), value, extra_data=metadata\n )\n\n def remove(self, *args):\n self.config_root.pop_leaf(args)\n\n def read(self, file_names):\n file_names_read = []\n for fname in always_iterable(file_names):\n if not os.path.exists(fname):\n continue\n metadata = {\"source\": f\"file: {fname}\"}\n self.update(toml.load(fname), metadata=metadata)\n file_names_read.append(fname)\n\n return file_names_read\n\n def write(self, file_handler):\n value = self.config_root.as_dict()\n config_as_str = toml.dumps(value)\n\n try:\n # Assuming file_handler has a write attribute\n file_handler.write(config_as_str)\n except AttributeError:\n # Otherwise we expect a path to a file\n with open(file_handler, mode=\"w\") as fh:\n fh.write(config_as_str)\n\n @staticmethod\n def get_global_config_file():\n return os.path.join(config_dir(), \"yt.toml\")\n\n @staticmethod\n def get_local_config_file():\n return os.path.join(os.path.abspath(os.curdir), \"yt.toml\")\n\n def __setitem__(self, args, value):\n section, *keys = always_iterable(args)\n self.set(section, *keys, value, metadata=None)\n\n def __getitem__(self, key):\n section, *keys = always_iterable(key)\n return self.get(section, *keys)\n\n def __contains__(self, item):\n return item in self.config_root\n\n # Add support for IPython rich display\n # see https://ipython.readthedocs.io/en/stable/config/integrating.html\n def _repr_json_(self):\n return self.config_root._repr_json_()\n\n\n_global_config_file = YTConfig.get_global_config_file()\n_local_config_file = YTConfig.get_local_config_file()\n\nif not os.path.exists(_global_config_file):\n cfg = {\"yt\": {}} # type: ignore\n try:\n with open(_global_config_file, mode=\"w\") as fd:\n toml.dump(cfg, fd)\n except 
OSError:\n warnings.warn(\"unable to write new config file\")\n\n\n# Load the config\nytcfg = YTConfig()\nytcfg.update(ytcfg_defaults, metadata={\"source\": \"defaults\"})\n\n# Try loading the local config first, otherwise fall back to global config\nif os.path.exists(_local_config_file):\n ytcfg.read(_local_config_file)\nelif os.path.exists(_global_config_file):\n ytcfg.read(_global_config_file)\n", "path": "yt/config.py"}]} | 2,644 | 364 |
gh_patches_debug_28163 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1614 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow to hide/show actions directly from the Actions control panel list
As @esteele mentioned in #1342
</issue>
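Editor's note (not part of the original issue): the request is to toggle an action's visibility straight from the listing, without opening its edit form. In CMFCore each `Action` object already carries a boolean `visible` property, so a minimal sketch of the behaviour being asked for, with `toggle_action` and its arguments as illustrative names rather than anything from the codebase, could look like:

```python
# Hedged sketch: assumes `portal_actions` is the CMF actions tool and that the
# listing template submits the category id, the action id and the desired state.
def toggle_action(portal_actions, category_id, action_id, show):
    category = portal_actions[category_id]
    category[action_id].visible = bool(show)  # Action.visible drives whether it is rendered
```

The control-panel view would then redirect back to its own listing so the refreshed page reflects the new state.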
<code>
[start of Products/CMFPlone/controlpanel/browser/actions.py]
1 from plone.autoform.form import AutoExtensibleForm
2 from Products.CMFCore.ActionInformation import Action
3 from Products.CMFCore.interfaces import IAction, IActionCategory
4 from Products.CMFCore.utils import getToolByName
5 from Products.CMFPlone import PloneMessageFactory as _
6 from Products.CMFPlone.interfaces import IActionSchema, INewActionSchema
7 from Products.Five import BrowserView
8 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
9 from z3c.form import form
10 from zope.component import adapts
11 from zope.event import notify
12 from zope.interface import implements
13 from zope.lifecycleevent import ObjectCreatedEvent
14
15
16 class ActionListControlPanel(BrowserView):
17 """Control panel for the portal actions."""
18
19 template = ViewPageTemplateFile("actions.pt")
20
21 def __init__(self, context, request):
22 self.context = context
23 self.request = request
24 self.portal_actions = getToolByName(self.context, 'portal_actions')
25
26 def display(self):
27 actions = []
28 for category in self.portal_actions.objectValues():
29 if category.id == 'controlpanel':
30 continue
31 if not IActionCategory.providedBy(category):
32 continue
33 cat_infos = {
34 'id': category.id,
35 'title': category.title or category.id,
36 }
37 action_list = []
38 for action in category.objectValues():
39 if IAction.providedBy(action):
40 action_list.append({
41 'id': action.id,
42 'title': action.title,
43 'url': action.absolute_url(),
44 })
45 cat_infos['actions'] = action_list
46 actions.append(cat_infos)
47
48 self.actions = actions
49 return self.template()
50
51 def __call__(self):
52 if self.request.get('deleteaction'):
53 action_id = self.request['deleteaction']
54 category = self.portal_actions[self.request['category']]
55 category.manage_delObjects([action_id])
56 self.request.RESPONSE.redirect('@@actions-controlpanel')
57 return self.display()
58
59
60 class ActionControlPanelAdapter(object):
61 """Adapter for action form."""
62
63 adapts(IAction)
64 implements(IActionSchema)
65
66 def __init__(self, context):
67 self.context = context
68 self.current_category = self.context.getParentNode()
69
70 def get_category(self):
71 return self.current_category.id
72
73 def set_category(self, value):
74 portal_actions = getToolByName(self.context, 'portal_actions')
75 new_category = portal_actions.get(value)
76 cookie = self.current_category.manage_cutObjects(ids=[self.context.id])
77 new_category.manage_pasteObjects(cookie)
78
79 category = property(get_category, set_category)
80
81 def get_title(self):
82 return self.context.title
83
84 def set_title(self, value):
85 self.context._setPropValue('title', value)
86
87 title = property(get_title, set_title)
88
89 def get_description(self):
90 return self.context.description
91
92 def set_description(self, value):
93 self.context._setPropValue('description', value)
94
95 description = property(get_description, set_description)
96
97 def get_i18n_domain(self):
98 return self.context.i18n_domain
99
100 def set_i18n_domain(self, value):
101 self.context._setPropValue('i18n_domain', value)
102
103 i18n_domain = property(get_i18n_domain, set_i18n_domain)
104
105 def get_url_expr(self):
106 return self.context.url_expr
107
108 def set_url_expr(self, value):
109 self.context._setPropValue('url_expr', value)
110
111 url_expr = property(get_url_expr, set_url_expr)
112
113 def get_available_expr(self):
114 return self.context.available_expr
115
116 def set_available_expr(self, value):
117 self.context._setPropValue('available_expr', value)
118
119 available_expr = property(get_available_expr, set_available_expr)
120
121 def get_permissions(self):
122 return self.context.permissions
123
124 def set_permissions(self, value):
125 self.context._setPropValue('permissions', value)
126
127 permissions = property(get_permissions, set_permissions)
128
129 def get_visible(self):
130 return self.context.visible
131
132 def set_visible(self, value):
133 self.context._setPropValue('visible', value)
134
135 visible = property(get_visible, set_visible)
136
137 def get_position(self):
138 position = self.current_category.objectIds().index(self.context.id)
139 return position + 1
140
141 def set_position(self, value):
142 current_position = self.current_category.objectIds().index(
143 self.context.id)
144 all_actions = list(self.current_category._objects)
145 current_action = all_actions.pop(current_position)
146 new_position = value - 1
147 all_actions = all_actions[0:new_position] + [current_action] + \
148 all_actions[new_position:]
149 self.current_category._objects = tuple(all_actions)
150
151 position = property(get_position, set_position)
152
153
154 class ActionControlPanel(AutoExtensibleForm, form.EditForm):
155 """A form to edit a portal action."""
156
157 schema = IActionSchema
158 ignoreContext = False
159 label = _(u'Action Settings')
160
161
162 class NewActionControlPanel(AutoExtensibleForm, form.AddForm):
163 """A form to add a new portal action."""
164
165 schema = INewActionSchema
166 ignoreContext = True
167 label = _(u'New action')
168
169 def createAndAdd(self, data):
170 portal_actions = getToolByName(self.context, 'portal_actions')
171 category = portal_actions.get(data['category'])
172 action_id = data['id']
173 action = Action(
174 action_id,
175 title=action_id,
176 i18n_domain='plone',
177 permissions=['View'],
178 )
179 category[action_id] = action
180 notify(ObjectCreatedEvent(action))
181
[end of Products/CMFPlone/controlpanel/browser/actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/controlpanel/browser/actions.py b/Products/CMFPlone/controlpanel/browser/actions.py
--- a/Products/CMFPlone/controlpanel/browser/actions.py
+++ b/Products/CMFPlone/controlpanel/browser/actions.py
@@ -41,6 +41,7 @@
'id': action.id,
'title': action.title,
'url': action.absolute_url(),
+ 'visible': action.visible,
})
cat_infos['actions'] = action_list
actions.append(cat_infos)
@@ -49,11 +50,21 @@
return self.template()
def __call__(self):
- if self.request.get('deleteaction'):
- action_id = self.request['deleteaction']
+ if self.request.get('delete'):
+ action_id = self.request['actionid']
category = self.portal_actions[self.request['category']]
category.manage_delObjects([action_id])
self.request.RESPONSE.redirect('@@actions-controlpanel')
+ if self.request.get('hide'):
+ action_id = self.request['actionid']
+ category = self.portal_actions[self.request['category']]
+ category[action_id].visible = False
+ self.request.RESPONSE.redirect('@@actions-controlpanel')
+ if self.request.get('show'):
+ action_id = self.request['actionid']
+ category = self.portal_actions[self.request['category']]
+ category[action_id].visible = True
+ self.request.RESPONSE.redirect('@@actions-controlpanel')
return self.display()
| {"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/actions.py b/Products/CMFPlone/controlpanel/browser/actions.py\n--- a/Products/CMFPlone/controlpanel/browser/actions.py\n+++ b/Products/CMFPlone/controlpanel/browser/actions.py\n@@ -41,6 +41,7 @@\n 'id': action.id,\n 'title': action.title,\n 'url': action.absolute_url(),\n+ 'visible': action.visible,\n })\n cat_infos['actions'] = action_list\n actions.append(cat_infos)\n@@ -49,11 +50,21 @@\n return self.template()\n \n def __call__(self):\n- if self.request.get('deleteaction'):\n- action_id = self.request['deleteaction']\n+ if self.request.get('delete'):\n+ action_id = self.request['actionid']\n category = self.portal_actions[self.request['category']]\n category.manage_delObjects([action_id])\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n+ if self.request.get('hide'):\n+ action_id = self.request['actionid']\n+ category = self.portal_actions[self.request['category']]\n+ category[action_id].visible = False\n+ self.request.RESPONSE.redirect('@@actions-controlpanel')\n+ if self.request.get('show'):\n+ action_id = self.request['actionid']\n+ category = self.portal_actions[self.request['category']]\n+ category[action_id].visible = True\n+ self.request.RESPONSE.redirect('@@actions-controlpanel')\n return self.display()\n", "issue": "Allow to hide/show actions directly from the Actions control panel list\nAs @esteele mentionned in #1342\n\n", "before_files": [{"content": "from plone.autoform.form import AutoExtensibleForm\nfrom Products.CMFCore.ActionInformation import Action\nfrom Products.CMFCore.interfaces import IAction, IActionCategory\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.interfaces import IActionSchema, INewActionSchema\nfrom Products.Five import BrowserView\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom z3c.form import form\nfrom zope.component import adapts\nfrom zope.event import notify\nfrom zope.interface import implements\nfrom zope.lifecycleevent import ObjectCreatedEvent\n\n\nclass ActionListControlPanel(BrowserView):\n \"\"\"Control panel for the portal actions.\"\"\"\n\n template = ViewPageTemplateFile(\"actions.pt\")\n\n def __init__(self, context, request):\n self.context = context\n self.request = request\n self.portal_actions = getToolByName(self.context, 'portal_actions')\n\n def display(self):\n actions = []\n for category in self.portal_actions.objectValues():\n if category.id == 'controlpanel':\n continue\n if not IActionCategory.providedBy(category):\n continue\n cat_infos = {\n 'id': category.id,\n 'title': category.title or category.id,\n }\n action_list = []\n for action in category.objectValues():\n if IAction.providedBy(action):\n action_list.append({\n 'id': action.id,\n 'title': action.title,\n 'url': action.absolute_url(),\n })\n cat_infos['actions'] = action_list\n actions.append(cat_infos)\n\n self.actions = actions\n return self.template()\n\n def __call__(self):\n if self.request.get('deleteaction'):\n action_id = self.request['deleteaction']\n category = self.portal_actions[self.request['category']]\n category.manage_delObjects([action_id])\n self.request.RESPONSE.redirect('@@actions-controlpanel')\n return self.display()\n\n\nclass ActionControlPanelAdapter(object):\n \"\"\"Adapter for action form.\"\"\"\n\n adapts(IAction)\n implements(IActionSchema)\n\n def __init__(self, context):\n self.context = context\n self.current_category = 
self.context.getParentNode()\n\n def get_category(self):\n return self.current_category.id\n\n def set_category(self, value):\n portal_actions = getToolByName(self.context, 'portal_actions')\n new_category = portal_actions.get(value)\n cookie = self.current_category.manage_cutObjects(ids=[self.context.id])\n new_category.manage_pasteObjects(cookie)\n\n category = property(get_category, set_category)\n\n def get_title(self):\n return self.context.title\n\n def set_title(self, value):\n self.context._setPropValue('title', value)\n\n title = property(get_title, set_title)\n\n def get_description(self):\n return self.context.description\n\n def set_description(self, value):\n self.context._setPropValue('description', value)\n\n description = property(get_description, set_description)\n\n def get_i18n_domain(self):\n return self.context.i18n_domain\n\n def set_i18n_domain(self, value):\n self.context._setPropValue('i18n_domain', value)\n\n i18n_domain = property(get_i18n_domain, set_i18n_domain)\n\n def get_url_expr(self):\n return self.context.url_expr\n\n def set_url_expr(self, value):\n self.context._setPropValue('url_expr', value)\n\n url_expr = property(get_url_expr, set_url_expr)\n\n def get_available_expr(self):\n return self.context.available_expr\n\n def set_available_expr(self, value):\n self.context._setPropValue('available_expr', value)\n\n available_expr = property(get_available_expr, set_available_expr)\n\n def get_permissions(self):\n return self.context.permissions\n\n def set_permissions(self, value):\n self.context._setPropValue('permissions', value)\n\n permissions = property(get_permissions, set_permissions)\n\n def get_visible(self):\n return self.context.visible\n\n def set_visible(self, value):\n self.context._setPropValue('visible', value)\n\n visible = property(get_visible, set_visible)\n\n def get_position(self):\n position = self.current_category.objectIds().index(self.context.id)\n return position + 1\n\n def set_position(self, value):\n current_position = self.current_category.objectIds().index(\n self.context.id)\n all_actions = list(self.current_category._objects)\n current_action = all_actions.pop(current_position)\n new_position = value - 1\n all_actions = all_actions[0:new_position] + [current_action] + \\\n all_actions[new_position:]\n self.current_category._objects = tuple(all_actions)\n\n position = property(get_position, set_position)\n\n\nclass ActionControlPanel(AutoExtensibleForm, form.EditForm):\n \"\"\"A form to edit a portal action.\"\"\"\n\n schema = IActionSchema\n ignoreContext = False\n label = _(u'Action Settings')\n\n\nclass NewActionControlPanel(AutoExtensibleForm, form.AddForm):\n \"\"\"A form to add a new portal action.\"\"\"\n\n schema = INewActionSchema\n ignoreContext = True\n label = _(u'New action')\n\n def createAndAdd(self, data):\n portal_actions = getToolByName(self.context, 'portal_actions')\n category = portal_actions.get(data['category'])\n action_id = data['id']\n action = Action(\n action_id,\n title=action_id,\n i18n_domain='plone',\n permissions=['View'],\n )\n category[action_id] = action\n notify(ObjectCreatedEvent(action))\n", "path": "Products/CMFPlone/controlpanel/browser/actions.py"}]} | 2,247 | 338 |
gh_patches_debug_20202 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-73 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support `GCLOUD_PROJECT` environment variable
</issue>
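Editor's note (not part of the original issue): the request is that the older `GCLOUD_PROJECT` variable keep working alongside `GOOGLE_CLOUD_PROJECT`. A quick, hedged illustration of the intended lookup order, current name first and the legacy name only as a fallback:

```python
import os

# Assumed shell state for the example: only the legacy variable is set.
os.environ.pop("GOOGLE_CLOUD_PROJECT", None)
os.environ["GCLOUD_PROJECT"] = "my-legacy-project"

project = os.environ.get("GOOGLE_CLOUD_PROJECT",
                         os.environ.get("GCLOUD_PROJECT"))
print(project)  # -> "my-legacy-project"
```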
<code>
[start of google/auth/environment_vars.py]
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Environment variables used by :mod:`google.auth`."""
16
17
18 PROJECT = 'GOOGLE_CLOUD_PROJECT'
19 """Environment variable defining default project.
20
21 This used by :func:`google.auth.default` to explicitly set a project ID. This
22 environment variable is also used by the Google Cloud Python Library.
23 """
24
25 CREDENTIALS = 'GOOGLE_APPLICATION_CREDENTIALS'
26 """Environment variable defining the location of Google application default
27 credentials."""
28
29 # The environment variable name which can replace ~/.config if set.
30 CLOUD_SDK_CONFIG_DIR = 'CLOUDSDK_CONFIG'
31 """Environment variable defines the location of Google Cloud SDK's config
32 files."""
33
[end of google/auth/environment_vars.py]
[start of google/auth/_default.py]
1 # Copyright 2015 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Application default credentials.
16
17 Implements application default credentials and project ID detection.
18 """
19
20 import io
21 import json
22 import logging
23 import os
24
25 from google.auth import _cloud_sdk
26 from google.auth import app_engine
27 from google.auth import compute_engine
28 from google.auth import environment_vars
29 from google.auth import exceptions
30 from google.auth.compute_engine import _metadata
31 import google.auth.transport._http_client
32 from google.oauth2 import service_account
33 import google.oauth2.credentials
34
35 _LOGGER = logging.getLogger(__name__)
36
37 # Valid types accepted for file-based credentials.
38 _AUTHORIZED_USER_TYPE = 'authorized_user'
39 _SERVICE_ACCOUNT_TYPE = 'service_account'
40 _VALID_TYPES = (_AUTHORIZED_USER_TYPE, _SERVICE_ACCOUNT_TYPE)
41
42 # Help message when no credentials can be found.
43 _HELP_MESSAGE = """
44 Could not automatically determine credentials. Please set {env} or
45 explicitly create credential and re-run the application. For more
46 information, please see
47 https://developers.google.com/accounts/docs/application-default-credentials.
48 """.format(env=environment_vars.CREDENTIALS).strip()
49
50
51 def _load_credentials_from_file(filename):
52 """Loads credentials from a file.
53
54 The credentials file must be a service account key or stored authorized
55 user credentials.
56
57 Args:
58 filename (str): The full path to the credentials file.
59
60 Returns:
61 Tuple[google.auth.credentials.Credentials, Optional[str]]: Loaded
62 credentials and the project ID. Authorized user credentials do not
63 have the project ID information.
64
65 Raises:
66 google.auth.exceptions.DefaultCredentialsError: if the file is in the
67 wrong format.
68 """
69 with io.open(filename, 'r') as file_obj:
70 try:
71 info = json.load(file_obj)
72 except ValueError as exc:
73 raise exceptions.DefaultCredentialsError(
74 'File {} is not a valid json file.'.format(filename), exc)
75
76 # The type key should indicate that the file is either a service account
77 # credentials file or an authorized user credentials file.
78 credential_type = info.get('type')
79
80 if credential_type == _AUTHORIZED_USER_TYPE:
81 try:
82 credentials = _cloud_sdk.load_authorized_user_credentials(info)
83 except ValueError as exc:
84 raise exceptions.DefaultCredentialsError(
85 'Failed to load authorized user credentials from {}'.format(
86 filename), exc)
87 # Authorized user credentials do not contain the project ID.
88 return credentials, None
89
90 elif credential_type == _SERVICE_ACCOUNT_TYPE:
91 try:
92 credentials = (
93 service_account.Credentials.from_service_account_info(info))
94 except ValueError as exc:
95 raise exceptions.DefaultCredentialsError(
96 'Failed to load service account credentials from {}'.format(
97 filename), exc)
98 return credentials, info.get('project_id')
99
100 else:
101 raise exceptions.DefaultCredentialsError(
102 'The file {file} does not have a valid type. '
103 'Type is {type}, expected one of {valid_types}.'.format(
104 file=filename, type=credential_type, valid_types=_VALID_TYPES))
105
106
107 def _get_gcloud_sdk_credentials():
108 """Gets the credentials and project ID from the Cloud SDK."""
109 # Check if application default credentials exist.
110 credentials_filename = (
111 _cloud_sdk.get_application_default_credentials_path())
112
113 if not os.path.isfile(credentials_filename):
114 return None, None
115
116 credentials, project_id = _load_credentials_from_file(
117 credentials_filename)
118
119 if not project_id:
120 project_id = _cloud_sdk.get_project_id()
121
122 if not project_id:
123 _LOGGER.warning(
124 'No project ID could be determined from the Cloud SDK '
125 'configuration. Consider running `gcloud config set project` or '
126 'setting the %s environment variable', environment_vars.PROJECT)
127
128 return credentials, project_id
129
130
131 def _get_explicit_environ_credentials():
132 """Gets credentials from the GOOGLE_APPLICATION_CREDENTIALS environment
133 variable."""
134 explicit_file = os.environ.get(environment_vars.CREDENTIALS)
135
136 if explicit_file is not None:
137 credentials, project_id = _load_credentials_from_file(
138 os.environ[environment_vars.CREDENTIALS])
139
140 if not project_id:
141 _LOGGER.warning(
142 'No project ID could be determined from the credentials at %s '
143 'Consider setting the %s environment variable',
144 environment_vars.CREDENTIALS, environment_vars.PROJECT)
145
146 return credentials, project_id
147
148 else:
149 return None, None
150
151
152 def _get_gae_credentials():
153 """Gets Google App Engine App Identity credentials and project ID."""
154 try:
155 credentials = app_engine.Credentials()
156 project_id = app_engine.get_project_id()
157 return credentials, project_id
158 except EnvironmentError:
159 return None, None
160
161
162 def _get_gce_credentials(request=None):
163 """Gets credentials and project ID from the GCE Metadata Service."""
164 # Ping requires a transport, but we want application default credentials
165 # to require no arguments. So, we'll use the _http_client transport which
166 # uses http.client. This is only acceptable because the metadata server
167 # doesn't do SSL and never requires proxies.
168
169 if request is None:
170 request = google.auth.transport._http_client.Request()
171
172 if _metadata.ping(request=request):
173 # Get the project ID.
174 try:
175 project_id = _metadata.get_project_id(request=request)
176 except exceptions.TransportError:
177 _LOGGER.warning(
178 'No project ID could be determined from the Compute Engine '
179 'metadata service. Consider setting the %s environment '
180 'variable.', environment_vars.PROJECT)
181 project_id = None
182
183 return compute_engine.Credentials(), project_id
184 else:
185 return None, None
186
187
188 def default(request=None):
189 """Gets the default credentials for the current environment.
190
191 `Application Default Credentials`_ provides an easy way to obtain
192 credentials to call Google APIs for server-to-server or local applications.
193 This function acquires credentials from the environment in the following
194 order:
195
196 1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set
197 to the path of a valid service account JSON private key file, then it is
198 loaded and returned. The project ID returned is the project ID defined
199 in the service account file if available (some older files do not
200 contain project ID information).
201 2. If the `Google Cloud SDK`_ is installed and has application default
202 credentials set they are loaded and returned.
203
204 To enable application default credentials with the Cloud SDK run::
205
206 gcloud auth application-default login
207
208 If the Cloud SDK has an active project, the project ID is returned. The
209 active project can be set using::
210
211 gcloud config set project
212
213 3. If the application is running in the `App Engine standard environment`_
214 then the credentials and project ID from the `App Identity Service`_
215 are used.
216 4. If the application is running in `Compute Engine`_ or the
217 `App Engine flexible environment`_ then the credentials and project ID
218 are obtained from the `Metadata Service`_.
219 5. If no credentials are found,
220 :class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.
221
222 .. _Application Default Credentials: https://developers.google.com\
223 /identity/protocols/application-default-credentials
224 .. _Google Cloud SDK: https://cloud.google.com/sdk
225 .. _App Engine standard environment: https://cloud.google.com/appengine
226 .. _App Identity Service: https://cloud.google.com/appengine/docs/python\
227 /appidentity/
228 .. _Compute Engine: https://cloud.google.com/compute
229 .. _App Engine flexible environment: https://cloud.google.com\
230 /appengine/flexible
231 .. _Metadata Service: https://cloud.google.com/compute/docs\
232 /storing-retrieving-metadata
233
234 Example::
235
236 import google.auth
237
238 credentials, project_id = google.auth.default()
239
240 Args:
241 request (google.auth.transport.Request): An object used to make
242 HTTP requests. This is used to detect whether the application
243 is running on Compute Engine. If not specified, then it will
244 use the standard library http client to make requests.
245
246 Returns:
247 Tuple[~google.auth.credentials.Credentials, Optional[str]]:
248 the current environment's credentials and project ID. Project ID
249 may be None, which indicates that the Project ID could not be
250 ascertained from the environment.
251
252 Raises:
253 ~google.auth.exceptions.DefaultCredentialsError:
254 If no credentials were found, or if the credentials found were
255 invalid.
256 """
257 explicit_project_id = os.environ.get(environment_vars.PROJECT)
258
259 checkers = (
260 _get_explicit_environ_credentials,
261 _get_gcloud_sdk_credentials,
262 _get_gae_credentials,
263 lambda: _get_gce_credentials(request))
264
265 for checker in checkers:
266 credentials, project_id = checker()
267 if credentials is not None:
268 return credentials, explicit_project_id or project_id
269
270 raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
271
[end of google/auth/_default.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/auth/_default.py b/google/auth/_default.py
--- a/google/auth/_default.py
+++ b/google/auth/_default.py
@@ -254,7 +254,9 @@
If no credentials were found, or if the credentials found were
invalid.
"""
- explicit_project_id = os.environ.get(environment_vars.PROJECT)
+ explicit_project_id = os.environ.get(
+ environment_vars.PROJECT,
+ os.environ.get(environment_vars.LEGACY_PROJECT))
checkers = (
_get_explicit_environ_credentials,
diff --git a/google/auth/environment_vars.py b/google/auth/environment_vars.py
--- a/google/auth/environment_vars.py
+++ b/google/auth/environment_vars.py
@@ -22,6 +22,13 @@
environment variable is also used by the Google Cloud Python Library.
"""
+LEGACY_PROJECT = 'GCLOUD_PROJECT'
+"""Previously used environment variable defining the default project.
+
+This environment variable is used instead of the current one in some
+situations (such as Google App Engine).
+"""
+
CREDENTIALS = 'GOOGLE_APPLICATION_CREDENTIALS'
"""Environment variable defining the location of Google application default
credentials."""
| {"golden_diff": "diff --git a/google/auth/_default.py b/google/auth/_default.py\n--- a/google/auth/_default.py\n+++ b/google/auth/_default.py\n@@ -254,7 +254,9 @@\n If no credentials were found, or if the credentials found were\n invalid.\n \"\"\"\n- explicit_project_id = os.environ.get(environment_vars.PROJECT)\n+ explicit_project_id = os.environ.get(\n+ environment_vars.PROJECT,\n+ os.environ.get(environment_vars.LEGACY_PROJECT))\n \n checkers = (\n _get_explicit_environ_credentials,\ndiff --git a/google/auth/environment_vars.py b/google/auth/environment_vars.py\n--- a/google/auth/environment_vars.py\n+++ b/google/auth/environment_vars.py\n@@ -22,6 +22,13 @@\n environment variable is also used by the Google Cloud Python Library.\n \"\"\"\n \n+LEGACY_PROJECT = 'GCLOUD_PROJECT'\n+\"\"\"Previously used environment variable defining the default project.\n+\n+This environment variable is used instead of the current one in some\n+situations (such as Google App Engine).\n+\"\"\"\n+\n CREDENTIALS = 'GOOGLE_APPLICATION_CREDENTIALS'\n \"\"\"Environment variable defining the location of Google application default\n credentials.\"\"\"\n", "issue": "Support `GCLOUD_PROJECT` environment variable\n\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Environment variables used by :mod:`google.auth`.\"\"\"\n\n\nPROJECT = 'GOOGLE_CLOUD_PROJECT'\n\"\"\"Environment variable defining default project.\n\nThis used by :func:`google.auth.default` to explicitly set a project ID. 
This\nenvironment variable is also used by the Google Cloud Python Library.\n\"\"\"\n\nCREDENTIALS = 'GOOGLE_APPLICATION_CREDENTIALS'\n\"\"\"Environment variable defining the location of Google application default\ncredentials.\"\"\"\n\n# The environment variable name which can replace ~/.config if set.\nCLOUD_SDK_CONFIG_DIR = 'CLOUDSDK_CONFIG'\n\"\"\"Environment variable defines the location of Google Cloud SDK's config\nfiles.\"\"\"\n", "path": "google/auth/environment_vars.py"}, {"content": "# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Application default credentials.\n\nImplements application default credentials and project ID detection.\n\"\"\"\n\nimport io\nimport json\nimport logging\nimport os\n\nfrom google.auth import _cloud_sdk\nfrom google.auth import app_engine\nfrom google.auth import compute_engine\nfrom google.auth import environment_vars\nfrom google.auth import exceptions\nfrom google.auth.compute_engine import _metadata\nimport google.auth.transport._http_client\nfrom google.oauth2 import service_account\nimport google.oauth2.credentials\n\n_LOGGER = logging.getLogger(__name__)\n\n# Valid types accepted for file-based credentials.\n_AUTHORIZED_USER_TYPE = 'authorized_user'\n_SERVICE_ACCOUNT_TYPE = 'service_account'\n_VALID_TYPES = (_AUTHORIZED_USER_TYPE, _SERVICE_ACCOUNT_TYPE)\n\n# Help message when no credentials can be found.\n_HELP_MESSAGE = \"\"\"\nCould not automatically determine credentials. Please set {env} or\nexplicitly create credential and re-run the application. For more\ninformation, please see\nhttps://developers.google.com/accounts/docs/application-default-credentials.\n\"\"\".format(env=environment_vars.CREDENTIALS).strip()\n\n\ndef _load_credentials_from_file(filename):\n \"\"\"Loads credentials from a file.\n\n The credentials file must be a service account key or stored authorized\n user credentials.\n\n Args:\n filename (str): The full path to the credentials file.\n\n Returns:\n Tuple[google.auth.credentials.Credentials, Optional[str]]: Loaded\n credentials and the project ID. 
Authorized user credentials do not\n have the project ID information.\n\n Raises:\n google.auth.exceptions.DefaultCredentialsError: if the file is in the\n wrong format.\n \"\"\"\n with io.open(filename, 'r') as file_obj:\n try:\n info = json.load(file_obj)\n except ValueError as exc:\n raise exceptions.DefaultCredentialsError(\n 'File {} is not a valid json file.'.format(filename), exc)\n\n # The type key should indicate that the file is either a service account\n # credentials file or an authorized user credentials file.\n credential_type = info.get('type')\n\n if credential_type == _AUTHORIZED_USER_TYPE:\n try:\n credentials = _cloud_sdk.load_authorized_user_credentials(info)\n except ValueError as exc:\n raise exceptions.DefaultCredentialsError(\n 'Failed to load authorized user credentials from {}'.format(\n filename), exc)\n # Authorized user credentials do not contain the project ID.\n return credentials, None\n\n elif credential_type == _SERVICE_ACCOUNT_TYPE:\n try:\n credentials = (\n service_account.Credentials.from_service_account_info(info))\n except ValueError as exc:\n raise exceptions.DefaultCredentialsError(\n 'Failed to load service account credentials from {}'.format(\n filename), exc)\n return credentials, info.get('project_id')\n\n else:\n raise exceptions.DefaultCredentialsError(\n 'The file {file} does not have a valid type. '\n 'Type is {type}, expected one of {valid_types}.'.format(\n file=filename, type=credential_type, valid_types=_VALID_TYPES))\n\n\ndef _get_gcloud_sdk_credentials():\n \"\"\"Gets the credentials and project ID from the Cloud SDK.\"\"\"\n # Check if application default credentials exist.\n credentials_filename = (\n _cloud_sdk.get_application_default_credentials_path())\n\n if not os.path.isfile(credentials_filename):\n return None, None\n\n credentials, project_id = _load_credentials_from_file(\n credentials_filename)\n\n if not project_id:\n project_id = _cloud_sdk.get_project_id()\n\n if not project_id:\n _LOGGER.warning(\n 'No project ID could be determined from the Cloud SDK '\n 'configuration. Consider running `gcloud config set project` or '\n 'setting the %s environment variable', environment_vars.PROJECT)\n\n return credentials, project_id\n\n\ndef _get_explicit_environ_credentials():\n \"\"\"Gets credentials from the GOOGLE_APPLICATION_CREDENTIALS environment\n variable.\"\"\"\n explicit_file = os.environ.get(environment_vars.CREDENTIALS)\n\n if explicit_file is not None:\n credentials, project_id = _load_credentials_from_file(\n os.environ[environment_vars.CREDENTIALS])\n\n if not project_id:\n _LOGGER.warning(\n 'No project ID could be determined from the credentials at %s '\n 'Consider setting the %s environment variable',\n environment_vars.CREDENTIALS, environment_vars.PROJECT)\n\n return credentials, project_id\n\n else:\n return None, None\n\n\ndef _get_gae_credentials():\n \"\"\"Gets Google App Engine App Identity credentials and project ID.\"\"\"\n try:\n credentials = app_engine.Credentials()\n project_id = app_engine.get_project_id()\n return credentials, project_id\n except EnvironmentError:\n return None, None\n\n\ndef _get_gce_credentials(request=None):\n \"\"\"Gets credentials and project ID from the GCE Metadata Service.\"\"\"\n # Ping requires a transport, but we want application default credentials\n # to require no arguments. So, we'll use the _http_client transport which\n # uses http.client. 
This is only acceptable because the metadata server\n # doesn't do SSL and never requires proxies.\n\n if request is None:\n request = google.auth.transport._http_client.Request()\n\n if _metadata.ping(request=request):\n # Get the project ID.\n try:\n project_id = _metadata.get_project_id(request=request)\n except exceptions.TransportError:\n _LOGGER.warning(\n 'No project ID could be determined from the Compute Engine '\n 'metadata service. Consider setting the %s environment '\n 'variable.', environment_vars.PROJECT)\n project_id = None\n\n return compute_engine.Credentials(), project_id\n else:\n return None, None\n\n\ndef default(request=None):\n \"\"\"Gets the default credentials for the current environment.\n\n `Application Default Credentials`_ provides an easy way to obtain\n credentials to call Google APIs for server-to-server or local applications.\n This function acquires credentials from the environment in the following\n order:\n\n 1. If the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` is set\n to the path of a valid service account JSON private key file, then it is\n loaded and returned. The project ID returned is the project ID defined\n in the service account file if available (some older files do not\n contain project ID information).\n 2. If the `Google Cloud SDK`_ is installed and has application default\n credentials set they are loaded and returned.\n\n To enable application default credentials with the Cloud SDK run::\n\n gcloud auth application-default login\n\n If the Cloud SDK has an active project, the project ID is returned. The\n active project can be set using::\n\n gcloud config set project\n\n 3. If the application is running in the `App Engine standard environment`_\n then the credentials and project ID from the `App Identity Service`_\n are used.\n 4. If the application is running in `Compute Engine`_ or the\n `App Engine flexible environment`_ then the credentials and project ID\n are obtained from the `Metadata Service`_.\n 5. If no credentials are found,\n :class:`~google.auth.exceptions.DefaultCredentialsError` will be raised.\n\n .. _Application Default Credentials: https://developers.google.com\\\n /identity/protocols/application-default-credentials\n .. _Google Cloud SDK: https://cloud.google.com/sdk\n .. _App Engine standard environment: https://cloud.google.com/appengine\n .. _App Identity Service: https://cloud.google.com/appengine/docs/python\\\n /appidentity/\n .. _Compute Engine: https://cloud.google.com/compute\n .. _App Engine flexible environment: https://cloud.google.com\\\n /appengine/flexible\n .. _Metadata Service: https://cloud.google.com/compute/docs\\\n /storing-retrieving-metadata\n\n Example::\n\n import google.auth\n\n credentials, project_id = google.auth.default()\n\n Args:\n request (google.auth.transport.Request): An object used to make\n HTTP requests. This is used to detect whether the application\n is running on Compute Engine. If not specified, then it will\n use the standard library http client to make requests.\n\n Returns:\n Tuple[~google.auth.credentials.Credentials, Optional[str]]:\n the current environment's credentials and project ID. 
Project ID\n may be None, which indicates that the Project ID could not be\n ascertained from the environment.\n\n Raises:\n ~google.auth.exceptions.DefaultCredentialsError:\n If no credentials were found, or if the credentials found were\n invalid.\n \"\"\"\n explicit_project_id = os.environ.get(environment_vars.PROJECT)\n\n checkers = (\n _get_explicit_environ_credentials,\n _get_gcloud_sdk_credentials,\n _get_gae_credentials,\n lambda: _get_gce_credentials(request))\n\n for checker in checkers:\n credentials, project_id = checker()\n if credentials is not None:\n return credentials, explicit_project_id or project_id\n\n raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)\n", "path": "google/auth/_default.py"}]} | 3,631 | 256 |
gh_patches_debug_7519 | rasdani/github-patches | git_diff | LMFDB__lmfdb-4279 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix python 2->3 problem in verify
</issue>
<code>
[start of lmfdb/characters/TinyConrey.py]
1 from sage.all import (gcd, Mod, Integer, Integers, Rational, pari, Pari,
2 DirichletGroup, CyclotomicField, euler_phi, lcm)
3 from sage.misc.cachefunc import cached_method
4 from sage.modular.dirichlet import DirichletCharacter
5
6 def symbol_numerator(cond, parity):
7 # Reference: Sect. 9.3, Montgomery, Hugh L; Vaughan, Robert C. (2007).
8 # Multiplicative number theory. I. Classical theory. Cambridge Studies in
9 # Advanced Mathematics 97
10 #
11 # Let F = Q(\sqrt(d)) with d a non zero squarefree integer then a real
12 # Dirichlet character \chi(n) can be represented as a Kronecker symbol
13 # (m / n) where { m = d if # d = 1 mod 4 else m = 4d if d = 2,3 (mod) 4 }
14 # and m is the discriminant of F. The conductor of \chi is |m|.
15 #
16 # symbol_numerator returns the appropriate Kronecker symbol depending on
17 # the conductor of \chi.
18 m = cond
19 if cond % 2 == 1:
20 if cond % 4 == 3:
21 m = -cond
22 elif cond % 8 == 4:
23 # Fixed cond % 16 == 4 and cond % 16 == 12 were switched in the
24 # previous version of the code.
25 #
26 # Let d be a non zero squarefree integer. If d = 2,3 (mod) 4 and if
27 # cond = 4d = 4 ( 4n + 2) or 4 (4n + 3) = 16 n + 8 or 16n + 12 then we
28 # set m = cond. On the other hand if d = 1 (mod) 4 and cond = 4d = 4
29 # (4n +1) = 16n + 4 then we set m = -cond.
30 if cond % 16 == 4:
31 m = -cond
32 elif cond % 16 == 8:
33 if parity == 1:
34 m = -cond
35 else:
36 return None
37 return m
38
39
40 def kronecker_symbol(m):
41 if m:
42 return r'\(\displaystyle\left(\frac{%s}{\bullet}\right)\)' % (m)
43 else:
44 return None
45
46 ###############################################################################
47 ## Conrey character with no call to Jonathan's code
48 ## in order to handle big moduli
49 ##
50
51 def get_sage_genvalues(modulus, order, genvalues, zeta_order):
52 """
53 Helper method for computing correct genvalues when constructing
54 the sage character
55 """
56 phi_mod = euler_phi(modulus)
57 exponent_factor = phi_mod / order
58 genvalues_exponent = [x * exponent_factor for x in genvalues]
59 return [x * zeta_order / phi_mod for x in genvalues_exponent]
60
61
62 class PariConreyGroup(object):
63
64 def __init__(self, modulus):
65 self.modulus = int(modulus)
66 self.G = Pari("znstar({},1)".format(modulus))
67
68 def gens(self):
69 return Integers(self.modulus).unit_gens()
70
71 def invariants(self):
72 return pari("znstar({},1).cyc".format(self.modulus))
73
74
75 class ConreyCharacter(object):
76 """
77 tiny implementation on Conrey index only
78 """
79
80 def __init__(self, modulus, number):
81 assert gcd(modulus, number)==1
82 self.modulus = Integer(modulus)
83 self.number = Integer(number)
84 self.G = Pari("znstar({},1)".format(modulus))
85 self.chi_pari = pari("znconreylog(%s,%d)"%(self.G,self.number))
86 self.chi_0 = None
87 self.indlabel = None
88
89 @property
90 def texname(self):
91 from lmfdb.characters.web_character import WebDirichlet
92 return WebDirichlet.char2tex(self.modulus, self.number)
93
94 @cached_method
95 def modfactor(self):
96 return self.modulus.factor()
97
98 @cached_method
99 def conductor(self):
100 B = pari("znconreyconductor(%s,%s,&chi0)"%(self.G, self.chi_pari))
101 if B.type() == 't_INT':
102 # means chi is primitive
103 self.chi_0 = self.chi_pari
104 self.indlabel = self.number
105 return int(B)
106 else:
107 self.chi_0 = pari("chi0")
108 G_0 = Pari("znstar({},1)".format(B))
109 self.indlabel = int(pari("znconreyexp(%s,%s)"%(G_0,self.chi_0)))
110 return int(B[0])
111
112 def is_primitive(self):
113 return self.conductor() == self.modulus
114
115 @cached_method
116 def parity(self):
117 number = self.number
118 par = 0
119 for p,e in self.modfactor():
120 if p == 2:
121 if number % 4 == 3:
122 par = 1 - par
123 else:
124 phi2 = (p-1)/Integer(2) * p **(e-1)
125 if Mod(number, p ** e)**phi2 != 1:
126 par = 1 - par
127 return par
128
129 def is_odd(self):
130 return self.parity() == 1
131
132 def is_even(self):
133 return self.parity() == 0
134
135 @cached_method
136 def multiplicative_order(self):
137 return Mod(self.number, self.modulus).multiplicative_order()
138
139 @property
140 def order(self):
141 return self.multiplicative_order()
142
143 @cached_method
144 def kronecker_symbol(self):
145 c = self.conductor()
146 p = self.parity()
147 return kronecker_symbol(symbol_numerator(c, p))
148
149 def conreyangle(self,x):
150 return Rational(pari("chareval(%s,znconreylog(%s,%d),%d)"%(self.G,self.G,self.number,x)))
151
152 def gauss_sum_numerical(self, a):
153 return pari("znchargauss(%s,%s,a=%d)"%(self.G,self.chi_pari,a))
154
155 def sage_zeta_order(self, order):
156 return 1 if self.modulus <= 2 else lcm(2,order)
157
158 def sage_character(self, order, genvalues):
159 H = DirichletGroup(self.modulus, base_ring=CyclotomicField(order))
160 M = H._module
161 order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, H.zeta_order())
162 return DirichletCharacter(H,M(order_corrected_genvalues))
163
[end of lmfdb/characters/TinyConrey.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lmfdb/characters/TinyConrey.py b/lmfdb/characters/TinyConrey.py
--- a/lmfdb/characters/TinyConrey.py
+++ b/lmfdb/characters/TinyConrey.py
@@ -158,5 +158,5 @@
def sage_character(self, order, genvalues):
H = DirichletGroup(self.modulus, base_ring=CyclotomicField(order))
M = H._module
- order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, H.zeta_order())
+ order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, self.sage_zeta_order(order))
return DirichletCharacter(H,M(order_corrected_genvalues))
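Editor's note (not from the repository): the patch above only changes which zeta order is passed through to `get_sage_genvalues`. For readers checking the number theory in the file, a hedged spot-check of the `symbol_numerator` helper defined earlier: each (conductor, parity) pair should give the discriminant of the matching real quadratic field, per the Montgomery and Vaughan reference cited in its comments.

```python
assert symbol_numerator(3, 1) == -3   # odd character mod 3; disc of Q(sqrt(-3)) is -3
assert symbol_numerator(4, 1) == -4   # chi_{-4}; disc of Q(i) is -4
assert symbol_numerator(8, 0) == 8    # even character; disc of Q(sqrt(2)) is 8
assert symbol_numerator(8, 1) == -8   # odd character; disc of Q(sqrt(-2)) is -8
```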
| {"golden_diff": "diff --git a/lmfdb/characters/TinyConrey.py b/lmfdb/characters/TinyConrey.py\n--- a/lmfdb/characters/TinyConrey.py\n+++ b/lmfdb/characters/TinyConrey.py\n@@ -158,5 +158,5 @@\n def sage_character(self, order, genvalues):\n H = DirichletGroup(self.modulus, base_ring=CyclotomicField(order))\n M = H._module\n- order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, H.zeta_order())\n+ order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, self.sage_zeta_order(order))\n return DirichletCharacter(H,M(order_corrected_genvalues))\n", "issue": "Fix python 2->3 problem in verify\n\n", "before_files": [{"content": "from sage.all import (gcd, Mod, Integer, Integers, Rational, pari, Pari,\n DirichletGroup, CyclotomicField, euler_phi, lcm)\nfrom sage.misc.cachefunc import cached_method\nfrom sage.modular.dirichlet import DirichletCharacter\n\ndef symbol_numerator(cond, parity):\n # Reference: Sect. 9.3, Montgomery, Hugh L; Vaughan, Robert C. (2007).\n # Multiplicative number theory. I. Classical theory. Cambridge Studies in\n # Advanced Mathematics 97\n #\n # Let F = Q(\\sqrt(d)) with d a non zero squarefree integer then a real\n # Dirichlet character \\chi(n) can be represented as a Kronecker symbol\n # (m / n) where { m = d if # d = 1 mod 4 else m = 4d if d = 2,3 (mod) 4 }\n # and m is the discriminant of F. The conductor of \\chi is |m|.\n #\n # symbol_numerator returns the appropriate Kronecker symbol depending on\n # the conductor of \\chi.\n m = cond\n if cond % 2 == 1:\n if cond % 4 == 3:\n m = -cond\n elif cond % 8 == 4:\n # Fixed cond % 16 == 4 and cond % 16 == 12 were switched in the\n # previous version of the code.\n #\n # Let d be a non zero squarefree integer. If d = 2,3 (mod) 4 and if\n # cond = 4d = 4 ( 4n + 2) or 4 (4n + 3) = 16 n + 8 or 16n + 12 then we\n # set m = cond. 
On the other hand if d = 1 (mod) 4 and cond = 4d = 4\n # (4n +1) = 16n + 4 then we set m = -cond.\n if cond % 16 == 4:\n m = -cond\n elif cond % 16 == 8:\n if parity == 1:\n m = -cond\n else:\n return None\n return m\n\n\ndef kronecker_symbol(m):\n if m:\n return r'\\(\\displaystyle\\left(\\frac{%s}{\\bullet}\\right)\\)' % (m)\n else:\n return None\n\n###############################################################################\n## Conrey character with no call to Jonathan's code\n## in order to handle big moduli\n##\n\ndef get_sage_genvalues(modulus, order, genvalues, zeta_order):\n \"\"\"\n Helper method for computing correct genvalues when constructing\n the sage character\n \"\"\"\n phi_mod = euler_phi(modulus)\n exponent_factor = phi_mod / order\n genvalues_exponent = [x * exponent_factor for x in genvalues]\n return [x * zeta_order / phi_mod for x in genvalues_exponent]\n\n\nclass PariConreyGroup(object):\n\n def __init__(self, modulus):\n self.modulus = int(modulus)\n self.G = Pari(\"znstar({},1)\".format(modulus))\n\n def gens(self):\n return Integers(self.modulus).unit_gens()\n\n def invariants(self):\n return pari(\"znstar({},1).cyc\".format(self.modulus))\n\n\nclass ConreyCharacter(object):\n \"\"\"\n tiny implementation on Conrey index only\n \"\"\"\n\n def __init__(self, modulus, number):\n assert gcd(modulus, number)==1\n self.modulus = Integer(modulus)\n self.number = Integer(number)\n self.G = Pari(\"znstar({},1)\".format(modulus))\n self.chi_pari = pari(\"znconreylog(%s,%d)\"%(self.G,self.number))\n self.chi_0 = None\n self.indlabel = None\n\n @property\n def texname(self):\n from lmfdb.characters.web_character import WebDirichlet\n return WebDirichlet.char2tex(self.modulus, self.number)\n\n @cached_method\n def modfactor(self):\n return self.modulus.factor()\n\n @cached_method\n def conductor(self):\n B = pari(\"znconreyconductor(%s,%s,&chi0)\"%(self.G, self.chi_pari))\n if B.type() == 't_INT':\n # means chi is primitive\n self.chi_0 = self.chi_pari\n self.indlabel = self.number\n return int(B)\n else:\n self.chi_0 = pari(\"chi0\")\n G_0 = Pari(\"znstar({},1)\".format(B))\n self.indlabel = int(pari(\"znconreyexp(%s,%s)\"%(G_0,self.chi_0)))\n return int(B[0])\n\n def is_primitive(self):\n return self.conductor() == self.modulus\n\n @cached_method\n def parity(self):\n number = self.number\n par = 0\n for p,e in self.modfactor():\n if p == 2:\n if number % 4 == 3:\n par = 1 - par\n else:\n phi2 = (p-1)/Integer(2) * p **(e-1)\n if Mod(number, p ** e)**phi2 != 1:\n par = 1 - par\n return par\n\n def is_odd(self):\n return self.parity() == 1\n\n def is_even(self):\n return self.parity() == 0\n\n @cached_method\n def multiplicative_order(self):\n return Mod(self.number, self.modulus).multiplicative_order()\n\n @property\n def order(self):\n return self.multiplicative_order()\n\n @cached_method\n def kronecker_symbol(self):\n c = self.conductor()\n p = self.parity()\n return kronecker_symbol(symbol_numerator(c, p))\n\n def conreyangle(self,x):\n return Rational(pari(\"chareval(%s,znconreylog(%s,%d),%d)\"%(self.G,self.G,self.number,x)))\n\n def gauss_sum_numerical(self, a):\n return pari(\"znchargauss(%s,%s,a=%d)\"%(self.G,self.chi_pari,a))\n\n def sage_zeta_order(self, order):\n return 1 if self.modulus <= 2 else lcm(2,order)\n\n def sage_character(self, order, genvalues):\n H = DirichletGroup(self.modulus, base_ring=CyclotomicField(order))\n M = H._module\n order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, H.zeta_order())\n return 
DirichletCharacter(H,M(order_corrected_genvalues))\n", "path": "lmfdb/characters/TinyConrey.py"}]} | 2,452 | 175 |
gh_patches_debug_14866 | rasdani/github-patches | git_diff | fail2ban__fail2ban-918 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
asyncore: (9, 'Bad file descriptor')
```
Testing using /usr/bin/python2.6
Fail2ban 0.8.8 test suite. Python 2.6.8 (unknown, Jan 26 2013, 14:35:25) [GCC 4.7.2]. Please wait...
.........................................Exception in thread Thread-61:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/yoh/deb/gits/fail2ban/server/asyncserver.py", line 144, in start
asyncore.loop(use_poll = False) # fixes the "Unexpected communication problem" issue on Python 2.6 and 3.0
File "/usr/lib/python2.6/asyncore.py", line 210, in loop
poll_fun(timeout, map)
File "/usr/lib/python2.6/asyncore.py", line 140, in poll
r, w, e = select.select(r, w, e, timeout)
error: (9, 'Bad file descriptor')
.....................................F....
```
Never ran into this one before (or at least not for a while ;) )
</issue>
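
The traceback above ultimately comes down to `select.select()` being handed a file descriptor that is no longer valid — typically a dispatcher socket that was closed while still registered in the asyncore map. A minimal, hypothetical reproduction of that error class (not fail2ban code):

```
import select
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fd = s.fileno()
s.close()  # the descriptor is now invalid, but we still hand it to select()

try:
    select.select([fd], [], [], 0)
except (OSError, select.error) as exc:  # select.error on Python 2, OSError on Python 3
    print("select failed:", exc)        # e.g. (9, 'Bad file descriptor')
```

On Python 2 this surfaces exactly as `error: (9, 'Bad file descriptor')`, matching the report.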
<code>
[start of fail2ban/server/asyncserver.py]
1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
2 # vi: set ft=python sts=4 ts=4 sw=4 noet :
3
4 # This file is part of Fail2Ban.
5 #
6 # Fail2Ban is free software; you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation; either version 2 of the License, or
9 # (at your option) any later version.
10 #
11 # Fail2Ban is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Fail2Ban; if not, write to the Free Software
18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19
20 # Author: Cyril Jaquier
21 #
22
23 __author__ = "Cyril Jaquier"
24 __copyright__ = "Copyright (c) 2004 Cyril Jaquier"
25 __license__ = "GPL"
26
27 from pickle import dumps, loads, HIGHEST_PROTOCOL
28 import asyncore, asynchat, socket, os, sys, traceback, fcntl
29
30 from ..helpers import getLogger,formatExceptionInfo
31
32 # Gets the instance of the logger.
33 logSys = getLogger(__name__)
34
35 if sys.version_info >= (3,):
36 # b"" causes SyntaxError in python <= 2.5, so below implements equivalent
37 EMPTY_BYTES = bytes("", encoding="ascii")
38 else:
39 # python 2.x, string type is equivalent to bytes.
40 EMPTY_BYTES = ""
41
42 ##
43 # Request handler class.
44 #
45 # This class extends asynchat in order to provide a request handler for
46 # incoming query.
47
48 class RequestHandler(asynchat.async_chat):
49
50 if sys.version_info >= (3,):
51 END_STRING = bytes("<F2B_END_COMMAND>", encoding="ascii")
52 else:
53 END_STRING = "<F2B_END_COMMAND>"
54
55 def __init__(self, conn, transmitter):
56 asynchat.async_chat.__init__(self, conn)
57 self.__transmitter = transmitter
58 self.__buffer = []
59 # Sets the terminator.
60 self.set_terminator(RequestHandler.END_STRING)
61
62 def collect_incoming_data(self, data):
63 #logSys.debug("Received raw data: " + str(data))
64 self.__buffer.append(data)
65
66 ##
67 # Handles a new request.
68 #
69 # This method is called once we have a complete request.
70
71 def found_terminator(self):
72 # Joins the buffer items.
73 message = loads(EMPTY_BYTES.join(self.__buffer))
74 # Gives the message to the transmitter.
75 message = self.__transmitter.proceed(message)
76 # Serializes the response.
77 message = dumps(message, HIGHEST_PROTOCOL)
78 # Sends the response to the client.
79 self.push(message + RequestHandler.END_STRING)
80 # Closes the channel.
81 self.close_when_done()
82
83 def handle_error(self):
84 e1, e2 = formatExceptionInfo()
85 logSys.error("Unexpected communication error: %s" % str(e2))
86 logSys.error(traceback.format_exc().splitlines())
87 self.close()
88
89 ##
90 # Asynchronous server class.
91 #
92 # This class extends asyncore and dispatches connection requests to
93 # RequestHandler.
94
95 class AsyncServer(asyncore.dispatcher):
96
97 def __init__(self, transmitter):
98 asyncore.dispatcher.__init__(self)
99 self.__transmitter = transmitter
100 self.__sock = "/var/run/fail2ban/fail2ban.sock"
101 self.__init = False
102
103 ##
104 # Returns False as we only read the socket first.
105
106 def writable(self):
107 return False
108
109 def handle_accept(self):
110 try:
111 conn, addr = self.accept()
112 except socket.error:
113 logSys.warning("Socket error")
114 return
115 except TypeError:
116 logSys.warning("Type error")
117 return
118 AsyncServer.__markCloseOnExec(conn)
119 # Creates an instance of the handler class to handle the
120 # request/response on the incoming connection.
121 RequestHandler(conn, self.__transmitter)
122
123 ##
124 # Starts the communication server.
125 #
126 # @param sock: socket file.
127 # @param force: remove the socket file if exists.
128
129 def start(self, sock, force):
130 self.__sock = sock
131 # Remove socket
132 if os.path.exists(sock):
133 logSys.error("Fail2ban seems to be already running")
134 if force:
135 logSys.warning("Forcing execution of the server")
136 os.remove(sock)
137 else:
138 raise AsyncServerException("Server already running")
139 # Creates the socket.
140 self.create_socket(socket.AF_UNIX, socket.SOCK_STREAM)
141 self.set_reuse_addr()
142 try:
143 self.bind(sock)
144 except Exception:
145 raise AsyncServerException("Unable to bind socket %s" % self.__sock)
146 AsyncServer.__markCloseOnExec(self.socket)
147 self.listen(1)
148 # Sets the init flag.
149 self.__init = True
150 # TODO Add try..catch
151 # There's a bug report for Python 2.6/3.0 that use_poll=True yields some 2.5 incompatibilities:
152 logSys.debug("Detected Python 2.6 or greater. asyncore.loop() not using poll")
153 asyncore.loop(use_poll=False) # fixes the "Unexpected communication problem" issue on Python 2.6 and 3.0
154
155 ##
156 # Stops the communication server.
157
158 def stop(self):
159 if self.__init:
160 # Only closes the socket if it was initialized first.
161 self.close()
162 # Remove socket
163 if os.path.exists(self.__sock):
164 logSys.debug("Removed socket file " + self.__sock)
165 os.remove(self.__sock)
166 logSys.debug("Socket shutdown")
167
168 ##
169 # Marks socket as close-on-exec to avoid leaking file descriptors when
170 # running actions involving command execution.
171
172 # @param sock: socket file.
173
174 @staticmethod
175 def __markCloseOnExec(sock):
176 fd = sock.fileno()
177 flags = fcntl.fcntl(fd, fcntl.F_GETFD)
178 fcntl.fcntl(fd, fcntl.F_SETFD, flags|fcntl.FD_CLOEXEC)
179
180 ##
181 # AsyncServerException is used to wrap communication exceptions.
182
183 class AsyncServerException(Exception):
184 pass
185
[end of fail2ban/server/asyncserver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/fail2ban/server/asyncserver.py b/fail2ban/server/asyncserver.py
--- a/fail2ban/server/asyncserver.py
+++ b/fail2ban/server/asyncserver.py
@@ -149,8 +149,11 @@
self.__init = True
# TODO Add try..catch
# There's a bug report for Python 2.6/3.0 that use_poll=True yields some 2.5 incompatibilities:
- logSys.debug("Detected Python 2.6 or greater. asyncore.loop() not using poll")
- asyncore.loop(use_poll=False) # fixes the "Unexpected communication problem" issue on Python 2.6 and 3.0
+ if sys.version_info >= (2, 7) and sys.version_info < (2, 8): # if python 2.7 ...
+ logSys.debug("Detected Python 2.7. asyncore.loop() using poll")
+ asyncore.loop(use_poll=True) # workaround for the "Bad file descriptor" issue on Python 2.7, gh-161
+ else:
+ asyncore.loop(use_poll=False) # fixes the "Unexpected communication problem" issue on Python 2.6 and 3.0
##
# Stops the communication server.
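
The fix gates the asyncore loop mode on the interpreter version: `poll()` only on Python 2.7, where the `select()`-based loop is prone to the EBADF race (gh-161), and `select()` on 2.6/3.0. The same gating idiom in isolation — an illustrative sketch, not the patched file (note that asyncore itself was removed in Python 3.12):

```
import sys
import asyncore

def start_loop():
    # poll()-based loop only on CPython 2.7; 2.6 and 3.0 keep the select() path.
    use_poll = (2, 7) <= sys.version_info < (2, 8)
    asyncore.loop(use_poll=use_poll)
```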
| {"golden_diff": "diff --git a/fail2ban/server/asyncserver.py b/fail2ban/server/asyncserver.py\n--- a/fail2ban/server/asyncserver.py\n+++ b/fail2ban/server/asyncserver.py\n@@ -149,8 +149,11 @@\n \t\tself.__init = True\n \t\t# TODO Add try..catch\n \t\t# There's a bug report for Python 2.6/3.0 that use_poll=True yields some 2.5 incompatibilities:\n-\t\tlogSys.debug(\"Detected Python 2.6 or greater. asyncore.loop() not using poll\")\n-\t\tasyncore.loop(use_poll=False) # fixes the \"Unexpected communication problem\" issue on Python 2.6 and 3.0\n+\t\tif sys.version_info >= (2, 7) and sys.version_info < (2, 8): # if python 2.7 ...\n+\t\t\tlogSys.debug(\"Detected Python 2.7. asyncore.loop() using poll\")\n+\t\t\tasyncore.loop(use_poll=True) # workaround for the \"Bad file descriptor\" issue on Python 2.7, gh-161\n+\t\telse:\n+\t\t\tasyncore.loop(use_poll=False) # fixes the \"Unexpected communication problem\" issue on Python 2.6 and 3.0\n \t\n \t##\n \t# Stops the communication server.\n", "issue": "asyncore: (9, 'Bad file descriptor')\n```\nTesting using /usr/bin/python2.6\nFail2ban 0.8.8 test suite. Python 2.6.8 (unknown, Jan 26 2013, 14:35:25) [GCC 4.7.2]. Please wait...\n.........................................Exception in thread Thread-61:\nTraceback (most recent call last):\n File \"/usr/lib/python2.6/threading.py\", line 532, in __bootstrap_inner\n self.run()\n File \"/usr/lib/python2.6/threading.py\", line 484, in run\n self.__target(*self.__args, **self.__kwargs)\n File \"/home/yoh/deb/gits/fail2ban/server/asyncserver.py\", line 144, in start\n asyncore.loop(use_poll = False) # fixes the \"Unexpected communication problem\" issue on Python 2.6 and 3.0\n File \"/usr/lib/python2.6/asyncore.py\", line 210, in loop\n poll_fun(timeout, map)\n File \"/usr/lib/python2.6/asyncore.py\", line 140, in poll\n r, w, e = select.select(r, w, e, timeout)\nerror: (9, 'Bad file descriptor')\n\n.....................................F....\n```\n\nNever ran into this one before (or at least not for a while ;) )\n\n", "before_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-\n# vi: set ft=python sts=4 ts=4 sw=4 noet :\n\n# This file is part of Fail2Ban.\n#\n# Fail2Ban is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# Fail2Ban is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Fail2Ban; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n# Author: Cyril Jaquier\n# \n\n__author__ = \"Cyril Jaquier\"\n__copyright__ = \"Copyright (c) 2004 Cyril Jaquier\"\n__license__ = \"GPL\"\n\nfrom pickle import dumps, loads, HIGHEST_PROTOCOL\nimport asyncore, asynchat, socket, os, sys, traceback, fcntl\n\nfrom ..helpers import getLogger,formatExceptionInfo\n\n# Gets the instance of the logger.\nlogSys = getLogger(__name__)\n\nif sys.version_info >= (3,):\n\t# b\"\" causes SyntaxError in python <= 2.5, so below implements equivalent\n\tEMPTY_BYTES = bytes(\"\", encoding=\"ascii\")\nelse:\n\t# python 2.x, string type is equivalent to bytes.\n\tEMPTY_BYTES = \"\"\n\n##\n# Request handler class.\n#\n# This class extends asynchat in order to provide a request handler for\n# incoming query.\n\nclass RequestHandler(asynchat.async_chat):\n\t\n\tif sys.version_info >= (3,):\n\t\tEND_STRING = bytes(\"<F2B_END_COMMAND>\", encoding=\"ascii\")\n\telse:\n\t\tEND_STRING = \"<F2B_END_COMMAND>\"\n\n\tdef __init__(self, conn, transmitter):\n\t\tasynchat.async_chat.__init__(self, conn)\n\t\tself.__transmitter = transmitter\n\t\tself.__buffer = []\n\t\t# Sets the terminator.\n\t\tself.set_terminator(RequestHandler.END_STRING)\n\n\tdef collect_incoming_data(self, data):\n\t\t#logSys.debug(\"Received raw data: \" + str(data))\n\t\tself.__buffer.append(data)\n\n\t##\n\t# Handles a new request.\n\t#\n\t# This method is called once we have a complete request.\n\n\tdef found_terminator(self):\n\t\t# Joins the buffer items.\n\t\tmessage = loads(EMPTY_BYTES.join(self.__buffer))\n\t\t# Gives the message to the transmitter.\n\t\tmessage = self.__transmitter.proceed(message)\n\t\t# Serializes the response.\n\t\tmessage = dumps(message, HIGHEST_PROTOCOL)\n\t\t# Sends the response to the client.\n\t\tself.push(message + RequestHandler.END_STRING)\n\t\t# Closes the channel.\n\t\tself.close_when_done()\n\t\t\n\tdef handle_error(self):\n\t\te1, e2 = formatExceptionInfo()\n\t\tlogSys.error(\"Unexpected communication error: %s\" % str(e2))\n\t\tlogSys.error(traceback.format_exc().splitlines())\n\t\tself.close()\n\t\t\n##\n# Asynchronous server class.\n#\n# This class extends asyncore and dispatches connection requests to\n# RequestHandler.\n\nclass AsyncServer(asyncore.dispatcher):\n\n\tdef __init__(self, transmitter):\n\t\tasyncore.dispatcher.__init__(self)\n\t\tself.__transmitter = transmitter\n\t\tself.__sock = \"/var/run/fail2ban/fail2ban.sock\"\n\t\tself.__init = False\n\n\t##\n\t# Returns False as we only read the socket first.\n\n\tdef writable(self):\n\t\treturn False\n\n\tdef handle_accept(self):\n\t\ttry:\n\t\t\tconn, addr = self.accept()\n\t\texcept socket.error:\n\t\t\tlogSys.warning(\"Socket error\")\n\t\t\treturn\n\t\texcept TypeError:\n\t\t\tlogSys.warning(\"Type error\")\n\t\t\treturn\n\t\tAsyncServer.__markCloseOnExec(conn)\n\t\t# Creates an instance of the handler class to handle the\n\t\t# request/response on the incoming connection.\n\t\tRequestHandler(conn, self.__transmitter)\n\t\n\t##\n\t# Starts the communication server.\n\t#\n\t# @param sock: socket file.\n\t# @param force: remove the socket file if exists.\n\t\n\tdef start(self, sock, force):\n\t\tself.__sock = sock\n\t\t# Remove socket\n\t\tif os.path.exists(sock):\n\t\t\tlogSys.error(\"Fail2ban seems to be already 
running\")\n\t\t\tif force:\n\t\t\t\tlogSys.warning(\"Forcing execution of the server\")\n\t\t\t\tos.remove(sock)\n\t\t\telse:\n\t\t\t\traise AsyncServerException(\"Server already running\")\n\t\t# Creates the socket.\n\t\tself.create_socket(socket.AF_UNIX, socket.SOCK_STREAM)\n\t\tself.set_reuse_addr()\n\t\ttry:\n\t\t\tself.bind(sock)\n\t\texcept Exception:\n\t\t\traise AsyncServerException(\"Unable to bind socket %s\" % self.__sock)\n\t\tAsyncServer.__markCloseOnExec(self.socket)\n\t\tself.listen(1)\n\t\t# Sets the init flag.\n\t\tself.__init = True\n\t\t# TODO Add try..catch\n\t\t# There's a bug report for Python 2.6/3.0 that use_poll=True yields some 2.5 incompatibilities:\n\t\tlogSys.debug(\"Detected Python 2.6 or greater. asyncore.loop() not using poll\")\n\t\tasyncore.loop(use_poll=False) # fixes the \"Unexpected communication problem\" issue on Python 2.6 and 3.0\n\t\n\t##\n\t# Stops the communication server.\n\t\n\tdef stop(self):\n\t\tif self.__init:\n\t\t\t# Only closes the socket if it was initialized first.\n\t\t\tself.close()\n\t\t# Remove socket\n\t\tif os.path.exists(self.__sock):\n\t\t\tlogSys.debug(\"Removed socket file \" + self.__sock)\n\t\t\tos.remove(self.__sock)\n\t\tlogSys.debug(\"Socket shutdown\")\n\n\t##\n\t# Marks socket as close-on-exec to avoid leaking file descriptors when\n\t# running actions involving command execution.\n\n\t# @param sock: socket file.\n\t\n\t@staticmethod\n\tdef __markCloseOnExec(sock):\n\t\tfd = sock.fileno()\n\t\tflags = fcntl.fcntl(fd, fcntl.F_GETFD)\n\t\tfcntl.fcntl(fd, fcntl.F_SETFD, flags|fcntl.FD_CLOEXEC)\n\n##\n# AsyncServerException is used to wrap communication exceptions.\n\nclass AsyncServerException(Exception):\n\tpass\n", "path": "fail2ban/server/asyncserver.py"}]} | 2,771 | 296 |
gh_patches_debug_19056 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-4339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid link in readme of cityscape
Hi, Thank you very much for the amazing project!
I would like to report that the [model](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20200502-6ea77f0e.pth) link in the [readme](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes/README.md) for Cityscapes is invalid. 

Meanwhile, may I ask whether there are more pre-trained models on Cityscapes? I would like to evaluate the models thoroughly. 
Thank you!
</issue>
<code>
[start of mmdet/core/bbox/assigners/hungarian_assigner.py]
1 import torch
2 from scipy.optimize import linear_sum_assignment
3
4 from ..builder import BBOX_ASSIGNERS
5 from ..iou_calculators import build_iou_calculator
6 from ..transforms import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh
7 from .assign_result import AssignResult
8 from .base_assigner import BaseAssigner
9
10
11 @BBOX_ASSIGNERS.register_module()
12 class HungarianAssigner(BaseAssigner):
13 """Computes one-to-one matching between predictions and ground truth.
14
15 This class computes an assignment between the targets and the predictions
16 based on the costs. The costs are weighted sum of three components:
17 classfication cost, regression L1 cost and regression iou cost. The
18 targets don't include the no_object, so generally there are more
19 predictions than targets. After the one-to-one matching, the un-matched
20 are treated as backgrounds. Thus each query prediction will be assigned
21 with `0` or a positive integer indicating the ground truth index:
22
23 - 0: negative sample, no assigned gt
24 - positive integer: positive sample, index (1-based) of assigned gt
25
26 Args:
27 cls_weight (int | float, optional): The scale factor for classification
28 cost. Default 1.0.
29 bbox_weight (int | float, optional): The scale factor for regression
30 L1 cost. Default 1.0.
31 iou_weight (int | float, optional): The scale factor for regression
32 iou cost. Default 1.0.
33 iou_calculator (dict | optional): The config for the iou calculation.
34 Default type `BboxOverlaps2D`.
35 iou_mode (str | optional): "iou" (intersection over union), "iof"
36 (intersection over foreground), or "giou" (generalized
37 intersection over union). Default "giou".
38 """
39
40 def __init__(self,
41 cls_weight=1.,
42 bbox_weight=1.,
43 iou_weight=1.,
44 iou_calculator=dict(type='BboxOverlaps2D'),
45 iou_mode='giou'):
46 # defaultly giou cost is used in the official DETR repo.
47 self.iou_mode = iou_mode
48 self.cls_weight = cls_weight
49 self.bbox_weight = bbox_weight
50 self.iou_weight = iou_weight
51 self.iou_calculator = build_iou_calculator(iou_calculator)
52
53 def assign(self,
54 bbox_pred,
55 cls_pred,
56 gt_bboxes,
57 gt_labels,
58 img_meta,
59 gt_bboxes_ignore=None,
60 eps=1e-7):
61 """Computes one-to-one matching based on the weighted costs.
62
63 This method assign each query prediction to a ground truth or
64 background. The `assigned_gt_inds` with -1 means don't care,
65 0 means negative sample, and positive number is the index (1-based)
66 of assigned gt.
67 The assignment is done in the following steps, the order matters.
68
69 1. assign every prediction to -1
70 2. compute the weighted costs
71 3. do Hungarian matching on CPU based on the costs
72 4. assign all to 0 (background) first, then for each matched pair
73 between predictions and gts, treat this prediction as foreground
74 and assign the corresponding gt index (plus 1) to it.
75
76 Args:
77 bbox_pred (Tensor): Predicted boxes with normalized coordinates
78 (cx, cy, w, h), which are all in range [0, 1]. Shape
79 [num_query, 4].
80 cls_pred (Tensor): Predicted classification logits, shape
81 [num_query, num_class].
82 gt_bboxes (Tensor): Ground truth boxes with unnormalized
83 coordinates (x1, y1, x2, y2). Shape [num_gt, 4].
84 gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,).
85 img_meta (dict): Meta information for current image.
86 gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
87 labelled as `ignored`. Default None.
88 eps (int | float, optional): A value added to the denominator for
89 numerical stability. Default 1e-7.
90
91 Returns:
92 :obj:`AssignResult`: The assigned result.
93 """
94 assert gt_bboxes_ignore is None, \
95 'Only case when gt_bboxes_ignore is None is supported.'
96 num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0)
97
98 # 1. assign -1 by default
99 assigned_gt_inds = bbox_pred.new_full((num_bboxes, ),
100 -1,
101 dtype=torch.long)
102 assigned_labels = bbox_pred.new_full((num_bboxes, ),
103 -1,
104 dtype=torch.long)
105 if num_gts == 0 or num_bboxes == 0:
106 # No ground truth or boxes, return empty assignment
107 if num_gts == 0:
108 # No ground truth, assign all to background
109 assigned_gt_inds[:] = 0
110 return AssignResult(
111 num_gts, assigned_gt_inds, None, labels=assigned_labels)
112
113 # 2. compute the weighted costs
114 # classification cost.
115 # Following the official DETR repo, contrary to the loss that
116 # NLL is used, we approximate it in 1 - cls_score[gt_label].
117 # The 1 is a constant that doesn't change the matching,
118 # so it can be ommitted.
119 cls_score = cls_pred.softmax(-1)
120 cls_cost = -cls_score[:, gt_labels] # [num_bboxes, num_gt]
121
122 # regression L1 cost
123 img_h, img_w, _ = img_meta['img_shape']
124 factor = torch.Tensor([img_w, img_h, img_w,
125 img_h]).unsqueeze(0).to(gt_bboxes.device)
126 gt_bboxes_normalized = gt_bboxes / factor
127 bbox_cost = torch.cdist(
128 bbox_pred, bbox_xyxy_to_cxcywh(gt_bboxes_normalized),
129 p=1) # [num_bboxes, num_gt]
130
131 # regression iou cost, defaultly giou is used in official DETR.
132 bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor
133 # overlaps: [num_bboxes, num_gt]
134 overlaps = self.iou_calculator(
135 bboxes, gt_bboxes, mode=self.iou_mode, is_aligned=False)
136 # The 1 is a constant that doesn't change the matching, so ommitted.
137 iou_cost = -overlaps
138
139 # weighted sum of above three costs
140 cost = self.cls_weight * cls_cost + self.bbox_weight * bbox_cost
141 cost = cost + self.iou_weight * iou_cost
142
143 # 3. do Hungarian matching on CPU using linear_sum_assignment
144 cost = cost.detach().cpu()
145 matched_row_inds, matched_col_inds = linear_sum_assignment(cost)
146 matched_row_inds = torch.from_numpy(matched_row_inds).to(
147 bbox_pred.device)
148 matched_col_inds = torch.from_numpy(matched_col_inds).to(
149 bbox_pred.device)
150
151 # 4. assign backgrounds and foregrounds
152 # assign all indices to backgrounds first
153 assigned_gt_inds[:] = 0
154 # assign foregrounds based on matching results
155 assigned_gt_inds[matched_row_inds] = matched_col_inds + 1
156 assigned_labels[matched_row_inds] = gt_labels[matched_col_inds]
157 return AssignResult(
158 num_gts, assigned_gt_inds, None, labels=assigned_labels)
159
[end of mmdet/core/bbox/assigners/hungarian_assigner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmdet/core/bbox/assigners/hungarian_assigner.py b/mmdet/core/bbox/assigners/hungarian_assigner.py
--- a/mmdet/core/bbox/assigners/hungarian_assigner.py
+++ b/mmdet/core/bbox/assigners/hungarian_assigner.py
@@ -1,5 +1,9 @@
import torch
-from scipy.optimize import linear_sum_assignment
+
+try:
+ from scipy.optimize import linear_sum_assignment
+except ImportError:
+ linear_sum_assignment = None
from ..builder import BBOX_ASSIGNERS
from ..iou_calculators import build_iou_calculator
@@ -142,6 +146,9 @@
# 3. do Hungarian matching on CPU using linear_sum_assignment
cost = cost.detach().cpu()
+ if linear_sum_assignment is None:
+ raise ImportError('Please run "pip install scipy" '
+ 'to install scipy first.')
matched_row_inds, matched_col_inds = linear_sum_assignment(cost)
matched_row_inds = torch.from_numpy(matched_row_inds).to(
bbox_pred.device)
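
Beyond fixing the crash, the patch is an instance of the usual optional-dependency pattern: attempt the import at module load, and raise an actionable error only at the call site that needs it. A generic sketch of that pattern (illustrative only; `hungarian_match` is a made-up helper, and the error message mirrors the patch):

```
# Optional-dependency guard: import if available, fail loudly only when used.
try:
    from scipy.optimize import linear_sum_assignment
except ImportError:
    linear_sum_assignment = None


def hungarian_match(cost_matrix):
    """Hypothetical helper; raises only when the scipy-backed path is actually used."""
    if linear_sum_assignment is None:
        raise ImportError('Please run "pip install scipy" to install scipy first.')
    return linear_sum_assignment(cost_matrix)
```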
| {"golden_diff": "diff --git a/mmdet/core/bbox/assigners/hungarian_assigner.py b/mmdet/core/bbox/assigners/hungarian_assigner.py\n--- a/mmdet/core/bbox/assigners/hungarian_assigner.py\n+++ b/mmdet/core/bbox/assigners/hungarian_assigner.py\n@@ -1,5 +1,9 @@\n import torch\n-from scipy.optimize import linear_sum_assignment\n+\n+try:\n+ from scipy.optimize import linear_sum_assignment\n+except ImportError:\n+ linear_sum_assignment = None\n \n from ..builder import BBOX_ASSIGNERS\n from ..iou_calculators import build_iou_calculator\n@@ -142,6 +146,9 @@\n \n # 3. do Hungarian matching on CPU using linear_sum_assignment\n cost = cost.detach().cpu()\n+ if linear_sum_assignment is None:\n+ raise ImportError('Please run \"pip install scipy\" '\n+ 'to install scipy first.')\n matched_row_inds, matched_col_inds = linear_sum_assignment(cost)\n matched_row_inds = torch.from_numpy(matched_row_inds).to(\n bbox_pred.device)\n", "issue": "Invalid link in readme of cityscape\nHi, Thank you very much for the amazing project! \r\n\r\nI would like to report that the [model](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20200502-6ea77f0e.pth) in the [readme](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes/README.md) in the cityscape is invalid. \r\n\r\nMeanwhile, may I ask whether there are more pre-trained models on Cityscape? I would like to evaluate the models elaborately. \r\n\r\nThank you!\n", "before_files": [{"content": "import torch\nfrom scipy.optimize import linear_sum_assignment\n\nfrom ..builder import BBOX_ASSIGNERS\nfrom ..iou_calculators import build_iou_calculator\nfrom ..transforms import bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh\nfrom .assign_result import AssignResult\nfrom .base_assigner import BaseAssigner\n\n\n@BBOX_ASSIGNERS.register_module()\nclass HungarianAssigner(BaseAssigner):\n \"\"\"Computes one-to-one matching between predictions and ground truth.\n\n This class computes an assignment between the targets and the predictions\n based on the costs. The costs are weighted sum of three components:\n classfication cost, regression L1 cost and regression iou cost. The\n targets don't include the no_object, so generally there are more\n predictions than targets. After the one-to-one matching, the un-matched\n are treated as backgrounds. Thus each query prediction will be assigned\n with `0` or a positive integer indicating the ground truth index:\n\n - 0: negative sample, no assigned gt\n - positive integer: positive sample, index (1-based) of assigned gt\n\n Args:\n cls_weight (int | float, optional): The scale factor for classification\n cost. Default 1.0.\n bbox_weight (int | float, optional): The scale factor for regression\n L1 cost. Default 1.0.\n iou_weight (int | float, optional): The scale factor for regression\n iou cost. Default 1.0.\n iou_calculator (dict | optional): The config for the iou calculation.\n Default type `BboxOverlaps2D`.\n iou_mode (str | optional): \"iou\" (intersection over union), \"iof\"\n (intersection over foreground), or \"giou\" (generalized\n intersection over union). 
Default \"giou\".\n \"\"\"\n\n def __init__(self,\n cls_weight=1.,\n bbox_weight=1.,\n iou_weight=1.,\n iou_calculator=dict(type='BboxOverlaps2D'),\n iou_mode='giou'):\n # defaultly giou cost is used in the official DETR repo.\n self.iou_mode = iou_mode\n self.cls_weight = cls_weight\n self.bbox_weight = bbox_weight\n self.iou_weight = iou_weight\n self.iou_calculator = build_iou_calculator(iou_calculator)\n\n def assign(self,\n bbox_pred,\n cls_pred,\n gt_bboxes,\n gt_labels,\n img_meta,\n gt_bboxes_ignore=None,\n eps=1e-7):\n \"\"\"Computes one-to-one matching based on the weighted costs.\n\n This method assign each query prediction to a ground truth or\n background. The `assigned_gt_inds` with -1 means don't care,\n 0 means negative sample, and positive number is the index (1-based)\n of assigned gt.\n The assignment is done in the following steps, the order matters.\n\n 1. assign every prediction to -1\n 2. compute the weighted costs\n 3. do Hungarian matching on CPU based on the costs\n 4. assign all to 0 (background) first, then for each matched pair\n between predictions and gts, treat this prediction as foreground\n and assign the corresponding gt index (plus 1) to it.\n\n Args:\n bbox_pred (Tensor): Predicted boxes with normalized coordinates\n (cx, cy, w, h), which are all in range [0, 1]. Shape\n [num_query, 4].\n cls_pred (Tensor): Predicted classification logits, shape\n [num_query, num_class].\n gt_bboxes (Tensor): Ground truth boxes with unnormalized\n coordinates (x1, y1, x2, y2). Shape [num_gt, 4].\n gt_labels (Tensor): Label of `gt_bboxes`, shape (num_gt,).\n img_meta (dict): Meta information for current image.\n gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are\n labelled as `ignored`. Default None.\n eps (int | float, optional): A value added to the denominator for\n numerical stability. Default 1e-7.\n\n Returns:\n :obj:`AssignResult`: The assigned result.\n \"\"\"\n assert gt_bboxes_ignore is None, \\\n 'Only case when gt_bboxes_ignore is None is supported.'\n num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0)\n\n # 1. assign -1 by default\n assigned_gt_inds = bbox_pred.new_full((num_bboxes, ),\n -1,\n dtype=torch.long)\n assigned_labels = bbox_pred.new_full((num_bboxes, ),\n -1,\n dtype=torch.long)\n if num_gts == 0 or num_bboxes == 0:\n # No ground truth or boxes, return empty assignment\n if num_gts == 0:\n # No ground truth, assign all to background\n assigned_gt_inds[:] = 0\n return AssignResult(\n num_gts, assigned_gt_inds, None, labels=assigned_labels)\n\n # 2. 
compute the weighted costs\n # classification cost.\n # Following the official DETR repo, contrary to the loss that\n # NLL is used, we approximate it in 1 - cls_score[gt_label].\n # The 1 is a constant that doesn't change the matching,\n # so it can be ommitted.\n cls_score = cls_pred.softmax(-1)\n cls_cost = -cls_score[:, gt_labels] # [num_bboxes, num_gt]\n\n # regression L1 cost\n img_h, img_w, _ = img_meta['img_shape']\n factor = torch.Tensor([img_w, img_h, img_w,\n img_h]).unsqueeze(0).to(gt_bboxes.device)\n gt_bboxes_normalized = gt_bboxes / factor\n bbox_cost = torch.cdist(\n bbox_pred, bbox_xyxy_to_cxcywh(gt_bboxes_normalized),\n p=1) # [num_bboxes, num_gt]\n\n # regression iou cost, defaultly giou is used in official DETR.\n bboxes = bbox_cxcywh_to_xyxy(bbox_pred) * factor\n # overlaps: [num_bboxes, num_gt]\n overlaps = self.iou_calculator(\n bboxes, gt_bboxes, mode=self.iou_mode, is_aligned=False)\n # The 1 is a constant that doesn't change the matching, so ommitted.\n iou_cost = -overlaps\n\n # weighted sum of above three costs\n cost = self.cls_weight * cls_cost + self.bbox_weight * bbox_cost\n cost = cost + self.iou_weight * iou_cost\n\n # 3. do Hungarian matching on CPU using linear_sum_assignment\n cost = cost.detach().cpu()\n matched_row_inds, matched_col_inds = linear_sum_assignment(cost)\n matched_row_inds = torch.from_numpy(matched_row_inds).to(\n bbox_pred.device)\n matched_col_inds = torch.from_numpy(matched_col_inds).to(\n bbox_pred.device)\n\n # 4. assign backgrounds and foregrounds\n # assign all indices to backgrounds first\n assigned_gt_inds[:] = 0\n # assign foregrounds based on matching results\n assigned_gt_inds[matched_row_inds] = matched_col_inds + 1\n assigned_labels[matched_row_inds] = gt_labels[matched_col_inds]\n return AssignResult(\n num_gts, assigned_gt_inds, None, labels=assigned_labels)\n", "path": "mmdet/core/bbox/assigners/hungarian_assigner.py"}]} | 2,726 | 243 |
gh_patches_debug_47888 | rasdani/github-patches | git_diff | keras-team__keras-7955 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong result for cosine proximity: keras 2.0.8
# Conclusion: the Keras cosine proximity metric gets stuck at -1/3 #
As noted in numerous posts, Keras currently has a serious issue with cosine proximity:
https://github.com/fchollet/keras/issues/3031
https://github.com/fchollet/keras/issues/5046
Here is the code in a Jupyter notebook for a simple test:
```
import keras
from keras.layers import Input, Dense
from keras.models import Model
import numpy as np
# --> print keras version
print keras.__version__
# --> compute average cosine between all angles samples
def computeMeanConsineAngle(x,y):
cosMean = 0
numSample = x.shape[0]
for i in xrange(numSample):
cosMean += np.dot(x[i,:],y[i,:])/np.sqrt(np.dot(x[i,:],x[i,:])*np.dot(y[i,:],y[i,:]))
return cosMean/float(numSample)
X = np.random.random((1000,3))
Y = X
inputs = Input(shape=(3,))
preds = Dense(3,activation='linear')(inputs)
model = Model(inputs=inputs,outputs=preds)
sgd=keras.optimizers.Adam(lr=1e-2)
model.compile(optimizer=sgd ,loss='mse',metrics=['cosine_proximity'])
model.fit(X,Y, batch_size=1000, epochs=500, shuffle=False)
pred = model.predict(X)
from sklearn.metrics import mean_squared_error
mse = mean_squared_error(X, pred)
%pylab
%matplotlib inline
plt.scatter(pred,Y)
print 'mse = ', mse
print computeMeanConsineAngle(pred, Y)
testX = np.array([[1,0]])
testY = np.array([[1,0]])
- computeMeanConsineAngle(testX,testY)
```
The printed result is
```
Epoch 500/500
1000/1000 [==============================] - 0s - loss: 7.1132e-04
- cosine_proximity: -0.3329
Using matplotlib backend: TkAgg
Populating the interactive namespace from numpy and matplotlib
mse = 0.000703760391565
0.998615947541
```
**So the true cosine proximity is actually 0.9986, but Keras reports a value near -1/3. Of course Keras uses the negative of the cosine proximity for minimization purposes, but that should be -0.9986...; in any case, don't trust the value reported by the cosine proximity metric in Keras.**
</issue>
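
The -1/3 in the report is exactly the correct cosine value divided by the number of components: with 3-dimensional targets, averaging the elementwise product of two unit vectors over the last axis yields cos/3 instead of cos. A quick illustrative check in plain NumPy, independent of the notebook above:

```
import numpy as np

y = np.random.random((1000, 3))
y_true = y / np.linalg.norm(y, axis=-1, keepdims=True)
y_pred = y_true.copy()                      # perfectly aligned predictions

prod = y_true * y_pred                      # shape (1000, 3)
print(np.mean(-np.mean(prod, axis=-1)))     # ~ -0.333..., what the metric reports
print(np.mean(-np.sum(prod, axis=-1)))      # ~ -1.0, the expected value
```

Replacing the mean over the feature axis with a sum restores the expected -1.0.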
<code>
[start of keras/losses.py]
1 from __future__ import absolute_import
2 import six
3 from . import backend as K
4 from .utils.generic_utils import deserialize_keras_object
5
6
7 # noinspection SpellCheckingInspection
8 def mean_squared_error(y_true, y_pred):
9 return K.mean(K.square(y_pred - y_true), axis=-1)
10
11
12 def mean_absolute_error(y_true, y_pred):
13 return K.mean(K.abs(y_pred - y_true), axis=-1)
14
15
16 def mean_absolute_percentage_error(y_true, y_pred):
17 diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true),
18 K.epsilon(),
19 None))
20 return 100. * K.mean(diff, axis=-1)
21
22
23 def mean_squared_logarithmic_error(y_true, y_pred):
24 first_log = K.log(K.clip(y_pred, K.epsilon(), None) + 1.)
25 second_log = K.log(K.clip(y_true, K.epsilon(), None) + 1.)
26 return K.mean(K.square(first_log - second_log), axis=-1)
27
28
29 def squared_hinge(y_true, y_pred):
30 return K.mean(K.square(K.maximum(1. - y_true * y_pred, 0.)), axis=-1)
31
32
33 def hinge(y_true, y_pred):
34 return K.mean(K.maximum(1. - y_true * y_pred, 0.), axis=-1)
35
36
37 def categorical_hinge(y_true, y_pred):
38 pos = K.sum(y_true * y_pred, axis=-1)
39 neg = K.max((1. - y_true) * y_pred, axis=-1)
40 return K.maximum(0., neg - pos + 1.)
41
42
43 def logcosh(y_true, y_pred):
44 def cosh(x):
45 return (K.exp(x) + K.exp(-x)) / 2
46 return K.mean(K.log(cosh(y_pred - y_true)), axis=-1)
47
48
49 def categorical_crossentropy(y_true, y_pred):
50 return K.categorical_crossentropy(y_true, y_pred)
51
52
53 def sparse_categorical_crossentropy(y_true, y_pred):
54 return K.sparse_categorical_crossentropy(y_true, y_pred)
55
56
57 def binary_crossentropy(y_true, y_pred):
58 return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
59
60
61 def kullback_leibler_divergence(y_true, y_pred):
62 y_true = K.clip(y_true, K.epsilon(), 1)
63 y_pred = K.clip(y_pred, K.epsilon(), 1)
64 return K.sum(y_true * K.log(y_true / y_pred), axis=-1)
65
66
67 def poisson(y_true, y_pred):
68 return K.mean(y_pred - y_true * K.log(y_pred + K.epsilon()), axis=-1)
69
70
71 def cosine_proximity(y_true, y_pred):
72 y_true = K.l2_normalize(y_true, axis=-1)
73 y_pred = K.l2_normalize(y_pred, axis=-1)
74 return -K.mean(y_true * y_pred, axis=-1)
75
76
77 # Aliases.
78
79 mse = MSE = mean_squared_error
80 mae = MAE = mean_absolute_error
81 mape = MAPE = mean_absolute_percentage_error
82 msle = MSLE = mean_squared_logarithmic_error
83 kld = KLD = kullback_leibler_divergence
84 cosine = cosine_proximity
85
86
87 def serialize(loss):
88 return loss.__name__
89
90
91 def deserialize(name, custom_objects=None):
92 return deserialize_keras_object(name,
93 module_objects=globals(),
94 custom_objects=custom_objects,
95 printable_module_name='loss function')
96
97
98 def get(identifier):
99 if identifier is None:
100 return None
101 if isinstance(identifier, six.string_types):
102 identifier = str(identifier)
103 return deserialize(identifier)
104 elif callable(identifier):
105 return identifier
106 else:
107 raise ValueError('Could not interpret '
108 'loss function identifier:', identifier)
109
[end of keras/losses.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/keras/losses.py b/keras/losses.py
--- a/keras/losses.py
+++ b/keras/losses.py
@@ -71,7 +71,7 @@
def cosine_proximity(y_true, y_pred):
y_true = K.l2_normalize(y_true, axis=-1)
y_pred = K.l2_normalize(y_pred, axis=-1)
- return -K.mean(y_true * y_pred, axis=-1)
+ return -K.sum(y_true * y_pred, axis=-1)
# Aliases.
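
With the one-character change from `K.mean` to `K.sum` over the feature axis, the metric becomes the negative cosine similarity as intended. A minimal sanity check of the corrected formula, written in plain NumPy rather than against the Keras backend:

```
import numpy as np

def cosine_proximity_fixed(y_true, y_pred):
    y_true = y_true / np.linalg.norm(y_true, axis=-1, keepdims=True)
    y_pred = y_pred / np.linalg.norm(y_pred, axis=-1, keepdims=True)
    return -np.sum(y_true * y_pred, axis=-1)

a = np.array([[1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0]])
print(cosine_proximity_fixed(a, a))   # [-1.]  identical vectors
print(cosine_proximity_fixed(a, -a))  # [ 1.]  opposite vectors
print(cosine_proximity_fixed(a, b))   # ~0.0   orthogonal vectors
```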
| {"golden_diff": "diff --git a/keras/losses.py b/keras/losses.py\n--- a/keras/losses.py\n+++ b/keras/losses.py\n@@ -71,7 +71,7 @@\n def cosine_proximity(y_true, y_pred):\n y_true = K.l2_normalize(y_true, axis=-1)\n y_pred = K.l2_normalize(y_pred, axis=-1)\n- return -K.mean(y_true * y_pred, axis=-1)\n+ return -K.sum(y_true * y_pred, axis=-1)\n \n \n # Aliases.\n", "issue": "Wrong result for cosine proximity: keras 2.0.8\n# Conclusion: Observation of keras cosine proximity stuck as -1/3 #\r\nAs noted by numerous post, Keras seriously currently has an issue with cosine proximity:\r\n\r\nhttps://github.com/fchollet/keras/issues/3031\r\nhttps://github.com/fchollet/keras/issues/5046\r\n\r\nHere is the code in jupyter notebook for simple test:\r\n\r\n\r\n```\r\nimport keras\r\nfrom keras.layers import Input, Dense\r\nfrom keras.models import Model\r\nimport numpy as np\r\n\r\n# --> print keras version\r\nprint keras.__version__\r\n\r\n# --> compute average cosine between all angles samples\r\ndef computeMeanConsineAngle(x,y):\r\n cosMean = 0\r\n numSample = x.shape[0]\r\n for i in xrange(numSample):\r\n cosMean += np.dot(x[i,:],y[i,:])/np.sqrt(np.dot(x[i,:],x[i,:])*np.dot(y[i,:],y[i,:]))\r\n \r\n return cosMean/float(numSample)\r\n\r\nX = np.random.random((1000,3))\r\nY = X\r\n\r\ninputs = Input(shape=(3,))\r\npreds = Dense(3,activation='linear')(inputs)\r\nmodel = Model(inputs=inputs,outputs=preds)\r\n\r\nsgd=keras.optimizers.Adam(lr=1e-2)\r\nmodel.compile(optimizer=sgd ,loss='mse',metrics=['cosine_proximity'])\r\nmodel.fit(X,Y, batch_size=1000, epochs=500, shuffle=False)\r\n\r\npred = model.predict(X)\r\n\r\nfrom sklearn.metrics import mean_squared_error\r\nmse = mean_squared_error(X, pred)\r\n\r\n\r\n%pylab\r\n%matplotlib inline\r\nplt.scatter(pred,Y)\r\n\r\nprint 'mse = ', mse\r\nprint computeMeanConsineAngle(pred, Y)\r\n\r\ntestX = np.array([[1,0]])\r\ntestY = np.array([[1,0]])\r\n- computeMeanConsineAngle(testX,testY)\r\n```\r\n\r\nThe printed result is \r\n```\r\nEpoch 500/500\r\n1000/1000 [==============================] - 0s - loss: 7.1132e-04 \r\n- cosine_proximity: -0.3329\r\nUsing matplotlib backend: TkAgg\r\nPopulating the interactive namespace from numpy and matplotlib\r\nmse = 0.000703760391565\r\n0.998615947541\r\n```\r\n\r\n**So the true cosine proximity is actually 0.9986, but keras shows near -1/3. Of course keras would use the negative of cosine proximity for minimization purpose, but it should be -0.9986.., in any case, don't trust the outcome of metric in keras cosine proximity**\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport six\nfrom . import backend as K\nfrom .utils.generic_utils import deserialize_keras_object\n\n\n# noinspection SpellCheckingInspection\ndef mean_squared_error(y_true, y_pred):\n return K.mean(K.square(y_pred - y_true), axis=-1)\n\n\ndef mean_absolute_error(y_true, y_pred):\n return K.mean(K.abs(y_pred - y_true), axis=-1)\n\n\ndef mean_absolute_percentage_error(y_true, y_pred):\n diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true),\n K.epsilon(),\n None))\n return 100. * K.mean(diff, axis=-1)\n\n\ndef mean_squared_logarithmic_error(y_true, y_pred):\n first_log = K.log(K.clip(y_pred, K.epsilon(), None) + 1.)\n second_log = K.log(K.clip(y_true, K.epsilon(), None) + 1.)\n return K.mean(K.square(first_log - second_log), axis=-1)\n\n\ndef squared_hinge(y_true, y_pred):\n return K.mean(K.square(K.maximum(1. - y_true * y_pred, 0.)), axis=-1)\n\n\ndef hinge(y_true, y_pred):\n return K.mean(K.maximum(1. 
- y_true * y_pred, 0.), axis=-1)\n\n\ndef categorical_hinge(y_true, y_pred):\n pos = K.sum(y_true * y_pred, axis=-1)\n neg = K.max((1. - y_true) * y_pred, axis=-1)\n return K.maximum(0., neg - pos + 1.)\n\n\ndef logcosh(y_true, y_pred):\n def cosh(x):\n return (K.exp(x) + K.exp(-x)) / 2\n return K.mean(K.log(cosh(y_pred - y_true)), axis=-1)\n\n\ndef categorical_crossentropy(y_true, y_pred):\n return K.categorical_crossentropy(y_true, y_pred)\n\n\ndef sparse_categorical_crossentropy(y_true, y_pred):\n return K.sparse_categorical_crossentropy(y_true, y_pred)\n\n\ndef binary_crossentropy(y_true, y_pred):\n return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)\n\n\ndef kullback_leibler_divergence(y_true, y_pred):\n y_true = K.clip(y_true, K.epsilon(), 1)\n y_pred = K.clip(y_pred, K.epsilon(), 1)\n return K.sum(y_true * K.log(y_true / y_pred), axis=-1)\n\n\ndef poisson(y_true, y_pred):\n return K.mean(y_pred - y_true * K.log(y_pred + K.epsilon()), axis=-1)\n\n\ndef cosine_proximity(y_true, y_pred):\n y_true = K.l2_normalize(y_true, axis=-1)\n y_pred = K.l2_normalize(y_pred, axis=-1)\n return -K.mean(y_true * y_pred, axis=-1)\n\n\n# Aliases.\n\nmse = MSE = mean_squared_error\nmae = MAE = mean_absolute_error\nmape = MAPE = mean_absolute_percentage_error\nmsle = MSLE = mean_squared_logarithmic_error\nkld = KLD = kullback_leibler_divergence\ncosine = cosine_proximity\n\n\ndef serialize(loss):\n return loss.__name__\n\n\ndef deserialize(name, custom_objects=None):\n return deserialize_keras_object(name,\n module_objects=globals(),\n custom_objects=custom_objects,\n printable_module_name='loss function')\n\n\ndef get(identifier):\n if identifier is None:\n return None\n if isinstance(identifier, six.string_types):\n identifier = str(identifier)\n return deserialize(identifier)\n elif callable(identifier):\n return identifier\n else:\n raise ValueError('Could not interpret '\n 'loss function identifier:', identifier)\n", "path": "keras/losses.py"}]} | 2,177 | 129 |
gh_patches_debug_66901 | rasdani/github-patches | git_diff | ivy-llc__ivy-12830 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
multi_dot
</issue>
<code>
[start of ivy/functional/frontends/torch/linalg.py]
1 # local
2 import math
3 import ivy
4 import ivy.functional.frontends.torch as torch_frontend
5 from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
6 from ivy.func_wrapper import with_supported_dtypes, with_unsupported_dtypes
7
8
9 @to_ivy_arrays_and_back
10 def vector_norm(input, ord=2, dim=None, keepdim=False, *, dtype=None, out=None):
11 return ivy.vector_norm(
12 input, axis=dim, keepdims=keepdim, ord=ord, out=out, dtype=dtype
13 )
14
15
16 @to_ivy_arrays_and_back
17 def diagonal(A, *, offset=0, dim1=-2, dim2=-1):
18 return torch_frontend.diagonal(A, offset=offset, dim1=dim1, dim2=dim2)
19
20
21 @to_ivy_arrays_and_back
22 def divide(input, other, *, rounding_mode=None, out=None):
23 return ivy.divide(input, other, out=out)
24
25
26 @to_ivy_arrays_and_back
27 def inv(input, *, out=None):
28 return ivy.inv(input, out=out)
29
30
31 @to_ivy_arrays_and_back
32 def pinv(input, *, atol=None, rtol=None, hermitian=False, out=None):
33 # TODO: add handling for hermitian once complex numbers are supported
34 if atol is None:
35 return ivy.pinv(input, rtol=rtol, out=out)
36 else:
37 sigma = ivy.svdvals(input)[0]
38 if rtol is None:
39 rtol = atol / sigma
40 else:
41 if atol > rtol * sigma:
42 rtol = atol / sigma
43
44 return ivy.pinv(input, rtol=rtol, out=out)
45
46
47 @to_ivy_arrays_and_back
48 def det(input, *, out=None):
49 return ivy.det(input, out=out)
50
51
52 @to_ivy_arrays_and_back
53 def eigvals(input, *, out=None):
54 # TODO: add handling for out
55 return ivy.eigvals(input)
56
57
58 @to_ivy_arrays_and_back
59 def eigvalsh(input, UPLO="L", *, out=None):
60 return ivy.eigvalsh(input, UPLO=UPLO, out=out)
61
62
63 @to_ivy_arrays_and_back
64 @with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
65 def eigh(a, /, UPLO="L", out=None):
66 return ivy.eigh(a, UPLO=UPLO, out=out)
67
68
69 @to_ivy_arrays_and_back
70 def qr(input, mode="reduced", *, out=None):
71 if mode == "reduced":
72 ret = ivy.qr(input, mode="reduced")
73 elif mode == "r":
74 Q, R = ivy.qr(input, mode="r")
75 Q = []
76 ret = Q, R
77 elif mode == "complete":
78 ret = ivy.qr(input, mode="complete")
79 if ivy.exists(out):
80 return ivy.inplace_update(out, ret)
81 return ret
82
83
84 @to_ivy_arrays_and_back
85 def slogdet(input, *, out=None):
86 # TODO: add handling for out
87 return ivy.slogdet(input)
88
89
90 @to_ivy_arrays_and_back
91 def matrix_power(input, n, *, out=None):
92 return ivy.matrix_power(input, n, out=out)
93
94
95 @with_supported_dtypes(
96 {"1.11.0 and below": ("float32", "float64", "complex64", "complex128")}, "torch"
97 )
98 @to_ivy_arrays_and_back
99 def matrix_norm(input, ord="fro", dim=(-2, -1), keepdim=False, *, dtype=None, out=None):
100 if "complex" in ivy.as_ivy_dtype(input.dtype):
101 input = ivy.abs(input)
102 if dtype:
103 input = ivy.astype(input, dtype)
104 return ivy.matrix_norm(input, ord=ord, axis=dim, keepdims=keepdim, out=out)
105
106
107 @to_ivy_arrays_and_back
108 @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
109 def cross(input, other, *, dim=None, out=None):
110 return torch_frontend.miscellaneous_ops.cross(input, other, dim=dim, out=out)
111
112
113 @to_ivy_arrays_and_back
114 def vecdot(x1, x2, *, dim=-1, out=None):
115 return ivy.vecdot(x1, x2, axis=dim, out=out)
116
117
118 @to_ivy_arrays_and_back
119 def matrix_rank(input, *, atol=None, rtol=None, hermitian=False, out=None):
120 # TODO: add handling for hermitian once complex numbers are supported
121 return ivy.astype(ivy.matrix_rank(input, atol=atol, rtol=rtol, out=out), ivy.int64)
122
123
124 @to_ivy_arrays_and_back
125 def cholesky(input, *, upper=False, out=None):
126 return ivy.cholesky(input, upper=upper, out=out)
127
128
129 @to_ivy_arrays_and_back
130 def svd(A, /, *, full_matrices=True, driver=None, out=None):
131 # TODO: add handling for driver and out
132 return ivy.svd(A, compute_uv=True, full_matrices=full_matrices)
133
134
135 @to_ivy_arrays_and_back
136 def svdvals(A, *, driver=None, out=None):
137 # TODO: add handling for driver
138 return ivy.svdvals(A, out=out)
139
140
141 @to_ivy_arrays_and_back
142 def inv_ex(input, *, check_errors=False, out=None):
143 try:
144 inputInv = ivy.inv(input, out=out)
145 info = ivy.zeros(input.shape[:-2], dtype=ivy.int32)
146 return inputInv, info
147 except RuntimeError as e:
148 if check_errors:
149 raise RuntimeError(e)
150 else:
151 inputInv = input * math.nan
152 info = ivy.ones(input.shape[:-2], dtype=ivy.int32)
153 return inputInv, info
154
155
156 @to_ivy_arrays_and_back
157 @with_unsupported_dtypes({"1.11.0 and below": ("float16", "bfloat16")}, "torch")
158 def tensorinv(input, ind=2, *, out=None):
159 not_invertible = "Reshaped tensor is not invertible"
160 prod_cond = "Tensor shape must satisfy prod(A.shape[:ind]) == prod(A.shape[ind:])"
161 positive_ind_cond = "Expected a strictly positive integer for 'ind'"
162 input_shape = ivy.shape(input)
163 assert ind > 0, f"{positive_ind_cond}"
164 shape_ind_end = input_shape[:ind]
165 shape_ind_start = input_shape[ind:]
166 prod_ind_end = 1
167 prod_ind_start = 1
168 for i in shape_ind_start:
169 prod_ind_start *= i
170 for j in shape_ind_end:
171 prod_ind_end *= j
172 assert prod_ind_end == prod_ind_start, f"{prod_cond}."
173 inverse_shape = shape_ind_start + shape_ind_end
174 input = ivy.reshape(input, shape=(prod_ind_end, -1))
175 inverse_shape_tuple = tuple([*inverse_shape])
176 assert inv_ex(input, check_errors=True), f"{not_invertible}."
177 inverse_tensor = ivy.inv(input)
178 return ivy.reshape(inverse_tensor, shape=inverse_shape_tuple, out=out)
179
180
181 @with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
182 def eig(input, *, out=None):
183 return ivy.eig(input, out=out)
184
185
186 @to_ivy_arrays_and_back
187 @with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
188 def solve(input, other, *, out=None):
189 return ivy.solve(input, other, out=out)
190
191
192 @to_ivy_arrays_and_back
193 @with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
194 def tensorsolve(A, B, dims=None, *, out=None):
195 return ivy.tensorsolve(A, B, axes=dims, out=out)
196
197
198 @to_ivy_arrays_and_back
199 @with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
200 def lu_factor(A, *, pivot=True, out=None):
201 return ivy.lu_factor(A, pivot=pivot, out=out)
202
203
204 @to_ivy_arrays_and_back
205 def matmul(input, other, *, out=None):
206 return ivy.matmul(input, other, out=out)
207
208
209 @to_ivy_arrays_and_back
210 @with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
211 def vander(x, N=None):
212 if len(x.shape) < 1:
213 raise RuntimeError("Input dim must be greater than or equal to 1.")
214
215 # pytorch always return int64 for integers
216 if "int" in x.dtype:
217 x = ivy.astype(x, ivy.int64)
218
219 if len(x.shape) == 1:
220 # torch always returns the powers in ascending order
221 return ivy.vander(x, N=N, increasing=True)
222
223 # support multi-dimensional array
224 original_shape = x.shape
225 if N is None:
226 N = x.shape[-1]
227
228 # store the vander output
229 x = ivy.reshape(x, (-1, x.shape[-1]))
230 output = []
231
232 for i in range(x.shape[0]):
233 output.append(ivy.vander(x[i], N=N, increasing=True))
234
235 output = ivy.stack(output)
236 output = ivy.reshape(output, (*original_shape, N))
237 output = ivy.astype(output, x.dtype)
238 return output
239
[end of ivy/functional/frontends/torch/linalg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/torch/linalg.py b/ivy/functional/frontends/torch/linalg.py
--- a/ivy/functional/frontends/torch/linalg.py
+++ b/ivy/functional/frontends/torch/linalg.py
@@ -236,3 +236,9 @@
output = ivy.reshape(output, (*original_shape, N))
output = ivy.astype(output, x.dtype)
return output
+
+
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes({"1.11.0 and below": ("bfloat16", "float16")}, "torch")
+def multi_dot(tensors, *, out=None):
+ return ivy.multi_dot(tensors, out=out)
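
The new frontend entry simply forwards to `ivy.multi_dot`, which evaluates a chain of matrix products in an association order chosen to minimise the cost of the intermediates. A small usage sketch, assuming a recent ivy install with the NumPy backend available (shapes are illustrative, and API details may differ slightly across ivy versions):

```
import ivy

ivy.set_backend("numpy")          # any installed backend works

a = ivy.ones((10, 100))
b = ivy.ones((100, 5))
c = ivy.ones((5, 50))

# Chained matrix product; the multiplication order is chosen internally
# to minimise the size of the intermediate results.
out = ivy.multi_dot((a, b, c))
print(out.shape)                  # (10, 50)
```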
| {"golden_diff": "diff --git a/ivy/functional/frontends/torch/linalg.py b/ivy/functional/frontends/torch/linalg.py\n--- a/ivy/functional/frontends/torch/linalg.py\n+++ b/ivy/functional/frontends/torch/linalg.py\n@@ -236,3 +236,9 @@\n output = ivy.reshape(output, (*original_shape, N))\n output = ivy.astype(output, x.dtype)\n return output\n+\n+\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\n+def multi_dot(tensors, *, out=None):\n+ return ivy.multi_dot(tensors, out=out)\n", "issue": "multi_dot\n\n", "before_files": [{"content": "# local\nimport math\nimport ivy\nimport ivy.functional.frontends.torch as torch_frontend\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_supported_dtypes, with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef vector_norm(input, ord=2, dim=None, keepdim=False, *, dtype=None, out=None):\n return ivy.vector_norm(\n input, axis=dim, keepdims=keepdim, ord=ord, out=out, dtype=dtype\n )\n\n\n@to_ivy_arrays_and_back\ndef diagonal(A, *, offset=0, dim1=-2, dim2=-1):\n return torch_frontend.diagonal(A, offset=offset, dim1=dim1, dim2=dim2)\n\n\n@to_ivy_arrays_and_back\ndef divide(input, other, *, rounding_mode=None, out=None):\n return ivy.divide(input, other, out=out)\n\n\n@to_ivy_arrays_and_back\ndef inv(input, *, out=None):\n return ivy.inv(input, out=out)\n\n\n@to_ivy_arrays_and_back\ndef pinv(input, *, atol=None, rtol=None, hermitian=False, out=None):\n # TODO: add handling for hermitian once complex numbers are supported\n if atol is None:\n return ivy.pinv(input, rtol=rtol, out=out)\n else:\n sigma = ivy.svdvals(input)[0]\n if rtol is None:\n rtol = atol / sigma\n else:\n if atol > rtol * sigma:\n rtol = atol / sigma\n\n return ivy.pinv(input, rtol=rtol, out=out)\n\n\n@to_ivy_arrays_and_back\ndef det(input, *, out=None):\n return ivy.det(input, out=out)\n\n\n@to_ivy_arrays_and_back\ndef eigvals(input, *, out=None):\n # TODO: add handling for out\n return ivy.eigvals(input)\n\n\n@to_ivy_arrays_and_back\ndef eigvalsh(input, UPLO=\"L\", *, out=None):\n return ivy.eigvalsh(input, UPLO=UPLO, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\ndef eigh(a, /, UPLO=\"L\", out=None):\n return ivy.eigh(a, UPLO=UPLO, out=out)\n\n\n@to_ivy_arrays_and_back\ndef qr(input, mode=\"reduced\", *, out=None):\n if mode == \"reduced\":\n ret = ivy.qr(input, mode=\"reduced\")\n elif mode == \"r\":\n Q, R = ivy.qr(input, mode=\"r\")\n Q = []\n ret = Q, R\n elif mode == \"complete\":\n ret = ivy.qr(input, mode=\"complete\")\n if ivy.exists(out):\n return ivy.inplace_update(out, ret)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef slogdet(input, *, out=None):\n # TODO: add handling for out\n return ivy.slogdet(input)\n\n\n@to_ivy_arrays_and_back\ndef matrix_power(input, n, *, out=None):\n return ivy.matrix_power(input, n, out=out)\n\n\n@with_supported_dtypes(\n {\"1.11.0 and below\": (\"float32\", \"float64\", \"complex64\", \"complex128\")}, \"torch\"\n)\n@to_ivy_arrays_and_back\ndef matrix_norm(input, ord=\"fro\", dim=(-2, -1), keepdim=False, *, dtype=None, out=None):\n if \"complex\" in ivy.as_ivy_dtype(input.dtype):\n input = ivy.abs(input)\n if dtype:\n input = ivy.astype(input, dtype)\n return ivy.matrix_norm(input, ord=ord, axis=dim, keepdims=keepdim, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef 
cross(input, other, *, dim=None, out=None):\n return torch_frontend.miscellaneous_ops.cross(input, other, dim=dim, out=out)\n\n\n@to_ivy_arrays_and_back\ndef vecdot(x1, x2, *, dim=-1, out=None):\n return ivy.vecdot(x1, x2, axis=dim, out=out)\n\n\n@to_ivy_arrays_and_back\ndef matrix_rank(input, *, atol=None, rtol=None, hermitian=False, out=None):\n # TODO: add handling for hermitian once complex numbers are supported\n return ivy.astype(ivy.matrix_rank(input, atol=atol, rtol=rtol, out=out), ivy.int64)\n\n\n@to_ivy_arrays_and_back\ndef cholesky(input, *, upper=False, out=None):\n return ivy.cholesky(input, upper=upper, out=out)\n\n\n@to_ivy_arrays_and_back\ndef svd(A, /, *, full_matrices=True, driver=None, out=None):\n # TODO: add handling for driver and out\n return ivy.svd(A, compute_uv=True, full_matrices=full_matrices)\n\n\n@to_ivy_arrays_and_back\ndef svdvals(A, *, driver=None, out=None):\n # TODO: add handling for driver\n return ivy.svdvals(A, out=out)\n\n\n@to_ivy_arrays_and_back\ndef inv_ex(input, *, check_errors=False, out=None):\n try:\n inputInv = ivy.inv(input, out=out)\n info = ivy.zeros(input.shape[:-2], dtype=ivy.int32)\n return inputInv, info\n except RuntimeError as e:\n if check_errors:\n raise RuntimeError(e)\n else:\n inputInv = input * math.nan\n info = ivy.ones(input.shape[:-2], dtype=ivy.int32)\n return inputInv, info\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\", \"bfloat16\")}, \"torch\")\ndef tensorinv(input, ind=2, *, out=None):\n not_invertible = \"Reshaped tensor is not invertible\"\n prod_cond = \"Tensor shape must satisfy prod(A.shape[:ind]) == prod(A.shape[ind:])\"\n positive_ind_cond = \"Expected a strictly positive integer for 'ind'\"\n input_shape = ivy.shape(input)\n assert ind > 0, f\"{positive_ind_cond}\"\n shape_ind_end = input_shape[:ind]\n shape_ind_start = input_shape[ind:]\n prod_ind_end = 1\n prod_ind_start = 1\n for i in shape_ind_start:\n prod_ind_start *= i\n for j in shape_ind_end:\n prod_ind_end *= j\n assert prod_ind_end == prod_ind_start, f\"{prod_cond}.\"\n inverse_shape = shape_ind_start + shape_ind_end\n input = ivy.reshape(input, shape=(prod_ind_end, -1))\n inverse_shape_tuple = tuple([*inverse_shape])\n assert inv_ex(input, check_errors=True), f\"{not_invertible}.\"\n inverse_tensor = ivy.inv(input)\n return ivy.reshape(inverse_tensor, shape=inverse_shape_tuple, out=out)\n\n\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\ndef eig(input, *, out=None):\n return ivy.eig(input, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\ndef solve(input, other, *, out=None):\n return ivy.solve(input, other, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\ndef tensorsolve(A, B, dims=None, *, out=None):\n return ivy.tensorsolve(A, B, axes=dims, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\ndef lu_factor(A, *, pivot=True, out=None):\n return ivy.lu_factor(A, pivot=pivot, out=out)\n\n\n@to_ivy_arrays_and_back\ndef matmul(input, other, *, out=None):\n return ivy.matmul(input, other, out=out)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"bfloat16\", \"float16\")}, \"torch\")\ndef vander(x, N=None):\n if len(x.shape) < 1:\n raise RuntimeError(\"Input dim must be greater than or equal to 1.\")\n\n # 
pytorch always return int64 for integers\n if \"int\" in x.dtype:\n x = ivy.astype(x, ivy.int64)\n\n if len(x.shape) == 1:\n # torch always returns the powers in ascending order\n return ivy.vander(x, N=N, increasing=True)\n\n # support multi-dimensional array\n original_shape = x.shape\n if N is None:\n N = x.shape[-1]\n\n # store the vander output\n x = ivy.reshape(x, (-1, x.shape[-1]))\n output = []\n\n for i in range(x.shape[0]):\n output.append(ivy.vander(x[i], N=N, increasing=True))\n\n output = ivy.stack(output)\n output = ivy.reshape(output, (*original_shape, N))\n output = ivy.astype(output, x.dtype)\n return output\n", "path": "ivy/functional/frontends/torch/linalg.py"}]} | 3,355 | 166 |
gh_patches_debug_8853 | rasdani/github-patches | git_diff | jazzband__pip-tools-1592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect output on pip-compile upgrade with --dry-run and --quiet flags
##### Environment Versions
1. OS Type: any
1. Python version: any
1. pip version: any
1. pip-tools version: 3.8.0
##### Steps to replicate
1. `echo "six" > requirements.in`
1. `pip-compile`
1. `pip-compile --upgrade --dry-run --quiet`
##### Expected result
- `--dry-run` should print changes to stdout/stderr
- `--quiet` should suppress _"Dry-run, so nothing updated."_ message
```bash
#
# This file is autogenerated by pip-compile
# To update, run:
#
# pip-compile
#
six==1.12.0
```
##### Actual result
But currently it behaves the opposite way:
```
Dry-run, so nothing updated.
```
---
Refs:
- #843
- https://github.com/jazzband/pip-tools/issues/841#issuecomment-508454216
</issue>
<code>
[start of piptools/writer.py]
1 import os
2 import re
3 import sys
4 from itertools import chain
5 from typing import BinaryIO, Dict, Iterable, Iterator, List, Optional, Set, Tuple
6
7 from click import unstyle
8 from click.core import Context
9 from pip._internal.models.format_control import FormatControl
10 from pip._internal.req.req_install import InstallRequirement
11 from pip._vendor.packaging.markers import Marker
12
13 from .logging import log
14 from .utils import (
15 UNSAFE_PACKAGES,
16 comment,
17 dedup,
18 format_requirement,
19 get_compile_command,
20 key_from_ireq,
21 )
22
23 MESSAGE_UNHASHED_PACKAGE = comment(
24 "# WARNING: pip install will require the following package to be hashed."
25 "\n# Consider using a hashable URL like "
26 "https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip"
27 )
28
29 MESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(
30 "# WARNING: The following packages were not pinned, but pip requires them to be"
31 "\n# pinned when the requirements file includes hashes. "
32 "Consider using the --allow-unsafe flag."
33 )
34
35 MESSAGE_UNSAFE_PACKAGES = comment(
36 "# The following packages are considered to be unsafe in a requirements file:"
37 )
38
39 MESSAGE_UNINSTALLABLE = (
40 "The generated requirements file may be rejected by pip install. "
41 "See # WARNING lines for details."
42 )
43
44
45 strip_comes_from_line_re = re.compile(r" \(line \d+\)$")
46
47
48 def _comes_from_as_string(ireq: InstallRequirement) -> str:
49 if isinstance(ireq.comes_from, str):
50 return strip_comes_from_line_re.sub("", ireq.comes_from)
51 return key_from_ireq(ireq.comes_from)
52
53
54 def annotation_style_split(required_by: Set[str]) -> str:
55 sorted_required_by = sorted(required_by)
56 if len(sorted_required_by) == 1:
57 source = sorted_required_by[0]
58 annotation = "# via " + source
59 else:
60 annotation_lines = ["# via"]
61 for source in sorted_required_by:
62 annotation_lines.append(" # " + source)
63 annotation = "\n".join(annotation_lines)
64 return annotation
65
66
67 def annotation_style_line(required_by: Set[str]) -> str:
68 return f"# via {', '.join(sorted(required_by))}"
69
70
71 class OutputWriter:
72 def __init__(
73 self,
74 dst_file: BinaryIO,
75 click_ctx: Context,
76 dry_run: bool,
77 emit_header: bool,
78 emit_index_url: bool,
79 emit_trusted_host: bool,
80 annotate: bool,
81 annotation_style: str,
82 strip_extras: bool,
83 generate_hashes: bool,
84 default_index_url: str,
85 index_urls: Iterable[str],
86 trusted_hosts: Iterable[str],
87 format_control: FormatControl,
88 allow_unsafe: bool,
89 find_links: List[str],
90 emit_find_links: bool,
91 emit_options: bool,
92 ) -> None:
93 self.dst_file = dst_file
94 self.click_ctx = click_ctx
95 self.dry_run = dry_run
96 self.emit_header = emit_header
97 self.emit_index_url = emit_index_url
98 self.emit_trusted_host = emit_trusted_host
99 self.annotate = annotate
100 self.annotation_style = annotation_style
101 self.strip_extras = strip_extras
102 self.generate_hashes = generate_hashes
103 self.default_index_url = default_index_url
104 self.index_urls = index_urls
105 self.trusted_hosts = trusted_hosts
106 self.format_control = format_control
107 self.allow_unsafe = allow_unsafe
108 self.find_links = find_links
109 self.emit_find_links = emit_find_links
110 self.emit_options = emit_options
111
112 def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:
113 return (not ireq.editable, key_from_ireq(ireq))
114
115 def write_header(self) -> Iterator[str]:
116 if self.emit_header:
117 yield comment("#")
118 yield comment(
119 "# This file is autogenerated by pip-compile with python "
120 f"{sys.version_info.major}.{sys.version_info.minor}"
121 )
122 yield comment("# To update, run:")
123 yield comment("#")
124 compile_command = os.environ.get(
125 "CUSTOM_COMPILE_COMMAND"
126 ) or get_compile_command(self.click_ctx)
127 yield comment(f"# {compile_command}")
128 yield comment("#")
129
130 def write_index_options(self) -> Iterator[str]:
131 if self.emit_index_url:
132 for index, index_url in enumerate(dedup(self.index_urls)):
133 if index == 0 and index_url.rstrip("/") == self.default_index_url:
134 continue
135 flag = "--index-url" if index == 0 else "--extra-index-url"
136 yield f"{flag} {index_url}"
137
138 def write_trusted_hosts(self) -> Iterator[str]:
139 if self.emit_trusted_host:
140 for trusted_host in dedup(self.trusted_hosts):
141 yield f"--trusted-host {trusted_host}"
142
143 def write_format_controls(self) -> Iterator[str]:
144 for nb in dedup(sorted(self.format_control.no_binary)):
145 yield f"--no-binary {nb}"
146 for ob in dedup(sorted(self.format_control.only_binary)):
147 yield f"--only-binary {ob}"
148
149 def write_find_links(self) -> Iterator[str]:
150 if self.emit_find_links:
151 for find_link in dedup(self.find_links):
152 yield f"--find-links {find_link}"
153
154 def write_flags(self) -> Iterator[str]:
155 if not self.emit_options:
156 return
157 emitted = False
158 for line in chain(
159 self.write_index_options(),
160 self.write_find_links(),
161 self.write_trusted_hosts(),
162 self.write_format_controls(),
163 ):
164 emitted = True
165 yield line
166 if emitted:
167 yield ""
168
169 def _iter_lines(
170 self,
171 results: Set[InstallRequirement],
172 unsafe_requirements: Optional[Set[InstallRequirement]] = None,
173 markers: Optional[Dict[str, Marker]] = None,
174 hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,
175 ) -> Iterator[str]:
176 # default values
177 unsafe_requirements = unsafe_requirements or set()
178 markers = markers or {}
179 hashes = hashes or {}
180
181 # Check for unhashed or unpinned packages if at least one package does have
182 # hashes, which will trigger pip install's --require-hashes mode.
183 warn_uninstallable = False
184 has_hashes = hashes and any(hash for hash in hashes.values())
185
186 yielded = False
187
188 for line in self.write_header():
189 yield line
190 yielded = True
191 for line in self.write_flags():
192 yield line
193 yielded = True
194
195 unsafe_requirements = (
196 {r for r in results if r.name in UNSAFE_PACKAGES}
197 if not unsafe_requirements
198 else unsafe_requirements
199 )
200 packages = {r for r in results if r.name not in UNSAFE_PACKAGES}
201
202 if packages:
203 for ireq in sorted(packages, key=self._sort_key):
204 if has_hashes and not hashes.get(ireq):
205 yield MESSAGE_UNHASHED_PACKAGE
206 warn_uninstallable = True
207 line = self._format_requirement(
208 ireq, markers.get(key_from_ireq(ireq)), hashes=hashes
209 )
210 yield line
211 yielded = True
212
213 if unsafe_requirements:
214 yield ""
215 yielded = True
216 if has_hashes and not self.allow_unsafe:
217 yield MESSAGE_UNSAFE_PACKAGES_UNPINNED
218 warn_uninstallable = True
219 else:
220 yield MESSAGE_UNSAFE_PACKAGES
221
222 for ireq in sorted(unsafe_requirements, key=self._sort_key):
223 ireq_key = key_from_ireq(ireq)
224 if not self.allow_unsafe:
225 yield comment(f"# {ireq_key}")
226 else:
227 line = self._format_requirement(
228 ireq, marker=markers.get(ireq_key), hashes=hashes
229 )
230 yield line
231
232 # Yield even when there's no real content, so that blank files are written
233 if not yielded:
234 yield ""
235
236 if warn_uninstallable:
237 log.warning(MESSAGE_UNINSTALLABLE)
238
239 def write(
240 self,
241 results: Set[InstallRequirement],
242 unsafe_requirements: Set[InstallRequirement],
243 markers: Dict[str, Marker],
244 hashes: Optional[Dict[InstallRequirement, Set[str]]],
245 ) -> None:
246
247 for line in self._iter_lines(results, unsafe_requirements, markers, hashes):
248 log.info(line)
249 if not self.dry_run:
250 self.dst_file.write(unstyle(line).encode())
251 self.dst_file.write(os.linesep.encode())
252
253 def _format_requirement(
254 self,
255 ireq: InstallRequirement,
256 marker: Optional[Marker] = None,
257 hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,
258 ) -> str:
259 ireq_hashes = (hashes if hashes is not None else {}).get(ireq)
260
261 line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)
262 if self.strip_extras:
263 line = re.sub(r"\[.+?\]", "", line)
264
265 if not self.annotate:
266 return line
267
268 # Annotate what packages or reqs-ins this package is required by
269 required_by = set()
270 if hasattr(ireq, "_source_ireqs"):
271 required_by |= {
272 _comes_from_as_string(src_ireq)
273 for src_ireq in ireq._source_ireqs
274 if src_ireq.comes_from
275 }
276 elif ireq.comes_from:
277 required_by.add(_comes_from_as_string(ireq))
278
279 if required_by:
280 if self.annotation_style == "split":
281 annotation = annotation_style_split(required_by)
282 sep = "\n "
283 elif self.annotation_style == "line":
284 annotation = annotation_style_line(required_by)
285 sep = "\n " if ireq_hashes else " "
286 else: # pragma: no cover
287 raise ValueError("Invalid value for annotation style")
288 # 24 is one reasonable column size to use here, that we've used in the past
289 lines = f"{line:24}{sep}{comment(annotation)}".splitlines()
290 line = "\n".join(ln.rstrip() for ln in lines)
291
292 return line
293
[end of piptools/writer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/piptools/writer.py b/piptools/writer.py
--- a/piptools/writer.py
+++ b/piptools/writer.py
@@ -245,8 +245,11 @@
) -> None:
for line in self._iter_lines(results, unsafe_requirements, markers, hashes):
- log.info(line)
- if not self.dry_run:
+ if self.dry_run:
+ # Bypass the log level to always print this during a dry run
+ log.log(line)
+ else:
+ log.info(line)
self.dst_file.write(unstyle(line).encode())
self.dst_file.write(os.linesep.encode())
| {"golden_diff": "diff --git a/piptools/writer.py b/piptools/writer.py\n--- a/piptools/writer.py\n+++ b/piptools/writer.py\n@@ -245,8 +245,11 @@\n ) -> None:\n \n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n- log.info(line)\n- if not self.dry_run:\n+ if self.dry_run:\n+ # Bypass the log level to always print this during a dry run\n+ log.log(line)\n+ else:\n+ log.info(line)\n self.dst_file.write(unstyle(line).encode())\n self.dst_file.write(os.linesep.encode())\n", "issue": "Incorrect output on pip-compile upgrade with --dry-run and --quiet flags\n##### Environment Versions\r\n\r\n1. OS Type: any\r\n1. Python version: any\r\n1. pip version: any\r\n1. pip-tools version: 3.8.0\r\n\r\n##### Steps to replicate\r\n\r\n1. `echo \"six\" > requirements.in`\r\n1. `pip-compile`\r\n1. `pip-compile --upgrade --dry-run --quiet`\r\n\r\n##### Expected result\r\n\r\n- `--dry-run` should print changes to stdout/stderr\r\n- `--quiet` should suppress _\"Dry-run, so nothing updated.\"_ message \r\n\r\n```bash\r\n#\r\n# This file is autogenerated by pip-compile\r\n# To update, run:\r\n#\r\n# pip-compile\r\n#\r\nsix==1.12.0\r\n```\r\n\r\n##### Actual result\r\n\r\nBut currently it behaves the opposite way:\r\n\r\n```\r\nDry-run, so nothing updated.\r\n```\r\n\r\n---\r\n\r\nRefs: \r\n\r\n- #843 \r\n- https://github.com/jazzband/pip-tools/issues/841#issuecomment-508454216\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom itertools import chain\nfrom typing import BinaryIO, Dict, Iterable, Iterator, List, Optional, Set, Tuple\n\nfrom click import unstyle\nfrom click.core import Context\nfrom pip._internal.models.format_control import FormatControl\nfrom pip._internal.req.req_install import InstallRequirement\nfrom pip._vendor.packaging.markers import Marker\n\nfrom .logging import log\nfrom .utils import (\n UNSAFE_PACKAGES,\n comment,\n dedup,\n format_requirement,\n get_compile_command,\n key_from_ireq,\n)\n\nMESSAGE_UNHASHED_PACKAGE = comment(\n \"# WARNING: pip install will require the following package to be hashed.\"\n \"\\n# Consider using a hashable URL like \"\n \"https://github.com/jazzband/pip-tools/archive/SOMECOMMIT.zip\"\n)\n\nMESSAGE_UNSAFE_PACKAGES_UNPINNED = comment(\n \"# WARNING: The following packages were not pinned, but pip requires them to be\"\n \"\\n# pinned when the requirements file includes hashes. \"\n \"Consider using the --allow-unsafe flag.\"\n)\n\nMESSAGE_UNSAFE_PACKAGES = comment(\n \"# The following packages are considered to be unsafe in a requirements file:\"\n)\n\nMESSAGE_UNINSTALLABLE = (\n \"The generated requirements file may be rejected by pip install. 
\"\n \"See # WARNING lines for details.\"\n)\n\n\nstrip_comes_from_line_re = re.compile(r\" \\(line \\d+\\)$\")\n\n\ndef _comes_from_as_string(ireq: InstallRequirement) -> str:\n if isinstance(ireq.comes_from, str):\n return strip_comes_from_line_re.sub(\"\", ireq.comes_from)\n return key_from_ireq(ireq.comes_from)\n\n\ndef annotation_style_split(required_by: Set[str]) -> str:\n sorted_required_by = sorted(required_by)\n if len(sorted_required_by) == 1:\n source = sorted_required_by[0]\n annotation = \"# via \" + source\n else:\n annotation_lines = [\"# via\"]\n for source in sorted_required_by:\n annotation_lines.append(\" # \" + source)\n annotation = \"\\n\".join(annotation_lines)\n return annotation\n\n\ndef annotation_style_line(required_by: Set[str]) -> str:\n return f\"# via {', '.join(sorted(required_by))}\"\n\n\nclass OutputWriter:\n def __init__(\n self,\n dst_file: BinaryIO,\n click_ctx: Context,\n dry_run: bool,\n emit_header: bool,\n emit_index_url: bool,\n emit_trusted_host: bool,\n annotate: bool,\n annotation_style: str,\n strip_extras: bool,\n generate_hashes: bool,\n default_index_url: str,\n index_urls: Iterable[str],\n trusted_hosts: Iterable[str],\n format_control: FormatControl,\n allow_unsafe: bool,\n find_links: List[str],\n emit_find_links: bool,\n emit_options: bool,\n ) -> None:\n self.dst_file = dst_file\n self.click_ctx = click_ctx\n self.dry_run = dry_run\n self.emit_header = emit_header\n self.emit_index_url = emit_index_url\n self.emit_trusted_host = emit_trusted_host\n self.annotate = annotate\n self.annotation_style = annotation_style\n self.strip_extras = strip_extras\n self.generate_hashes = generate_hashes\n self.default_index_url = default_index_url\n self.index_urls = index_urls\n self.trusted_hosts = trusted_hosts\n self.format_control = format_control\n self.allow_unsafe = allow_unsafe\n self.find_links = find_links\n self.emit_find_links = emit_find_links\n self.emit_options = emit_options\n\n def _sort_key(self, ireq: InstallRequirement) -> Tuple[bool, str]:\n return (not ireq.editable, key_from_ireq(ireq))\n\n def write_header(self) -> Iterator[str]:\n if self.emit_header:\n yield comment(\"#\")\n yield comment(\n \"# This file is autogenerated by pip-compile with python \"\n f\"{sys.version_info.major}.{sys.version_info.minor}\"\n )\n yield comment(\"# To update, run:\")\n yield comment(\"#\")\n compile_command = os.environ.get(\n \"CUSTOM_COMPILE_COMMAND\"\n ) or get_compile_command(self.click_ctx)\n yield comment(f\"# {compile_command}\")\n yield comment(\"#\")\n\n def write_index_options(self) -> Iterator[str]:\n if self.emit_index_url:\n for index, index_url in enumerate(dedup(self.index_urls)):\n if index == 0 and index_url.rstrip(\"/\") == self.default_index_url:\n continue\n flag = \"--index-url\" if index == 0 else \"--extra-index-url\"\n yield f\"{flag} {index_url}\"\n\n def write_trusted_hosts(self) -> Iterator[str]:\n if self.emit_trusted_host:\n for trusted_host in dedup(self.trusted_hosts):\n yield f\"--trusted-host {trusted_host}\"\n\n def write_format_controls(self) -> Iterator[str]:\n for nb in dedup(sorted(self.format_control.no_binary)):\n yield f\"--no-binary {nb}\"\n for ob in dedup(sorted(self.format_control.only_binary)):\n yield f\"--only-binary {ob}\"\n\n def write_find_links(self) -> Iterator[str]:\n if self.emit_find_links:\n for find_link in dedup(self.find_links):\n yield f\"--find-links {find_link}\"\n\n def write_flags(self) -> Iterator[str]:\n if not self.emit_options:\n return\n emitted = False\n for line in 
chain(\n self.write_index_options(),\n self.write_find_links(),\n self.write_trusted_hosts(),\n self.write_format_controls(),\n ):\n emitted = True\n yield line\n if emitted:\n yield \"\"\n\n def _iter_lines(\n self,\n results: Set[InstallRequirement],\n unsafe_requirements: Optional[Set[InstallRequirement]] = None,\n markers: Optional[Dict[str, Marker]] = None,\n hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,\n ) -> Iterator[str]:\n # default values\n unsafe_requirements = unsafe_requirements or set()\n markers = markers or {}\n hashes = hashes or {}\n\n # Check for unhashed or unpinned packages if at least one package does have\n # hashes, which will trigger pip install's --require-hashes mode.\n warn_uninstallable = False\n has_hashes = hashes and any(hash for hash in hashes.values())\n\n yielded = False\n\n for line in self.write_header():\n yield line\n yielded = True\n for line in self.write_flags():\n yield line\n yielded = True\n\n unsafe_requirements = (\n {r for r in results if r.name in UNSAFE_PACKAGES}\n if not unsafe_requirements\n else unsafe_requirements\n )\n packages = {r for r in results if r.name not in UNSAFE_PACKAGES}\n\n if packages:\n for ireq in sorted(packages, key=self._sort_key):\n if has_hashes and not hashes.get(ireq):\n yield MESSAGE_UNHASHED_PACKAGE\n warn_uninstallable = True\n line = self._format_requirement(\n ireq, markers.get(key_from_ireq(ireq)), hashes=hashes\n )\n yield line\n yielded = True\n\n if unsafe_requirements:\n yield \"\"\n yielded = True\n if has_hashes and not self.allow_unsafe:\n yield MESSAGE_UNSAFE_PACKAGES_UNPINNED\n warn_uninstallable = True\n else:\n yield MESSAGE_UNSAFE_PACKAGES\n\n for ireq in sorted(unsafe_requirements, key=self._sort_key):\n ireq_key = key_from_ireq(ireq)\n if not self.allow_unsafe:\n yield comment(f\"# {ireq_key}\")\n else:\n line = self._format_requirement(\n ireq, marker=markers.get(ireq_key), hashes=hashes\n )\n yield line\n\n # Yield even when there's no real content, so that blank files are written\n if not yielded:\n yield \"\"\n\n if warn_uninstallable:\n log.warning(MESSAGE_UNINSTALLABLE)\n\n def write(\n self,\n results: Set[InstallRequirement],\n unsafe_requirements: Set[InstallRequirement],\n markers: Dict[str, Marker],\n hashes: Optional[Dict[InstallRequirement, Set[str]]],\n ) -> None:\n\n for line in self._iter_lines(results, unsafe_requirements, markers, hashes):\n log.info(line)\n if not self.dry_run:\n self.dst_file.write(unstyle(line).encode())\n self.dst_file.write(os.linesep.encode())\n\n def _format_requirement(\n self,\n ireq: InstallRequirement,\n marker: Optional[Marker] = None,\n hashes: Optional[Dict[InstallRequirement, Set[str]]] = None,\n ) -> str:\n ireq_hashes = (hashes if hashes is not None else {}).get(ireq)\n\n line = format_requirement(ireq, marker=marker, hashes=ireq_hashes)\n if self.strip_extras:\n line = re.sub(r\"\\[.+?\\]\", \"\", line)\n\n if not self.annotate:\n return line\n\n # Annotate what packages or reqs-ins this package is required by\n required_by = set()\n if hasattr(ireq, \"_source_ireqs\"):\n required_by |= {\n _comes_from_as_string(src_ireq)\n for src_ireq in ireq._source_ireqs\n if src_ireq.comes_from\n }\n elif ireq.comes_from:\n required_by.add(_comes_from_as_string(ireq))\n\n if required_by:\n if self.annotation_style == \"split\":\n annotation = annotation_style_split(required_by)\n sep = \"\\n \"\n elif self.annotation_style == \"line\":\n annotation = annotation_style_line(required_by)\n sep = \"\\n \" if ireq_hashes else \" \"\n else: # 
pragma: no cover\n raise ValueError(\"Invalid value for annotation style\")\n # 24 is one reasonable column size to use here, that we've used in the past\n lines = f\"{line:24}{sep}{comment(annotation)}\".splitlines()\n line = \"\\n\".join(ln.rstrip() for ln in lines)\n\n return line\n", "path": "piptools/writer.py"}]} | 3,770 | 151 |
gh_patches_debug_10772 | rasdani/github-patches | git_diff | freedomofpress__securedrop-3082 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delete setup.cfg
`setup.cfg` was used to set configuration options for `pytest` in the past, but since `pytest.ini` is now providing that configuration, it seems like `setup.cfg` should be deleted.
</issue>
<code>
[start of testinfra/conftest.py]
1 """
2 Configuration for TestInfra test suite for SecureDrop.
3 Handles importing host-specific test vars, so test functions
4 can be reused across multiple hosts, with varied targets.
5
6 Vars should be placed in `testinfra/vars/<hostname>.yml`.
7 """
8
9 import os
10 import yaml
11
12
13 target_host = os.environ['SECUREDROP_TESTINFRA_TARGET_HOST']
14 assert target_host != ""
15
16
17 def securedrop_import_testinfra_vars(hostname, with_header=False):
18 """
19 Import vars from a YAML file to populate tests with host-specific
20 values used in checks. For instance, the SecureDrop docroot will
21 be under /vagrant in development, but /var/www/securedrop in staging.
22
23 Vars must be stored in `testinfra/vars/<hostname>.yml`.
24 """
25 filepath = os.path.join(os.path.dirname(__file__), "vars", hostname+".yml")
26 with open(filepath, 'r') as f:
27 hostvars = yaml.safe_load(f)
28 # The directory Travis runs builds in varies by PR, so we cannot hardcode
29 # it in the YAML testvars. Read it from env var and concatenate.
30 if hostname.lower() == 'travis':
31 build_env = os.environ["TRAVIS_BUILD_DIR"]
32 hostvars['securedrop_code'] = build_env+"/securedrop"
33
34 if with_header:
35 hostvars = dict(securedrop_test_vars=hostvars)
36 return hostvars
37
38
39 def pytest_namespace():
40 return securedrop_import_testinfra_vars(target_host, with_header=True)
41
[end of testinfra/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/testinfra/conftest.py b/testinfra/conftest.py
--- a/testinfra/conftest.py
+++ b/testinfra/conftest.py
@@ -25,11 +25,6 @@
filepath = os.path.join(os.path.dirname(__file__), "vars", hostname+".yml")
with open(filepath, 'r') as f:
hostvars = yaml.safe_load(f)
- # The directory Travis runs builds in varies by PR, so we cannot hardcode
- # it in the YAML testvars. Read it from env var and concatenate.
- if hostname.lower() == 'travis':
- build_env = os.environ["TRAVIS_BUILD_DIR"]
- hostvars['securedrop_code'] = build_env+"/securedrop"
if with_header:
hostvars = dict(securedrop_test_vars=hostvars)
| {"golden_diff": "diff --git a/testinfra/conftest.py b/testinfra/conftest.py\n--- a/testinfra/conftest.py\n+++ b/testinfra/conftest.py\n@@ -25,11 +25,6 @@\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n- # The directory Travis runs builds in varies by PR, so we cannot hardcode\n- # it in the YAML testvars. Read it from env var and concatenate.\n- if hostname.lower() == 'travis':\n- build_env = os.environ[\"TRAVIS_BUILD_DIR\"]\n- hostvars['securedrop_code'] = build_env+\"/securedrop\"\n \n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n", "issue": "Delete setup.cfg\n`setup.cfg` was used to set configuration options for `pytest` in the past, but since `pytest.ini` is now providing that configuration, it seems like `setup.cfg` should be deleted.\n", "before_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport os\nimport yaml\n\n\ntarget_host = os.environ['SECUREDROP_TESTINFRA_TARGET_HOST']\nassert target_host != \"\"\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n # The directory Travis runs builds in varies by PR, so we cannot hardcode\n # it in the YAML testvars. Read it from env var and concatenate.\n if hostname.lower() == 'travis':\n build_env = os.environ[\"TRAVIS_BUILD_DIR\"]\n hostvars['securedrop_code'] = build_env+\"/securedrop\"\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n return hostvars\n\n\ndef pytest_namespace():\n return securedrop_import_testinfra_vars(target_host, with_header=True)\n", "path": "testinfra/conftest.py"}]} | 982 | 186 |
gh_patches_debug_67394 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-945 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AsyncioModbusSerialClient TypeError Coroutine
### Versions
* Python: 3.9
* OS: Ubuntu 20.04
* Pymodbus: `3.0.0dev4`
* Modbus Hardware (if used):
### Pymodbus Specific
* Server: None
* Client: rtu - async
### Description
When I try `3.0.0dev4` and the latest commit as of today. I am getting a type error that variable `coro` is not a coroutine in file `serial.py`. I am trying to create `AsyncModbusSerialClient(schedulers.ASYNC_IO, port=connPort, baudrate=connSpeed, method=connMethod, timeout=commTimeout)` in an existing running loop.
I don't think the coroutine was created correctly. What do you think?
Old:
`future = asyncio.run_coroutine_threadsafe(coro, loop=loop)`
Proposed:
` future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)`
"""Create asyncio based asynchronous serial clients.
:param port: Serial port
:param framer: Modbus Framer
:param kwargs: Serial port options
:return: asyncio event loop and serial client
"""
try:
loop = kwargs.pop("loop", None) or asyncio.get_running_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
proto_cls = kwargs.get("proto_cls") or ModbusClientProtocol
client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)
coro = client.connect
if not loop.is_running():
loop.run_until_complete(coro())
else: # loop is not asyncio.get_event_loop():
future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`
future.result()
return loop, client
```
``` py
def async_io_factory(port=None, framer=None, **kwargs):
"""Create asyncio based asynchronous serial clients.
:param port: Serial port
:param framer: Modbus Framer
:param kwargs: Serial port options
:return: asyncio event loop and serial client
"""
try:
loop = kwargs.pop("loop", None) or asyncio.get_running_loop()
except RuntimeError:
loop = asyncio.new_event_loop()
proto_cls = kwargs.get("proto_cls") or ModbusClientProtocol
client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)
coro = client.connect
if not loop.is_running():
loop.run_until_complete(coro())
else: # loop is not asyncio.get_event_loop():
future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`
future.result()
return loop, client
```
</issue>
<code>
[start of pymodbus/client/asynchronous/factory/serial.py]
1 """Factory to create asynchronous serial clients based on twisted/asyncio."""
2 # pylint: disable=missing-type-doc
3 import logging
4 import asyncio
5
6 from pymodbus.client.asynchronous import schedulers
7 from pymodbus.client.asynchronous.thread import EventLoopThread
8 from pymodbus.client.asynchronous.async_io import (
9 ModbusClientProtocol,
10 AsyncioModbusSerialClient,
11 )
12 from pymodbus.factory import ClientDecoder
13
14
15 _logger = logging.getLogger(__name__)
16
17
18 def reactor_factory(port, framer, **kwargs):
19 """Create twisted serial asynchronous client.
20
21 :param port: Serial port
22 :param framer: Modbus Framer
23 :param kwargs:
24 :return: event_loop_thread and twisted serial client
25 """
26 from twisted.internet import reactor # pylint: disable=import-outside-toplevel
27 from twisted.internet.serialport import ( # pylint: disable=import-outside-toplevel
28 SerialPort,
29 )
30 from twisted.internet.protocol import ( # pylint: disable=import-outside-toplevel
31 ClientFactory,
32 )
33
34 class SerialClientFactory(ClientFactory):
35 """Define serial client factory."""
36
37 def __init__(self, framer, proto_cls):
38 """Remember things necessary for building a protocols."""
39 self.proto_cls = proto_cls
40 self.framer = framer
41
42 def buildProtocol(self): # pylint: disable=arguments-differ
43 """Create a protocol and start the reading cycle-"""
44 proto = self.proto_cls(self.framer)
45 proto.factory = self
46 return proto
47
48 class SerialModbusClient(SerialPort): # pylint: disable=abstract-method
49 """Define serial client."""
50
51 def __init__(self, framer, *args, **kwargs):
52 """Initialize the client and start listening on the serial port.
53
54 :param factory: The factory to build clients with
55 """
56 self.decoder = ClientDecoder()
57 proto_cls = kwargs.pop("proto_cls", None)
58 proto = SerialClientFactory(framer, proto_cls).buildProtocol()
59 SerialPort.__init__(self, proto, *args, **kwargs)
60
61 proto = EventLoopThread(
62 "reactor",
63 reactor.run, # pylint: disable=no-member
64 reactor.stop, # pylint: disable=no-member
65 installSignalHandlers=0,
66 )
67 ser_client = SerialModbusClient(framer, port, reactor, **kwargs)
68
69 return proto, ser_client
70
71
72 def async_io_factory(port=None, framer=None, **kwargs):
73 """Create asyncio based asynchronous serial clients.
74
75 :param port: Serial port
76 :param framer: Modbus Framer
77 :param kwargs: Serial port options
78 :return: asyncio event loop and serial client
79 """
80 try:
81 loop = kwargs.pop("loop", None) or asyncio.get_running_loop()
82 except RuntimeError:
83 loop = asyncio.new_event_loop()
84
85 proto_cls = kwargs.get("proto_cls") or ModbusClientProtocol
86
87 client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)
88 coro = client.connect
89 if not loop.is_running():
90 loop.run_until_complete(coro())
91 else: # loop is not asyncio.get_event_loop():
92 future = asyncio.run_coroutine_threadsafe(coro, loop=loop)
93 future.result()
94
95 return loop, client
96
97
98 def get_factory(scheduler):
99 """Get protocol factory based on the backend scheduler being used.
100
101 :param scheduler: REACTOR/ASYNC_IO
102 :return:
103 :raises Exception: Failure
104 """
105 if scheduler == schedulers.REACTOR:
106 return reactor_factory
107 if scheduler == schedulers.ASYNC_IO:
108 return async_io_factory
109
110 txt = f"Allowed Schedulers: {schedulers.REACTOR}, {schedulers.ASYNC_IO}"
111 _logger.warning(txt)
112 txt = f'Invalid Scheduler "{scheduler}"'
113 raise Exception(txt)
114
[end of pymodbus/client/asynchronous/factory/serial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pymodbus/client/asynchronous/factory/serial.py b/pymodbus/client/asynchronous/factory/serial.py
--- a/pymodbus/client/asynchronous/factory/serial.py
+++ b/pymodbus/client/asynchronous/factory/serial.py
@@ -89,7 +89,7 @@
if not loop.is_running():
loop.run_until_complete(coro())
else: # loop is not asyncio.get_event_loop():
- future = asyncio.run_coroutine_threadsafe(coro, loop=loop)
+ future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)
future.result()
return loop, client
| {"golden_diff": "diff --git a/pymodbus/client/asynchronous/factory/serial.py b/pymodbus/client/asynchronous/factory/serial.py\n--- a/pymodbus/client/asynchronous/factory/serial.py\n+++ b/pymodbus/client/asynchronous/factory/serial.py\n@@ -89,7 +89,7 @@\n if not loop.is_running():\n loop.run_until_complete(coro())\n else: # loop is not asyncio.get_event_loop():\n- future = asyncio.run_coroutine_threadsafe(coro, loop=loop)\n+ future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)\n future.result()\n \n return loop, client\n", "issue": "AsyncioModbusSerialClient TypeError Coroutine\n### Versions\r\n\r\n* Python: 3.9\r\n* OS: Ubuntu 20.04\r\n* Pymodbus: `3.0.0dev4`\r\n* Modbus Hardware (if used): \r\n\r\n### Pymodbus Specific\r\n* Server: None\r\n* Client: rtu - async\r\n\r\n### Description\r\n\r\nWhen I try `3.0.0dev4` and the latest commit as of today. I am getting a type error that variable `coro` is not a coroutine in file `serial.py`. I am trying to create `AsyncModbusSerialClient(schedulers.ASYNC_IO, port=connPort, baudrate=connSpeed, method=connMethod, timeout=commTimeout)` in an existing running loop.\r\n\r\nI don't think the coroutine was created correctly. What do you think?\r\n\r\nOld:\r\n`future = asyncio.run_coroutine_threadsafe(coro, loop=loop)` \r\n\r\nProposed:\r\n` future = asyncio.run_coroutine_threadsafe(coro(), loop=loop)`\r\n \"\"\"Create asyncio based asynchronous serial clients.\r\n :param port: Serial port\r\n :param framer: Modbus Framer\r\n :param kwargs: Serial port options\r\n :return: asyncio event loop and serial client\r\n \"\"\"\r\n try:\r\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\r\n except RuntimeError:\r\n loop = asyncio.new_event_loop()\r\n\r\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\r\n\r\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\r\n coro = client.connect\r\n if not loop.is_running():\r\n loop.run_until_complete(coro())\r\n else: # loop is not asyncio.get_event_loop():\r\n future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`\r\n future.result()\r\n\r\n return loop, client\r\n```\r\n``` py\r\ndef async_io_factory(port=None, framer=None, **kwargs):\r\n \"\"\"Create asyncio based asynchronous serial clients.\r\n :param port: Serial port\r\n :param framer: Modbus Framer\r\n :param kwargs: Serial port options\r\n :return: asyncio event loop and serial client\r\n \"\"\"\r\n try:\r\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\r\n except RuntimeError:\r\n loop = asyncio.new_event_loop()\r\n\r\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\r\n\r\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\r\n coro = client.connect\r\n if not loop.is_running():\r\n loop.run_until_complete(coro())\r\n else: # loop is not asyncio.get_event_loop():\r\n future = asyncio.run_coroutine_threadsafe(coro, loop=loop) <- `Fails here`\r\n future.result()\r\n\r\n return loop, client\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Factory to create asynchronous serial clients based on twisted/asyncio.\"\"\"\n# pylint: disable=missing-type-doc\nimport logging\nimport asyncio\n\nfrom pymodbus.client.asynchronous import schedulers\nfrom pymodbus.client.asynchronous.thread import EventLoopThread\nfrom pymodbus.client.asynchronous.async_io import (\n ModbusClientProtocol,\n AsyncioModbusSerialClient,\n)\nfrom pymodbus.factory import ClientDecoder\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef reactor_factory(port, 
framer, **kwargs):\n \"\"\"Create twisted serial asynchronous client.\n\n :param port: Serial port\n :param framer: Modbus Framer\n :param kwargs:\n :return: event_loop_thread and twisted serial client\n \"\"\"\n from twisted.internet import reactor # pylint: disable=import-outside-toplevel\n from twisted.internet.serialport import ( # pylint: disable=import-outside-toplevel\n SerialPort,\n )\n from twisted.internet.protocol import ( # pylint: disable=import-outside-toplevel\n ClientFactory,\n )\n\n class SerialClientFactory(ClientFactory):\n \"\"\"Define serial client factory.\"\"\"\n\n def __init__(self, framer, proto_cls):\n \"\"\"Remember things necessary for building a protocols.\"\"\"\n self.proto_cls = proto_cls\n self.framer = framer\n\n def buildProtocol(self): # pylint: disable=arguments-differ\n \"\"\"Create a protocol and start the reading cycle-\"\"\"\n proto = self.proto_cls(self.framer)\n proto.factory = self\n return proto\n\n class SerialModbusClient(SerialPort): # pylint: disable=abstract-method\n \"\"\"Define serial client.\"\"\"\n\n def __init__(self, framer, *args, **kwargs):\n \"\"\"Initialize the client and start listening on the serial port.\n\n :param factory: The factory to build clients with\n \"\"\"\n self.decoder = ClientDecoder()\n proto_cls = kwargs.pop(\"proto_cls\", None)\n proto = SerialClientFactory(framer, proto_cls).buildProtocol()\n SerialPort.__init__(self, proto, *args, **kwargs)\n\n proto = EventLoopThread(\n \"reactor\",\n reactor.run, # pylint: disable=no-member\n reactor.stop, # pylint: disable=no-member\n installSignalHandlers=0,\n )\n ser_client = SerialModbusClient(framer, port, reactor, **kwargs)\n\n return proto, ser_client\n\n\ndef async_io_factory(port=None, framer=None, **kwargs):\n \"\"\"Create asyncio based asynchronous serial clients.\n\n :param port: Serial port\n :param framer: Modbus Framer\n :param kwargs: Serial port options\n :return: asyncio event loop and serial client\n \"\"\"\n try:\n loop = kwargs.pop(\"loop\", None) or asyncio.get_running_loop()\n except RuntimeError:\n loop = asyncio.new_event_loop()\n\n proto_cls = kwargs.get(\"proto_cls\") or ModbusClientProtocol\n\n client = AsyncioModbusSerialClient(port, proto_cls, framer, loop, **kwargs)\n coro = client.connect\n if not loop.is_running():\n loop.run_until_complete(coro())\n else: # loop is not asyncio.get_event_loop():\n future = asyncio.run_coroutine_threadsafe(coro, loop=loop)\n future.result()\n\n return loop, client\n\n\ndef get_factory(scheduler):\n \"\"\"Get protocol factory based on the backend scheduler being used.\n\n :param scheduler: REACTOR/ASYNC_IO\n :return:\n :raises Exception: Failure\n \"\"\"\n if scheduler == schedulers.REACTOR:\n return reactor_factory\n if scheduler == schedulers.ASYNC_IO:\n return async_io_factory\n\n txt = f\"Allowed Schedulers: {schedulers.REACTOR}, {schedulers.ASYNC_IO}\"\n _logger.warning(txt)\n txt = f'Invalid Scheduler \"{scheduler}\"'\n raise Exception(txt)\n", "path": "pymodbus/client/asynchronous/factory/serial.py"}]} | 2,248 | 145 |
gh_patches_debug_3124 | rasdani/github-patches | git_diff | spack__spack-36099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LLNL Cardioid homepage no longer exists
https://github.com/spack/spack/blob/fe5865da0d4ee4480a85ee1257cb310c036b0c88/var/spack/repos/builtin/packages/cardioid/package.py#L12
@rblake-llnl it looks like Cardioid's [home page](https://baasic.llnl.gov/comp-bio/cardioid-code.php) redirects to a 404 these days. You're listed as the maintainer. Has the home of cardioid moved?
</issue>
<code>
[start of var/spack/repos/builtin/packages/cardioid/package.py]
1 # Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack.package import *
7
8
9 class Cardioid(CMakePackage):
10 """Cardiac simulation suite."""
11
12 homepage = "https://baasic.llnl.gov/comp-bio/cardioid-code.php"
13 git = "https://github.com/LLNL/cardioid.git"
14 maintainers("rblake-llnl")
15
16 version("develop", branch="master")
17 version("elecfem", branch="elec-fem")
18
19 variant("cuda", default=False, description="Build with cuda support")
20 variant("mfem", default=False, description="Build with mfem support")
21
22 depends_on("blas")
23 depends_on("lapack")
24 depends_on("mpi")
25 depends_on("cuda", when="+cuda")
26 depends_on("mfem+mpi+superlu-dist+lapack", when="+mfem")
27 depends_on("hypre+cuda", when="+mfem+cuda")
28 depends_on("[email protected]:", type="build")
29 depends_on("perl", type="build")
30
31 def cmake_args(self):
32 spec = self.spec
33 args = [
34 "-DLAPACK_LIB:PATH=" + ";".join(spec["lapack"].libs.libraries),
35 "-DBLAS_LIB:PATH=" + ";".join(spec["blas"].libs.libraries),
36 "-DENABLE_OPENMP:BOOL=ON",
37 "-DENABLE_MPI:BOOL=ON",
38 "-DENABLE_FIND_MPI:BOOL=OFF",
39 "-DMPI_C_COMPILER:STRING=" + spec["mpi"].mpicc,
40 "-DMPI_CXX_COMPILER:STRING=" + spec["mpi"].mpicxx,
41 "-DCMAKE_C_COMPILER:STRING=" + spec["mpi"].mpicc,
42 "-DCMAKE_CXX_COMPILER:STRING=" + spec["mpi"].mpicxx,
43 ]
44
45 if "+cuda" in self.spec:
46 args.append("-DENABLE_CUDA:BOOL=ON")
47 args.append("-DCUDA_TOOLKIT_ROOT:PATH=" + spec["cuda"].prefix)
48 else:
49 args.append("-DENABLE_CUDA:BOOL=OFF")
50
51 if "+mfem" in self.spec:
52 args.append("-DMFEM_DIR:PATH=" + spec["mfem"].prefix)
53 return args
54
[end of var/spack/repos/builtin/packages/cardioid/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/cardioid/package.py b/var/spack/repos/builtin/packages/cardioid/package.py
--- a/var/spack/repos/builtin/packages/cardioid/package.py
+++ b/var/spack/repos/builtin/packages/cardioid/package.py
@@ -9,7 +9,7 @@
class Cardioid(CMakePackage):
"""Cardiac simulation suite."""
- homepage = "https://baasic.llnl.gov/comp-bio/cardioid-code.php"
+ homepage = "https://baasic.llnl.gov/comp-bio/cardioid-code"
git = "https://github.com/LLNL/cardioid.git"
maintainers("rblake-llnl")
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/cardioid/package.py b/var/spack/repos/builtin/packages/cardioid/package.py\n--- a/var/spack/repos/builtin/packages/cardioid/package.py\n+++ b/var/spack/repos/builtin/packages/cardioid/package.py\n@@ -9,7 +9,7 @@\n class Cardioid(CMakePackage):\n \"\"\"Cardiac simulation suite.\"\"\"\n \n- homepage = \"https://baasic.llnl.gov/comp-bio/cardioid-code.php\"\n+ homepage = \"https://baasic.llnl.gov/comp-bio/cardioid-code\"\n git = \"https://github.com/LLNL/cardioid.git\"\n maintainers(\"rblake-llnl\")\n", "issue": "LLNL Cardioid homepage no longer exists\nhttps://github.com/spack/spack/blob/fe5865da0d4ee4480a85ee1257cb310c036b0c88/var/spack/repos/builtin/packages/cardioid/package.py#L12\r\n\r\n@rblake-llnl it looks like Cardioid's [home page](https://baasic.llnl.gov/comp-bio/cardioid-code.php) redirects to a 404 these days. You're listed as the maintainer. Has the home of cardioid moved?\n", "before_files": [{"content": "# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack.package import *\n\n\nclass Cardioid(CMakePackage):\n \"\"\"Cardiac simulation suite.\"\"\"\n\n homepage = \"https://baasic.llnl.gov/comp-bio/cardioid-code.php\"\n git = \"https://github.com/LLNL/cardioid.git\"\n maintainers(\"rblake-llnl\")\n\n version(\"develop\", branch=\"master\")\n version(\"elecfem\", branch=\"elec-fem\")\n\n variant(\"cuda\", default=False, description=\"Build with cuda support\")\n variant(\"mfem\", default=False, description=\"Build with mfem support\")\n\n depends_on(\"blas\")\n depends_on(\"lapack\")\n depends_on(\"mpi\")\n depends_on(\"cuda\", when=\"+cuda\")\n depends_on(\"mfem+mpi+superlu-dist+lapack\", when=\"+mfem\")\n depends_on(\"hypre+cuda\", when=\"+mfem+cuda\")\n depends_on(\"[email protected]:\", type=\"build\")\n depends_on(\"perl\", type=\"build\")\n\n def cmake_args(self):\n spec = self.spec\n args = [\n \"-DLAPACK_LIB:PATH=\" + \";\".join(spec[\"lapack\"].libs.libraries),\n \"-DBLAS_LIB:PATH=\" + \";\".join(spec[\"blas\"].libs.libraries),\n \"-DENABLE_OPENMP:BOOL=ON\",\n \"-DENABLE_MPI:BOOL=ON\",\n \"-DENABLE_FIND_MPI:BOOL=OFF\",\n \"-DMPI_C_COMPILER:STRING=\" + spec[\"mpi\"].mpicc,\n \"-DMPI_CXX_COMPILER:STRING=\" + spec[\"mpi\"].mpicxx,\n \"-DCMAKE_C_COMPILER:STRING=\" + spec[\"mpi\"].mpicc,\n \"-DCMAKE_CXX_COMPILER:STRING=\" + spec[\"mpi\"].mpicxx,\n ]\n\n if \"+cuda\" in self.spec:\n args.append(\"-DENABLE_CUDA:BOOL=ON\")\n args.append(\"-DCUDA_TOOLKIT_ROOT:PATH=\" + spec[\"cuda\"].prefix)\n else:\n args.append(\"-DENABLE_CUDA:BOOL=OFF\")\n\n if \"+mfem\" in self.spec:\n args.append(\"-DMFEM_DIR:PATH=\" + spec[\"mfem\"].prefix)\n return args\n", "path": "var/spack/repos/builtin/packages/cardioid/package.py"}]} | 1,310 | 156 |
gh_patches_debug_37491 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-3404 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
query string arrays are not fully displayed
##### Steps to reproduce the problem:
1. visit through mitmproxy/mitmdump e.g. http://example.com/?first=value&arr[]=foo+bar&arr[]=baz
2. Check the query parameters in the request
3. Notice that they contain more data than mitmproxy/mitmdump shows
##### Any other comments? What have you tried so far?
The following script shows all the data:
```
#!/usr/bin/env python3
from urllib.parse import urlparse, parse_qs
url = "http://example.com/?first=value&arr[]=foo+bar&arr[]=baz"
parts = urlparse(url)
print(parse_qs(parts.query))
```
Output:
`{'first': ['value'], 'arr[]': ['foo bar', 'baz']}`
But mitmproxy/mitmdump only shows:
```
first: value
arr[]: foo bar
```
##### System information
<!-- Paste the output of "mitmproxy --version" here. -->
Mitmproxy: 3.0.4
Python: 3.5.2
OpenSSL: OpenSSL 1.0.2g 1 Mar 2016
Platform: Linux-4.10.0-42-generic-x86_64-with-Ubuntu-16.04-xenial
<!-- Please use the mitmproxy forums (https://discourse.mitmproxy.org/) for support/how-to questions. Thanks! :) -->
</issue>
<code>
[start of mitmproxy/contentviews/base.py]
1 # Default view cutoff *in lines*
2 import typing
3
4 KEY_MAX = 30
5
6 TTextType = typing.Union[str, bytes] # FIXME: This should be either bytes or str ultimately.
7 TViewLine = typing.List[typing.Tuple[str, TTextType]]
8 TViewResult = typing.Tuple[str, typing.Iterator[TViewLine]]
9
10
11 class View:
12 name: str = None
13 content_types: typing.List[str] = []
14
15 def __call__(self, data: bytes, **metadata) -> TViewResult:
16 """
17 Transform raw data into human-readable output.
18
19 Args:
20 data: the data to decode/format.
21 metadata: optional keyword-only arguments for metadata. Implementations must not
22 rely on a given argument being present.
23
24 Returns:
25 A (description, content generator) tuple.
26
27 The content generator yields lists of (style, text) tuples, where each list represents
28 a single line. ``text`` is a unfiltered byte string which may need to be escaped,
29 depending on the used output.
30
31 Caveats:
32 The content generator must not yield tuples of tuples,
33 because urwid cannot process that. You have to yield a *list* of tuples per line.
34 """
35 raise NotImplementedError() # pragma: no cover
36
37
38 def format_dict(
39 d: typing.Mapping[TTextType, TTextType]
40 ) -> typing.Iterator[TViewLine]:
41 """
42 Helper function that transforms the given dictionary into a list of
43 [
44 ("key", key )
45 ("value", value)
46 ]
47 entries, where key is padded to a uniform width.
48 """
49
50 max_key_len = max((len(k) for k in d.keys()), default=0)
51 max_key_len = min((max_key_len, KEY_MAX), default=0)
52 for key, value in d.items():
53 if isinstance(key, bytes):
54 key += b":"
55 else:
56 key += ":"
57 key = key.ljust(max_key_len + 2)
58 yield [
59 ("header", key),
60 ("text", value)
61 ]
62
63
64 def format_text(text: TTextType) -> typing.Iterator[TViewLine]:
65 """
66 Helper function that transforms bytes into the view output format.
67 """
68 for line in text.splitlines():
69 yield [("text", line)]
70
[end of mitmproxy/contentviews/base.py]
[start of mitmproxy/contentviews/urlencoded.py]
1 from mitmproxy.net.http import url
2 from mitmproxy.coretypes import multidict
3 from . import base
4
5
6 class ViewURLEncoded(base.View):
7 name = "URL-encoded"
8 content_types = ["application/x-www-form-urlencoded"]
9
10 def __call__(self, data, **metadata):
11 try:
12 data = data.decode("ascii", "strict")
13 except ValueError:
14 return None
15 d = url.decode(data)
16 return "URLEncoded form", base.format_dict(multidict.MultiDict(d))
17
[end of mitmproxy/contentviews/urlencoded.py]
[start of mitmproxy/contentviews/query.py]
1 from typing import List # noqa
2
3 from . import base
4
5
6 class ViewQuery(base.View):
7 name = "Query"
8
9 def __call__(self, data, **metadata):
10 query = metadata.get("query")
11 if query:
12 return "Query", base.format_dict(query)
13 else:
14 return "Query", base.format_text("")
15
[end of mitmproxy/contentviews/query.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/contentviews/base.py b/mitmproxy/contentviews/base.py
--- a/mitmproxy/contentviews/base.py
+++ b/mitmproxy/contentviews/base.py
@@ -35,32 +35,52 @@
raise NotImplementedError() # pragma: no cover
-def format_dict(
- d: typing.Mapping[TTextType, TTextType]
-) -> typing.Iterator[TViewLine]:
+def format_pairs(
+ items: typing.Iterable[typing.Tuple[TTextType, TTextType]]
+)-> typing.Iterator[TViewLine]:
+
"""
- Helper function that transforms the given dictionary into a list of
+ Helper function that accepts a list of (k,v) pairs into a list of
[
- ("key", key )
+ ("key", key )
("value", value)
]
- entries, where key is padded to a uniform width.
+ where key is padded to a uniform width
"""
- max_key_len = max((len(k) for k in d.keys()), default=0)
+ max_key_len = max((len(k[0]) for k in items), default=0)
max_key_len = min((max_key_len, KEY_MAX), default=0)
- for key, value in d.items():
+
+ for key, value in items:
if isinstance(key, bytes):
+
key += b":"
else:
key += ":"
+
key = key.ljust(max_key_len + 2)
+
yield [
("header", key),
("text", value)
]
+def format_dict(
+ d: typing.Mapping[TTextType, TTextType]
+) -> typing.Iterator[TViewLine]:
+ """
+ Helper function that transforms the given dictionary into a list of
+ [
+ ("key", key )
+ ("value", value)
+ ]
+ entries, where key is padded to a uniform width.
+ """
+
+ return format_pairs(d.items())
+
+
def format_text(text: TTextType) -> typing.Iterator[TViewLine]:
"""
Helper function that transforms bytes into the view output format.
diff --git a/mitmproxy/contentviews/query.py b/mitmproxy/contentviews/query.py
--- a/mitmproxy/contentviews/query.py
+++ b/mitmproxy/contentviews/query.py
@@ -9,6 +9,6 @@
def __call__(self, data, **metadata):
query = metadata.get("query")
if query:
- return "Query", base.format_dict(query)
+ return "Query", base.format_pairs(query.items(multi=True))
else:
return "Query", base.format_text("")
diff --git a/mitmproxy/contentviews/urlencoded.py b/mitmproxy/contentviews/urlencoded.py
--- a/mitmproxy/contentviews/urlencoded.py
+++ b/mitmproxy/contentviews/urlencoded.py
@@ -1,5 +1,4 @@
from mitmproxy.net.http import url
-from mitmproxy.coretypes import multidict
from . import base
@@ -13,4 +12,4 @@
except ValueError:
return None
d = url.decode(data)
- return "URLEncoded form", base.format_dict(multidict.MultiDict(d))
+ return "URLEncoded form", base.format_pairs(d)
| {"golden_diff": "diff --git a/mitmproxy/contentviews/base.py b/mitmproxy/contentviews/base.py\n--- a/mitmproxy/contentviews/base.py\n+++ b/mitmproxy/contentviews/base.py\n@@ -35,32 +35,52 @@\n raise NotImplementedError() # pragma: no cover\n \n \n-def format_dict(\n- d: typing.Mapping[TTextType, TTextType]\n-) -> typing.Iterator[TViewLine]:\n+def format_pairs(\n+ items: typing.Iterable[typing.Tuple[TTextType, TTextType]]\n+)-> typing.Iterator[TViewLine]:\n+\n \"\"\"\n- Helper function that transforms the given dictionary into a list of\n+ Helper function that accepts a list of (k,v) pairs into a list of\n [\n- (\"key\", key )\n+ (\"key\", key )\n (\"value\", value)\n ]\n- entries, where key is padded to a uniform width.\n+ where key is padded to a uniform width\n \"\"\"\n \n- max_key_len = max((len(k) for k in d.keys()), default=0)\n+ max_key_len = max((len(k[0]) for k in items), default=0)\n max_key_len = min((max_key_len, KEY_MAX), default=0)\n- for key, value in d.items():\n+\n+ for key, value in items:\n if isinstance(key, bytes):\n+\n key += b\":\"\n else:\n key += \":\"\n+\n key = key.ljust(max_key_len + 2)\n+\n yield [\n (\"header\", key),\n (\"text\", value)\n ]\n \n \n+def format_dict(\n+ d: typing.Mapping[TTextType, TTextType]\n+) -> typing.Iterator[TViewLine]:\n+ \"\"\"\n+ Helper function that transforms the given dictionary into a list of\n+ [\n+ (\"key\", key )\n+ (\"value\", value)\n+ ]\n+ entries, where key is padded to a uniform width.\n+ \"\"\"\n+\n+ return format_pairs(d.items())\n+\n+\n def format_text(text: TTextType) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms bytes into the view output format.\ndiff --git a/mitmproxy/contentviews/query.py b/mitmproxy/contentviews/query.py\n--- a/mitmproxy/contentviews/query.py\n+++ b/mitmproxy/contentviews/query.py\n@@ -9,6 +9,6 @@\n def __call__(self, data, **metadata):\n query = metadata.get(\"query\")\n if query:\n- return \"Query\", base.format_dict(query)\n+ return \"Query\", base.format_pairs(query.items(multi=True))\n else:\n return \"Query\", base.format_text(\"\")\ndiff --git a/mitmproxy/contentviews/urlencoded.py b/mitmproxy/contentviews/urlencoded.py\n--- a/mitmproxy/contentviews/urlencoded.py\n+++ b/mitmproxy/contentviews/urlencoded.py\n@@ -1,5 +1,4 @@\n from mitmproxy.net.http import url\n-from mitmproxy.coretypes import multidict\n from . import base\n \n \n@@ -13,4 +12,4 @@\n except ValueError:\n return None\n d = url.decode(data)\n- return \"URLEncoded form\", base.format_dict(multidict.MultiDict(d))\n+ return \"URLEncoded form\", base.format_pairs(d)\n", "issue": "query string arrays are not fully displayed\n##### Steps to reproduce the problem:\r\n\r\n1. visit through mitmproxy/mitmdump e.g. http://example.com/?first=value&arr[]=foo+bar&arr[]=baz\r\n2. Check the query parameters in the request\r\n3. Notice that they contain more data than mitmproxy/mitmdump shows\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nThe following script shows all the data:\r\n\r\n```\r\n#!/usr/bin/env python3\r\n\r\nfrom urllib.parse import urlparse, parse_qs\r\n\r\nurl = \"http://example.com/?first=value&arr[]=foo+bar&arr[]=baz\"\r\nparts = urlparse(url)\r\nprint(parse_qs(parts.query))\r\n```\r\n\r\nOutput:\r\n`{'first': ['value'], 'arr[]': ['foo bar', 'baz']}`\r\n\r\nBut mitmproxy/mitmdump only shows:\r\n```\r\n first: value\r\n arr[]: foo bar\r\n```\r\n\r\n##### System information\r\n\r\n<!-- Paste the output of \"mitmproxy --version\" here. 
-->\r\n\r\nMitmproxy: 3.0.4\r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.0.2g 1 Mar 2016\r\nPlatform: Linux-4.10.0-42-generic-x86_64-with-Ubuntu-16.04-xenial\r\n\r\n<!-- Please use the mitmproxy forums (https://discourse.mitmproxy.org/) for support/how-to questions. Thanks! :) -->\r\n\n", "before_files": [{"content": "# Default view cutoff *in lines*\nimport typing\n\nKEY_MAX = 30\n\nTTextType = typing.Union[str, bytes] # FIXME: This should be either bytes or str ultimately.\nTViewLine = typing.List[typing.Tuple[str, TTextType]]\nTViewResult = typing.Tuple[str, typing.Iterator[TViewLine]]\n\n\nclass View:\n name: str = None\n content_types: typing.List[str] = []\n\n def __call__(self, data: bytes, **metadata) -> TViewResult:\n \"\"\"\n Transform raw data into human-readable output.\n\n Args:\n data: the data to decode/format.\n metadata: optional keyword-only arguments for metadata. Implementations must not\n rely on a given argument being present.\n\n Returns:\n A (description, content generator) tuple.\n\n The content generator yields lists of (style, text) tuples, where each list represents\n a single line. ``text`` is a unfiltered byte string which may need to be escaped,\n depending on the used output.\n\n Caveats:\n The content generator must not yield tuples of tuples,\n because urwid cannot process that. You have to yield a *list* of tuples per line.\n \"\"\"\n raise NotImplementedError() # pragma: no cover\n\n\ndef format_dict(\n d: typing.Mapping[TTextType, TTextType]\n) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms the given dictionary into a list of\n [\n (\"key\", key )\n (\"value\", value)\n ]\n entries, where key is padded to a uniform width.\n \"\"\"\n\n max_key_len = max((len(k) for k in d.keys()), default=0)\n max_key_len = min((max_key_len, KEY_MAX), default=0)\n for key, value in d.items():\n if isinstance(key, bytes):\n key += b\":\"\n else:\n key += \":\"\n key = key.ljust(max_key_len + 2)\n yield [\n (\"header\", key),\n (\"text\", value)\n ]\n\n\ndef format_text(text: TTextType) -> typing.Iterator[TViewLine]:\n \"\"\"\n Helper function that transforms bytes into the view output format.\n \"\"\"\n for line in text.splitlines():\n yield [(\"text\", line)]\n", "path": "mitmproxy/contentviews/base.py"}, {"content": "from mitmproxy.net.http import url\nfrom mitmproxy.coretypes import multidict\nfrom . import base\n\n\nclass ViewURLEncoded(base.View):\n name = \"URL-encoded\"\n content_types = [\"application/x-www-form-urlencoded\"]\n\n def __call__(self, data, **metadata):\n try:\n data = data.decode(\"ascii\", \"strict\")\n except ValueError:\n return None\n d = url.decode(data)\n return \"URLEncoded form\", base.format_dict(multidict.MultiDict(d))\n", "path": "mitmproxy/contentviews/urlencoded.py"}, {"content": "from typing import List # noqa\n\nfrom . import base\n\n\nclass ViewQuery(base.View):\n name = \"Query\"\n\n def __call__(self, data, **metadata):\n query = metadata.get(\"query\")\n if query:\n return \"Query\", base.format_dict(query)\n else:\n return \"Query\", base.format_text(\"\")\n", "path": "mitmproxy/contentviews/query.py"}]} | 1,770 | 727 |
gh_patches_debug_18641 | rasdani/github-patches | git_diff | litestar-org__litestar-1009 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Structlog example no longer works
Me again. Sorry 🙈
**Describe the bug**
Running the structlog example [here](https://starlite-api.github.io/starlite/1.48/usage/0-the-starlite-app/?h=structlog#using-structlog) results in an internal server error as of v1.45 (I think)
```
{"status_code":500,"detail":"TypeError(\"encode_json() got an unexpected keyword argument 'default'\")"}
```
The default encoder was changed [here](https://github.com/starlite-api/starlite/pull/891/files#diff-6b2294023eb60948cd9f742e4930255a72254daf74f9e3157df8d479a685b123R213)
Which doesn't accept the `default` argument given [here](https://github.com/hynek/structlog/blob/main/src/structlog/processors.py#L318)
I'm not sure if it's a structlog problem or a starlite problem.
Maybe the solution is to rename `enc_hook` to `default` so that it mirrors the signature of `json.dumps`? I'm not sure, to be honest.
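As a minimal, illustrative sketch of the mismatch behind that traceback (this is not the actual starlite or structlog code; only the keyword names are taken from the report):

```python
# Illustrative sketch: the renderer forwards `default=`, but the serializer
# only accepts `enc_hook=`, so the call blows up with a TypeError.
import json

def encode_json(obj, enc_hook=None):  # keyword mirrors the report, not the real helper
    return json.dumps(obj).encode()

def render(serializer, event_dict):
    # per the report, structlog's JSONRenderer passes `default=` through to its serializer
    return serializer(event_dict, default=str)

print(render(json.dumps, {"event": "ok"}))   # json.dumps accepts `default`
try:
    render(encode_json, {"event": "boom"})   # unexpected keyword argument 'default'
except TypeError as exc:
    print(exc)
```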
**To Reproduce**
Run the structlog example in the documentation:
```python
from starlite import Starlite, StructLoggingConfig, Request, get
@get("/")
def my_router_handler(request: Request) -> None:
request.logger.info("inside a request")
return None
logging_config = StructLoggingConfig()
app = Starlite(route_handlers=[my_router_handler], logging_config=logging_config)
```
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of starlite/utils/serialization.py]
1 from pathlib import PurePosixPath
2 from typing import TYPE_CHECKING, Any, Callable, Dict, Optional, Union
3
4 import msgspec
5 from pydantic import (
6 AnyUrl,
7 BaseModel,
8 ByteSize,
9 ConstrainedBytes,
10 ConstrainedDate,
11 ConstrainedDecimal,
12 ConstrainedFloat,
13 ConstrainedFrozenSet,
14 ConstrainedInt,
15 ConstrainedList,
16 ConstrainedSet,
17 ConstrainedStr,
18 EmailStr,
19 NameEmail,
20 PaymentCardNumber,
21 SecretField,
22 StrictBool,
23 )
24 from pydantic.color import Color
25
26 if TYPE_CHECKING:
27 from starlite.types import TypeEncodersMap
28
29 DEFAULT_TYPE_ENCODERS: "TypeEncodersMap" = {
30 PurePosixPath: str,
31 # pydantic specific types
32 BaseModel: lambda m: m.dict(),
33 ByteSize: lambda b: b.real,
34 EmailStr: str,
35 NameEmail: str,
36 Color: str,
37 AnyUrl: str,
38 SecretField: str,
39 ConstrainedInt: int,
40 ConstrainedFloat: float,
41 ConstrainedStr: str,
42 ConstrainedBytes: lambda b: b.decode("utf-8"),
43 ConstrainedList: list,
44 ConstrainedSet: set,
45 ConstrainedFrozenSet: frozenset,
46 ConstrainedDecimal: float,
47 ConstrainedDate: lambda d: d.isoformat(),
48 PaymentCardNumber: str,
49 StrictBool: int, # pydantic compatibility
50 }
51
52
53 def default_serializer(value: Any, type_encoders: Optional[Dict[Any, Callable[[Any], Any]]] = None) -> Any:
54 """Transform values non-natively supported by `msgspec`
55
56 Args:
57 value: A value to serialize#
58 type_encoders: Mapping of types to callables to transforming types
59 Returns:
60 A serialized value
61 Raises:
62 TypeError: if value is not supported
63 """
64 if type_encoders is None:
65 type_encoders = DEFAULT_TYPE_ENCODERS
66 for base in value.__class__.__mro__[:-1]:
67 try:
68 encoder = type_encoders[base]
69 except KeyError:
70 continue
71 return encoder(value)
72 raise TypeError(f"Unsupported type: {type(value)!r}")
73
74
75 def dec_hook(type_: Any, value: Any) -> Any: # pragma: no cover
76 """Transform values non-natively supported by `msgspec`
77
78 Args:
79 type_: Encountered type
80 value: Value to coerce
81
82 Returns:
83 A `msgspec`-supported type
84 """
85 if issubclass(type_, BaseModel):
86 return type_(**value)
87 raise TypeError(f"Unsupported type: {type(value)!r}")
88
89
90 _msgspec_json_encoder = msgspec.json.Encoder(enc_hook=default_serializer)
91 _msgspec_json_decoder = msgspec.json.Decoder(dec_hook=dec_hook)
92 _msgspec_msgpack_encoder = msgspec.msgpack.Encoder(enc_hook=default_serializer)
93 _msgspec_msgpack_decoder = msgspec.msgpack.Decoder(dec_hook=dec_hook)
94
95
96 def encode_json(obj: Any, enc_hook: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:
97 """Encode a value into JSON.
98
99 Args:
100 obj: Value to encode
101 enc_hook: Optional callable to support non-natively supported types
102
103 Returns:
104 JSON as bytes
105 """
106 if enc_hook is None or enc_hook is default_serializer:
107 return _msgspec_json_encoder.encode(obj)
108 return msgspec.json.encode(obj, enc_hook=enc_hook)
109
110
111 def decode_json(raw: Union[str, bytes]) -> Any:
112 """Decode a JSON string/bytes into an object.
113
114 Args:
115 raw: Value to decode
116
117 Returns:
118 An object
119 """
120 return _msgspec_json_decoder.decode(raw)
121
122
123 def encode_msgpack(obj: Any, enc_hook: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:
124 """Encode a value into MessagePack.
125
126 Args:
127 obj: Value to encode
128 enc_hook: Optional callable to support non-natively supported types
129
130 Returns:
131 MessagePack as bytes
132 """
133 if enc_hook is None or enc_hook is default_serializer:
134 return _msgspec_msgpack_encoder.encode(obj)
135 return msgspec.msgpack.encode(obj, enc_hook=enc_hook)
136
137
138 def decode_msgpack(raw: bytes) -> Any:
139 """Decode a MessagePack string/bytes into an object.
140
141 Args:
142 raw: Value to decode
143
144 Returns:
145 An object
146 """
147 return _msgspec_msgpack_decoder.decode(raw)
148
[end of starlite/utils/serialization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/utils/serialization.py b/starlite/utils/serialization.py
--- a/starlite/utils/serialization.py
+++ b/starlite/utils/serialization.py
@@ -93,19 +93,19 @@
_msgspec_msgpack_decoder = msgspec.msgpack.Decoder(dec_hook=dec_hook)
-def encode_json(obj: Any, enc_hook: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:
+def encode_json(obj: Any, default: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:
"""Encode a value into JSON.
Args:
obj: Value to encode
- enc_hook: Optional callable to support non-natively supported types
+ default: Optional callable to support non-natively supported types.
Returns:
JSON as bytes
"""
- if enc_hook is None or enc_hook is default_serializer:
+ if default is None or default is default_serializer:
return _msgspec_json_encoder.encode(obj)
- return msgspec.json.encode(obj, enc_hook=enc_hook)
+ return msgspec.json.encode(obj, enc_hook=default)
def decode_json(raw: Union[str, bytes]) -> Any:
| {"golden_diff": "diff --git a/starlite/utils/serialization.py b/starlite/utils/serialization.py\n--- a/starlite/utils/serialization.py\n+++ b/starlite/utils/serialization.py\n@@ -93,19 +93,19 @@\n _msgspec_msgpack_decoder = msgspec.msgpack.Decoder(dec_hook=dec_hook)\n \n \n-def encode_json(obj: Any, enc_hook: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:\n+def encode_json(obj: Any, default: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:\n \"\"\"Encode a value into JSON.\n \n Args:\n obj: Value to encode\n- enc_hook: Optional callable to support non-natively supported types\n+ default: Optional callable to support non-natively supported types.\n \n Returns:\n JSON as bytes\n \"\"\"\n- if enc_hook is None or enc_hook is default_serializer:\n+ if default is None or default is default_serializer:\n return _msgspec_json_encoder.encode(obj)\n- return msgspec.json.encode(obj, enc_hook=enc_hook)\n+ return msgspec.json.encode(obj, enc_hook=default)\n \n \n def decode_json(raw: Union[str, bytes]) -> Any:\n", "issue": "Bug: Structlog example no longer works\nMe again. Sorry \ud83d\ude48 \r\n\r\n**Describe the bug**\r\nRunning the structlog example [here](https://starlite-api.github.io/starlite/1.48/usage/0-the-starlite-app/?h=structlog#using-structlog) results in an internal server error as of v1.45 (I think)\r\n\r\n```\r\n{\"status_code\":500,\"detail\":\"TypeError(\\\"encode_json() got an unexpected keyword argument 'default'\\\")\"}\r\n```\r\n\r\nThe default encoder was changed [here](https://github.com/starlite-api/starlite/pull/891/files#diff-6b2294023eb60948cd9f742e4930255a72254daf74f9e3157df8d479a685b123R213)\r\nWhich doesn't accept the `default` argument given [here](https://github.com/hynek/structlog/blob/main/src/structlog/processors.py#L318)\r\n\r\nI'm not sure if it's a structlog problem or a starlite problem.\r\n\r\nMaybe the solution is to rename `enc_hook` to `default` then it mirrors the signature of `json.dumps`? 
I'm not sure, to be honest.\r\n\r\n\r\n**To Reproduce**\r\nRun the structlog example in the documentation:\r\n```python\r\nfrom starlite import Starlite, StructLoggingConfig, Request, get\r\n\r\n\r\n@get(\"/\")\r\ndef my_router_handler(request: Request) -> None:\r\n request.logger.info(\"inside a request\")\r\n return None\r\n\r\n\r\nlogging_config = StructLoggingConfig()\r\n\r\napp = Starlite(route_handlers=[my_router_handler], logging_config=logging_config)\r\n```\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "from pathlib import PurePosixPath\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, Optional, Union\n\nimport msgspec\nfrom pydantic import (\n AnyUrl,\n BaseModel,\n ByteSize,\n ConstrainedBytes,\n ConstrainedDate,\n ConstrainedDecimal,\n ConstrainedFloat,\n ConstrainedFrozenSet,\n ConstrainedInt,\n ConstrainedList,\n ConstrainedSet,\n ConstrainedStr,\n EmailStr,\n NameEmail,\n PaymentCardNumber,\n SecretField,\n StrictBool,\n)\nfrom pydantic.color import Color\n\nif TYPE_CHECKING:\n from starlite.types import TypeEncodersMap\n\nDEFAULT_TYPE_ENCODERS: \"TypeEncodersMap\" = {\n PurePosixPath: str,\n # pydantic specific types\n BaseModel: lambda m: m.dict(),\n ByteSize: lambda b: b.real,\n EmailStr: str,\n NameEmail: str,\n Color: str,\n AnyUrl: str,\n SecretField: str,\n ConstrainedInt: int,\n ConstrainedFloat: float,\n ConstrainedStr: str,\n ConstrainedBytes: lambda b: b.decode(\"utf-8\"),\n ConstrainedList: list,\n ConstrainedSet: set,\n ConstrainedFrozenSet: frozenset,\n ConstrainedDecimal: float,\n ConstrainedDate: lambda d: d.isoformat(),\n PaymentCardNumber: str,\n StrictBool: int, # pydantic compatibility\n}\n\n\ndef default_serializer(value: Any, type_encoders: Optional[Dict[Any, Callable[[Any], Any]]] = None) -> Any:\n \"\"\"Transform values non-natively supported by `msgspec`\n\n Args:\n value: A value to serialize#\n type_encoders: Mapping of types to callables to transforming types\n Returns:\n A serialized value\n Raises:\n TypeError: if value is not supported\n \"\"\"\n if type_encoders is None:\n type_encoders = DEFAULT_TYPE_ENCODERS\n for base in value.__class__.__mro__[:-1]:\n try:\n encoder = type_encoders[base]\n except KeyError:\n continue\n return encoder(value)\n raise TypeError(f\"Unsupported type: {type(value)!r}\")\n\n\ndef dec_hook(type_: Any, value: Any) -> Any: # pragma: no cover\n \"\"\"Transform values non-natively supported by `msgspec`\n\n Args:\n type_: Encountered type\n value: Value to coerce\n\n Returns:\n A `msgspec`-supported type\n \"\"\"\n if issubclass(type_, BaseModel):\n return type_(**value)\n raise TypeError(f\"Unsupported type: {type(value)!r}\")\n\n\n_msgspec_json_encoder = msgspec.json.Encoder(enc_hook=default_serializer)\n_msgspec_json_decoder = msgspec.json.Decoder(dec_hook=dec_hook)\n_msgspec_msgpack_encoder = msgspec.msgpack.Encoder(enc_hook=default_serializer)\n_msgspec_msgpack_decoder = msgspec.msgpack.Decoder(dec_hook=dec_hook)\n\n\ndef encode_json(obj: Any, enc_hook: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:\n \"\"\"Encode a value into JSON.\n\n Args:\n obj: Value to encode\n enc_hook: Optional callable to support non-natively supported types\n\n Returns:\n JSON as bytes\n \"\"\"\n if enc_hook is None or enc_hook is default_serializer:\n return _msgspec_json_encoder.encode(obj)\n return msgspec.json.encode(obj, enc_hook=enc_hook)\n\n\ndef decode_json(raw: Union[str, bytes]) -> Any:\n \"\"\"Decode a JSON string/bytes into an 
object.\n\n Args:\n raw: Value to decode\n\n Returns:\n An object\n \"\"\"\n return _msgspec_json_decoder.decode(raw)\n\n\ndef encode_msgpack(obj: Any, enc_hook: Optional[Callable[[Any], Any]] = default_serializer) -> bytes:\n \"\"\"Encode a value into MessagePack.\n\n Args:\n obj: Value to encode\n enc_hook: Optional callable to support non-natively supported types\n\n Returns:\n MessagePack as bytes\n \"\"\"\n if enc_hook is None or enc_hook is default_serializer:\n return _msgspec_msgpack_encoder.encode(obj)\n return msgspec.msgpack.encode(obj, enc_hook=enc_hook)\n\n\ndef decode_msgpack(raw: bytes) -> Any:\n \"\"\"Decode a MessagePack string/bytes into an object.\n\n Args:\n raw: Value to decode\n\n Returns:\n An object\n \"\"\"\n return _msgspec_msgpack_decoder.decode(raw)\n", "path": "starlite/utils/serialization.py"}]} | 2,254 | 257 |
gh_patches_debug_8798 | rasdani/github-patches | git_diff | apache__airflow-16392 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider and add common sensitive names
**Description**
Since sensitive information in the connection object (specifically the extras field) is now being masked based on sensitive key names, we should consider adding some common sensitive key names.
`private_key` from [ssh connection](https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html) is an example.
**Use case / motivation**
Extras field used to be blocked out entirely before the sensitive value masking feature (#15599).
[Before in 2.0.2](https://github.com/apache/airflow/blob/2.0.2/airflow/hooks/base.py#L78) and [after in 2.1.0](https://github.com/apache/airflow/blob/2.1.0/airflow/hooks/base.py#L78).
The extras field containing sensitive information is now shown unless the key contains sensitive names.
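As a rough, illustrative sketch (not the actual `secrets_masker.py` code) of why the key name matters — the check is essentially a substring match against a set of sensitive names, so `private_key` is not caught today:

```python
# Illustrative sketch of the name-based check, simplified for the example.
SENSITIVE_NAMES = {"password", "secret", "passwd", "authorization", "api_key", "apikey", "access_token"}

def should_hide(name: str) -> bool:
    name = name.strip().lower()
    return any(s in name for s in SENSITIVE_NAMES)

print(should_hide("password"))     # True  -> value is masked
print(should_hide("private_key"))  # False -> value would be shown as-is today
```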
**Are you willing to submit a PR?**
@ashb has expressed interest in adding this.
</issue>
<code>
[start of airflow/utils/log/secrets_masker.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 """Mask sensitive information from logs"""
18 import collections
19 import io
20 import logging
21 import re
22 from typing import TYPE_CHECKING, Iterable, Optional, Set, TypeVar, Union
23
24 from airflow.compat.functools import cache, cached_property
25
26 if TYPE_CHECKING:
27 from airflow.typing_compat import RePatternType
28
29 RedactableItem = TypeVar('RedactableItem')
30
31
32 log = logging.getLogger(__name__)
33
34
35 DEFAULT_SENSITIVE_FIELDS = frozenset(
36 {
37 'password',
38 'secret',
39 'passwd',
40 'authorization',
41 'api_key',
42 'apikey',
43 'access_token',
44 }
45 )
46 """Names of fields (Connection extra, Variable key name etc.) that are deemed sensitive"""
47
48
49 @cache
50 def get_sensitive_variables_fields():
51 """Get comma-separated sensitive Variable Fields from airflow.cfg."""
52 from airflow.configuration import conf
53
54 sensitive_fields = DEFAULT_SENSITIVE_FIELDS.copy()
55 sensitive_variable_fields = conf.get('core', 'sensitive_var_conn_names')
56 if sensitive_variable_fields:
57 sensitive_fields |= frozenset({field.strip() for field in sensitive_variable_fields.split(',')})
58 return sensitive_fields
59
60
61 def should_hide_value_for_key(name):
62 """Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden"""
63 from airflow import settings
64
65 if name and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:
66 name = name.strip().lower()
67 return any(s in name for s in get_sensitive_variables_fields())
68 return False
69
70
71 def mask_secret(secret: Union[str, dict, Iterable], name: str = None) -> None:
72 """
73 Mask a secret from appearing in the task logs.
74
75 If ``name`` is provided, then it will only be masked if the name matches
76 one of the configured "sensitive" names.
77
78 If ``secret`` is a dict or a iterable (excluding str) then it will be
79 recursively walked and keys with sensitive names will be hidden.
80 """
81 # Delay import
82 from airflow import settings
83
84 # Filtering all log messages is not a free process, so we only do it when
85 # running tasks
86 if not settings.MASK_SECRETS_IN_LOGS or not secret:
87 return
88
89 _secrets_masker().add_mask(secret, name)
90
91
92 def redact(value: "RedactableItem", name: str = None) -> "RedactableItem":
93 """Redact any secrets found in ``value``."""
94 return _secrets_masker().redact(value, name)
95
96
97 @cache
98 def _secrets_masker() -> "SecretsMasker":
99
100 for flt in logging.getLogger('airflow.task').filters:
101 if isinstance(flt, SecretsMasker):
102 return flt
103 raise RuntimeError("No SecretsMasker found!")
104
105
106 class SecretsMasker(logging.Filter):
107 """Redact secrets from logs"""
108
109 replacer: Optional["RePatternType"] = None
110 patterns: Set[str]
111
112 ALREADY_FILTERED_FLAG = "__SecretsMasker_filtered"
113
114 def __init__(self):
115 super().__init__()
116 self.patterns = set()
117
118 @cached_property
119 def _record_attrs_to_ignore(self) -> Iterable[str]:
120 # Doing log.info(..., extra={'foo': 2}) sets extra properties on
121 # record, i.e. record.foo. And we need to filter those too. Fun
122 #
123 # Create a record, and look at what attributes are on it, and ignore
124 # all the default ones!
125
126 record = logging.getLogRecordFactory()(
127 # name, level, pathname, lineno, msg, args, exc_info, func=None, sinfo=None,
128 "x",
129 logging.INFO,
130 __file__,
131 1,
132 "",
133 tuple(),
134 exc_info=None,
135 func="funcname",
136 )
137 return frozenset(record.__dict__).difference({'msg', 'args'})
138
139 def filter(self, record) -> bool:
140 if self.ALREADY_FILTERED_FLAG in record.__dict__:
141 # Filters are attached to multiple handlers and logs, keep a
142 # "private" flag that stops us needing to process it more than once
143 return True
144
145 if self.replacer:
146 for k, v in record.__dict__.items():
147 if k in self._record_attrs_to_ignore:
148 continue
149 record.__dict__[k] = self.redact(v)
150 if record.exc_info and record.exc_info[1] is not None:
151 exc = record.exc_info[1]
152 # I'm not sure if this is a good idea!
153 exc.args = (self.redact(v) for v in exc.args)
154 record.__dict__[self.ALREADY_FILTERED_FLAG] = True
155
156 return True
157
158 def _redact_all(self, item: "RedactableItem") -> "RedactableItem":
159 if isinstance(item, dict):
160 return {dict_key: self._redact_all(subval) for dict_key, subval in item.items()}
161 elif isinstance(item, str):
162 return '***'
163 elif isinstance(item, (tuple, set)):
164 # Turn set in to tuple!
165 return tuple(self._redact_all(subval) for subval in item)
166 elif isinstance(item, Iterable):
167 return list(self._redact_all(subval) for subval in item)
168 else:
169 return item
170
171 # pylint: disable=too-many-return-statements
172 def redact(self, item: "RedactableItem", name: str = None) -> "RedactableItem":
173 """
174 Redact an any secrets found in ``item``, if it is a string.
175
176 If ``name`` is given, and it's a "sensitive" name (see
177 :func:`should_hide_value_for_key`) then all string values in the item
178 is redacted.
179
180 """
181 try:
182 if name and should_hide_value_for_key(name):
183 return self._redact_all(item)
184
185 if isinstance(item, dict):
186 return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}
187 elif isinstance(item, str):
188 if self.replacer:
189 # We can't replace specific values, but the key-based redacting
190 # can still happen, so we can't short-circuit, we need to walk
191 # the structure.
192 return self.replacer.sub('***', item)
193 return item
194 elif isinstance(item, (tuple, set)):
195 # Turn set in to tuple!
196 return tuple(self.redact(subval) for subval in item)
197 elif isinstance(item, io.IOBase):
198 return item
199 elif isinstance(item, Iterable):
200 return list(self.redact(subval) for subval in item)
201 else:
202 return item
203 except Exception as e: # pylint: disable=broad-except
204 log.warning(
205 "Unable to redact %r, please report this via <https://github.com/apache/airflow/issues>. "
206 "Error was: %s: %s",
207 item,
208 type(e).__name__,
209 str(e),
210 )
211 return item
212
213 # pylint: enable=too-many-return-statements
214
215 def add_mask(self, secret: Union[str, dict, Iterable], name: str = None):
216 """Add a new secret to be masked to this filter instance."""
217 if isinstance(secret, dict):
218 for k, v in secret.items():
219 self.add_mask(v, k)
220 elif isinstance(secret, str):
221 if not secret:
222 return
223 pattern = re.escape(secret)
224 if pattern not in self.patterns and (not name or should_hide_value_for_key(name)):
225 self.patterns.add(pattern)
226 self.replacer = re.compile('|'.join(self.patterns))
227 elif isinstance(secret, collections.abc.Iterable):
228 for v in secret:
229 self.add_mask(v, name)
230
[end of airflow/utils/log/secrets_masker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/utils/log/secrets_masker.py b/airflow/utils/log/secrets_masker.py
--- a/airflow/utils/log/secrets_masker.py
+++ b/airflow/utils/log/secrets_masker.py
@@ -34,13 +34,15 @@
DEFAULT_SENSITIVE_FIELDS = frozenset(
{
- 'password',
- 'secret',
- 'passwd',
- 'authorization',
+ 'access_token',
'api_key',
'apikey',
- 'access_token',
+ 'authorization',
+ 'passphrase',
+ 'passwd',
+ 'password',
+ 'private_key',
+ 'secret',
}
)
"""Names of fields (Connection extra, Variable key name etc.) that are deemed sensitive"""
| {"golden_diff": "diff --git a/airflow/utils/log/secrets_masker.py b/airflow/utils/log/secrets_masker.py\n--- a/airflow/utils/log/secrets_masker.py\n+++ b/airflow/utils/log/secrets_masker.py\n@@ -34,13 +34,15 @@\n \n DEFAULT_SENSITIVE_FIELDS = frozenset(\n {\n- 'password',\n- 'secret',\n- 'passwd',\n- 'authorization',\n+ 'access_token',\n 'api_key',\n 'apikey',\n- 'access_token',\n+ 'authorization',\n+ 'passphrase',\n+ 'passwd',\n+ 'password',\n+ 'private_key',\n+ 'secret',\n }\n )\n \"\"\"Names of fields (Connection extra, Variable key name etc.) that are deemed sensitive\"\"\"\n", "issue": "Consider and add common sensitive names\n**Description** \r\n\r\nSince sensitive informations in the connection object (specifically the extras field) are now being masked based on sensitive key names, we should consider adding some common sensitive key names.\r\n\r\n`private_key` from [ssh connection](https://airflow.apache.org/docs/apache-airflow-providers-ssh/stable/connections/ssh.html) is an examples.\r\n\r\n**Use case / motivation**\r\n\r\nExtras field used to be blocked out entirely before the sensitive value masking feature (#15599).\r\n\r\n[Before in 2.0.2](https://github.com/apache/airflow/blob/2.0.2/airflow/hooks/base.py#L78\r\n) and [after in 2.1.0](https://github.com/apache/airflow/blob/2.1.0/airflow/hooks/base.py#L78\r\n).\r\n\r\nExtras field containing sensitive information now shown unless the key contains sensitive names.\r\n\r\n**Are you willing to submit a PR?** \r\n\r\n@ashb has expressed interest in adding this.\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Mask sensitive information from logs\"\"\"\nimport collections\nimport io\nimport logging\nimport re\nfrom typing import TYPE_CHECKING, Iterable, Optional, Set, TypeVar, Union\n\nfrom airflow.compat.functools import cache, cached_property\n\nif TYPE_CHECKING:\n from airflow.typing_compat import RePatternType\n\n RedactableItem = TypeVar('RedactableItem')\n\n\nlog = logging.getLogger(__name__)\n\n\nDEFAULT_SENSITIVE_FIELDS = frozenset(\n {\n 'password',\n 'secret',\n 'passwd',\n 'authorization',\n 'api_key',\n 'apikey',\n 'access_token',\n }\n)\n\"\"\"Names of fields (Connection extra, Variable key name etc.) 
that are deemed sensitive\"\"\"\n\n\n@cache\ndef get_sensitive_variables_fields():\n \"\"\"Get comma-separated sensitive Variable Fields from airflow.cfg.\"\"\"\n from airflow.configuration import conf\n\n sensitive_fields = DEFAULT_SENSITIVE_FIELDS.copy()\n sensitive_variable_fields = conf.get('core', 'sensitive_var_conn_names')\n if sensitive_variable_fields:\n sensitive_fields |= frozenset({field.strip() for field in sensitive_variable_fields.split(',')})\n return sensitive_fields\n\n\ndef should_hide_value_for_key(name):\n \"\"\"Should the value for this given name (Variable name, or key in conn.extra_dejson) be hidden\"\"\"\n from airflow import settings\n\n if name and settings.HIDE_SENSITIVE_VAR_CONN_FIELDS:\n name = name.strip().lower()\n return any(s in name for s in get_sensitive_variables_fields())\n return False\n\n\ndef mask_secret(secret: Union[str, dict, Iterable], name: str = None) -> None:\n \"\"\"\n Mask a secret from appearing in the task logs.\n\n If ``name`` is provided, then it will only be masked if the name matches\n one of the configured \"sensitive\" names.\n\n If ``secret`` is a dict or a iterable (excluding str) then it will be\n recursively walked and keys with sensitive names will be hidden.\n \"\"\"\n # Delay import\n from airflow import settings\n\n # Filtering all log messages is not a free process, so we only do it when\n # running tasks\n if not settings.MASK_SECRETS_IN_LOGS or not secret:\n return\n\n _secrets_masker().add_mask(secret, name)\n\n\ndef redact(value: \"RedactableItem\", name: str = None) -> \"RedactableItem\":\n \"\"\"Redact any secrets found in ``value``.\"\"\"\n return _secrets_masker().redact(value, name)\n\n\n@cache\ndef _secrets_masker() -> \"SecretsMasker\":\n\n for flt in logging.getLogger('airflow.task').filters:\n if isinstance(flt, SecretsMasker):\n return flt\n raise RuntimeError(\"No SecretsMasker found!\")\n\n\nclass SecretsMasker(logging.Filter):\n \"\"\"Redact secrets from logs\"\"\"\n\n replacer: Optional[\"RePatternType\"] = None\n patterns: Set[str]\n\n ALREADY_FILTERED_FLAG = \"__SecretsMasker_filtered\"\n\n def __init__(self):\n super().__init__()\n self.patterns = set()\n\n @cached_property\n def _record_attrs_to_ignore(self) -> Iterable[str]:\n # Doing log.info(..., extra={'foo': 2}) sets extra properties on\n # record, i.e. record.foo. And we need to filter those too. 
Fun\n #\n # Create a record, and look at what attributes are on it, and ignore\n # all the default ones!\n\n record = logging.getLogRecordFactory()(\n # name, level, pathname, lineno, msg, args, exc_info, func=None, sinfo=None,\n \"x\",\n logging.INFO,\n __file__,\n 1,\n \"\",\n tuple(),\n exc_info=None,\n func=\"funcname\",\n )\n return frozenset(record.__dict__).difference({'msg', 'args'})\n\n def filter(self, record) -> bool:\n if self.ALREADY_FILTERED_FLAG in record.__dict__:\n # Filters are attached to multiple handlers and logs, keep a\n # \"private\" flag that stops us needing to process it more than once\n return True\n\n if self.replacer:\n for k, v in record.__dict__.items():\n if k in self._record_attrs_to_ignore:\n continue\n record.__dict__[k] = self.redact(v)\n if record.exc_info and record.exc_info[1] is not None:\n exc = record.exc_info[1]\n # I'm not sure if this is a good idea!\n exc.args = (self.redact(v) for v in exc.args)\n record.__dict__[self.ALREADY_FILTERED_FLAG] = True\n\n return True\n\n def _redact_all(self, item: \"RedactableItem\") -> \"RedactableItem\":\n if isinstance(item, dict):\n return {dict_key: self._redact_all(subval) for dict_key, subval in item.items()}\n elif isinstance(item, str):\n return '***'\n elif isinstance(item, (tuple, set)):\n # Turn set in to tuple!\n return tuple(self._redact_all(subval) for subval in item)\n elif isinstance(item, Iterable):\n return list(self._redact_all(subval) for subval in item)\n else:\n return item\n\n # pylint: disable=too-many-return-statements\n def redact(self, item: \"RedactableItem\", name: str = None) -> \"RedactableItem\":\n \"\"\"\n Redact an any secrets found in ``item``, if it is a string.\n\n If ``name`` is given, and it's a \"sensitive\" name (see\n :func:`should_hide_value_for_key`) then all string values in the item\n is redacted.\n\n \"\"\"\n try:\n if name and should_hide_value_for_key(name):\n return self._redact_all(item)\n\n if isinstance(item, dict):\n return {dict_key: self.redact(subval, dict_key) for dict_key, subval in item.items()}\n elif isinstance(item, str):\n if self.replacer:\n # We can't replace specific values, but the key-based redacting\n # can still happen, so we can't short-circuit, we need to walk\n # the structure.\n return self.replacer.sub('***', item)\n return item\n elif isinstance(item, (tuple, set)):\n # Turn set in to tuple!\n return tuple(self.redact(subval) for subval in item)\n elif isinstance(item, io.IOBase):\n return item\n elif isinstance(item, Iterable):\n return list(self.redact(subval) for subval in item)\n else:\n return item\n except Exception as e: # pylint: disable=broad-except\n log.warning(\n \"Unable to redact %r, please report this via <https://github.com/apache/airflow/issues>. \"\n \"Error was: %s: %s\",\n item,\n type(e).__name__,\n str(e),\n )\n return item\n\n # pylint: enable=too-many-return-statements\n\n def add_mask(self, secret: Union[str, dict, Iterable], name: str = None):\n \"\"\"Add a new secret to be masked to this filter instance.\"\"\"\n if isinstance(secret, dict):\n for k, v in secret.items():\n self.add_mask(v, k)\n elif isinstance(secret, str):\n if not secret:\n return\n pattern = re.escape(secret)\n if pattern not in self.patterns and (not name or should_hide_value_for_key(name)):\n self.patterns.add(pattern)\n self.replacer = re.compile('|'.join(self.patterns))\n elif isinstance(secret, collections.abc.Iterable):\n for v in secret:\n self.add_mask(v, name)\n", "path": "airflow/utils/log/secrets_masker.py"}]} | 3,212 | 172 |
gh_patches_debug_427 | rasdani/github-patches | git_diff | python__python-docs-es-1787 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translate 'using/unix.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/using/unix.html once translated.
Meanwhile, the English version is shown.
Current stats for `using/unix.po`:
* Fuzzy: 1
* Percent translated: 88.9%
* Entries: 40 / 45
* Untranslated: 5
Please comment here if you want this file to be assigned to you, and a member will assign it to you as soon as possible so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
</issue>
<code>
[start of scripts/translate.py]
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":rfc:`[^`]+`",
46 ":doc:`[^`]+`",
47 ":manpage:`[^`]+`",
48 ":sup:`[^`]+`",
49 "``[^`]+``",
50 "`[^`]+`__",
51 "`[^`]+`_",
52 "\*\*[^\*]+\*\*", # bold text between **
53 "\*[^\*]+\*", # italic text between *
54 ]
55
56 _exps = [re.compile(e) for e in _patterns]
57
58 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
59 """
60 Parameters:
61 string containing the text to translate
62
63 Returns:
64 dictionary containing all the placeholder text as keys
65 and the correct value.
66 """
67
68 i = 0
69 d: Dict[str, str] = {}
70 for exp in _exps:
71 matches = exp.findall(s)
72 if DEBUG:
73 print(exp, matches)
74 for match in matches:
75 ph = f"XASDF{str(i).zfill(2)}"
76 s = s.replace(match, ph)
77 if ph in d and VERBOSE:
78 print(f"Error: {ph} is already in the dictionary")
79 print("new", match)
80 print("old", d[ph])
81 d[ph] = match
82 i += 1
83 return d, s
84
85
86 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
87 for ph, value in placeholders.items():
88 translated_text = translated_text.replace(ph, value)
89 if DEBUG:
90 print(ph, value)
91 print(translated_text)
92 return translated_text
93
94
95 if __name__ == "__main__":
96 filename = sys.argv[1]
97 if not os.path.isfile(filename):
98 print(f"File not found: '{filename}'")
99 sys.exit(-1)
100
101 po = polib.pofile(filename)
102 translator = GoogleTranslator(source="en", target="es")
103
104 for entry in po:
105 # If the entry has already a translation, skip.
106 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
107 continue
108
109 print("\nEN|", entry.msgid)
110 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
111 if VERBOSE:
112 print(temp_text)
113 print(placeholders)
114
115 # Translate the temporary text without sphinx statements
116 translated_text = translator.translate(temp_text)
117
118 # Recover sphinx statements
119 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
120 print("ES|", real_text)
121
122 # Replace the po file translated entry
123 entry.msgstr = real_text
124
125 # Save the file after all the entries are translated
126 po.save()
127
[end of scripts/translate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -44,6 +44,7 @@
":RFC:`[^`]+`",
":rfc:`[^`]+`",
":doc:`[^`]+`",
+ ":source:`[^`]+`",
":manpage:`[^`]+`",
":sup:`[^`]+`",
"``[^`]+``",
| {"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -44,6 +44,7 @@\n \":RFC:`[^`]+`\",\n \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n+ \":source:`[^`]+`\",\n \":manpage:`[^`]+`\",\n \":sup:`[^`]+`\",\n \"``[^`]+``\",\n", "issue": "Translate 'using/unix.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/using/unix.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `using/unix.po`:\n\n* Fuzzy: 1\n* Percent translated: 88.9%\n* Entries: 40 / 45\n* Untranslated: 5\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":rfc:`[^`]+`\",\n \":doc:`[^`]+`\",\n \":manpage:`[^`]+`\",\n \":sup:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*[^\\*]+\\*\\*\", # bold text between **\n \"\\*[^\\*]+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n 
print(placeholders)\n\n # Translate the temporary text without sphinx statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]} | 1,887 | 105 |
gh_patches_debug_32832 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-426 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't display redundant form field labels
- Don't display redundant field labels
- Retain field labels for screen readers
</issue>
<code>
[start of src/registrar/templatetags/field_helpers.py]
1 """Custom field helpers for our inputs."""
2 import re
3
4 from django import template
5
6 register = template.Library()
7
8
9 @register.inclusion_tag("includes/input_with_errors.html", takes_context=True)
10 def input_with_errors(context, field=None): # noqa: C901
11 """Make an input field along with error handling.
12
13 Args:
14 field: The field instance.
15
16 In addition to the explicit `field` argument, this inclusion_tag takes the
17 following "widget-tweak-esque" parameters from the surrounding context.
18
19 Context args:
20 add_class: append to input element's `class` attribute
21 add_error_class: like `add_class` but only if field.errors is not empty
22 add_required_class: like `add_class` but only if field is required
23 add_label_class: append to input element's label's `class` attribute
24 add_group_class: append to input element's surrounding tag's `class` attribute
25 attr_* - adds or replaces any single html attribute for the input
26 add_error_attr_* - like `attr_*` but only if field.errors is not empty
27
28 Example usage:
29 ```
30 {% for form in forms.0 %}
31 {% with add_class="usa-input--medium" %}
32 {% with attr_required=True attr_disabled=False %}
33 {% input_with_errors form.street_address1 %}
34 {% endwith %}
35 {% endwith %}
36 {% endfor }
37
38 There are a few edge cases to keep in mind:
39 - a "maxlength" attribute will cause the input to use USWDS Character counter
40 - the field's `use_fieldset` controls whether the output is label/field or
41 fieldset/legend/field
42 - checkbox label styling is different (this is handled, don't worry about it)
43 """
44 context = context.flatten()
45 context["field"] = field
46
47 # get any attributes specified in the field's definition
48 attrs = dict(field.field.widget.attrs)
49
50 # these will be converted to CSS strings
51 classes = []
52 label_classes = []
53 group_classes = []
54
55 # this will be converted to an attribute string
56 described_by = []
57
58 if "class" in attrs:
59 classes.append(attrs.pop("class"))
60
61 # parse context for field attributes and classes
62 # ---
63 # here we loop through all items in the context dictionary
64 # (this is the context which was being used to render the
65 # outer template in which this {% input_with_errors %} appeared!)
66 # and look for "magic" keys -- these are used to modify the
67 # appearance and behavior of the final HTML
68 for key, value in context.items():
69 if key.startswith("attr_"):
70 attr_name = re.sub("_", "-", key[5:])
71 attrs[attr_name] = value
72 elif key.startswith("add_error_attr_") and field.errors:
73 attr_name = re.sub("_", "-", key[15:])
74 attrs[attr_name] = value
75
76 elif key == "add_class":
77 classes.append(value)
78 elif key == "add_required_class" and field.required:
79 classes.append(value)
80 elif key == "add_error_class" and field.errors:
81 classes.append(value)
82
83 elif key == "add_label_class":
84 label_classes.append(value)
85
86 elif key == "add_group_class":
87 group_classes.append(value)
88
89 attrs["id"] = field.auto_id
90
91 # do some work for various edge cases
92
93 if "maxlength" in attrs:
94 # associate the field programmatically with its hint text
95 described_by.append(f"{attrs['id']}__message")
96
97 if field.use_fieldset:
98 context["label_tag"] = "legend"
99 else:
100 context["label_tag"] = "label"
101
102 if field.use_fieldset:
103 label_classes.append("usa-legend")
104
105 if field.widget_type == "checkbox":
106 label_classes.append("usa-checkbox__label")
107 elif not field.use_fieldset:
108 label_classes.append("usa-label")
109
110 if field.errors:
111 # associate the field programmatically with its error message
112 message_div_id = f"{attrs['id']}__error-message"
113 described_by.append(message_div_id)
114
115 # set the field invalid
116 # due to weirdness, this must be a string, not a boolean
117 attrs["aria-invalid"] = "true"
118
119 # style the invalid field
120 classes.append("usa-input--error")
121 label_classes.append("usa-label--error")
122 group_classes.append("usa-form-group--error")
123
124 # convert lists into strings
125
126 if classes:
127 context["classes"] = " ".join(classes)
128
129 if label_classes:
130 context["label_classes"] = " ".join(label_classes)
131
132 if group_classes:
133 context["group_classes"] = " ".join(group_classes)
134
135 if described_by:
136 # ensure we don't overwrite existing attribute value
137 if "aria-describedby" in attrs:
138 described_by.append(attrs["aria-describedby"])
139 attrs["aria-describedby"] = " ".join(described_by)
140
141 # ask Django to give us the widget dict
142 # see Widget.get_context() on
143 # https://docs.djangoproject.com/en/4.1/ref/forms/widgets
144 widget = field.field.widget.get_context(
145 field.html_name, field.value() or field.initial, field.build_widget_attrs(attrs)
146 ) # -> {"widget": {"name": ...}}
147
148 context["widget"] = widget["widget"]
149
150 return context
151
[end of src/registrar/templatetags/field_helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/registrar/templatetags/field_helpers.py b/src/registrar/templatetags/field_helpers.py
--- a/src/registrar/templatetags/field_helpers.py
+++ b/src/registrar/templatetags/field_helpers.py
@@ -21,6 +21,7 @@
add_error_class: like `add_class` but only if field.errors is not empty
add_required_class: like `add_class` but only if field is required
add_label_class: append to input element's label's `class` attribute
+ add_legend_class: append to input element's legend's `class` attribute
add_group_class: append to input element's surrounding tag's `class` attribute
attr_* - adds or replaces any single html attribute for the input
add_error_attr_* - like `attr_*` but only if field.errors is not empty
@@ -50,6 +51,7 @@
# these will be converted to CSS strings
classes = []
label_classes = []
+ legend_classes = []
group_classes = []
# this will be converted to an attribute string
@@ -82,6 +84,8 @@
elif key == "add_label_class":
label_classes.append(value)
+ elif key == "add_legend_class":
+ legend_classes.append(value)
elif key == "add_group_class":
group_classes.append(value)
@@ -129,6 +133,9 @@
if label_classes:
context["label_classes"] = " ".join(label_classes)
+ if legend_classes:
+ context["legend_classes"] = " ".join(legend_classes)
+
if group_classes:
context["group_classes"] = " ".join(group_classes)
| {"golden_diff": "diff --git a/src/registrar/templatetags/field_helpers.py b/src/registrar/templatetags/field_helpers.py\n--- a/src/registrar/templatetags/field_helpers.py\n+++ b/src/registrar/templatetags/field_helpers.py\n@@ -21,6 +21,7 @@\n add_error_class: like `add_class` but only if field.errors is not empty\n add_required_class: like `add_class` but only if field is required\n add_label_class: append to input element's label's `class` attribute\n+ add_legend_class: append to input element's legend's `class` attribute\n add_group_class: append to input element's surrounding tag's `class` attribute\n attr_* - adds or replaces any single html attribute for the input\n add_error_attr_* - like `attr_*` but only if field.errors is not empty\n@@ -50,6 +51,7 @@\n # these will be converted to CSS strings\n classes = []\n label_classes = []\n+ legend_classes = []\n group_classes = []\n \n # this will be converted to an attribute string\n@@ -82,6 +84,8 @@\n \n elif key == \"add_label_class\":\n label_classes.append(value)\n+ elif key == \"add_legend_class\":\n+ legend_classes.append(value)\n \n elif key == \"add_group_class\":\n group_classes.append(value)\n@@ -129,6 +133,9 @@\n if label_classes:\n context[\"label_classes\"] = \" \".join(label_classes)\n \n+ if legend_classes:\n+ context[\"legend_classes\"] = \" \".join(legend_classes)\n+\n if group_classes:\n context[\"group_classes\"] = \" \".join(group_classes)\n", "issue": "Don't display redundant form field labels\n- Don't display redundant field labels \n- Retain field labels for screenreaders\n", "before_files": [{"content": "\"\"\"Custom field helpers for our inputs.\"\"\"\nimport re\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]_tag(\"includes/input_with_errors.html\", takes_context=True)\ndef input_with_errors(context, field=None): # noqa: C901\n \"\"\"Make an input field along with error handling.\n\n Args:\n field: The field instance.\n\n In addition to the explicit `field` argument, this inclusion_tag takes the\n following \"widget-tweak-esque\" parameters from the surrounding context.\n\n Context args:\n add_class: append to input element's `class` attribute\n add_error_class: like `add_class` but only if field.errors is not empty\n add_required_class: like `add_class` but only if field is required\n add_label_class: append to input element's label's `class` attribute\n add_group_class: append to input element's surrounding tag's `class` attribute\n attr_* - adds or replaces any single html attribute for the input\n add_error_attr_* - like `attr_*` but only if field.errors is not empty\n\n Example usage:\n ```\n {% for form in forms.0 %}\n {% with add_class=\"usa-input--medium\" %}\n {% with attr_required=True attr_disabled=False %}\n {% input_with_errors form.street_address1 %}\n {% endwith %}\n {% endwith %}\n {% endfor }\n\n There are a few edge cases to keep in mind:\n - a \"maxlength\" attribute will cause the input to use USWDS Character counter\n - the field's `use_fieldset` controls whether the output is label/field or\n fieldset/legend/field\n - checkbox label styling is different (this is handled, don't worry about it)\n \"\"\"\n context = context.flatten()\n context[\"field\"] = field\n\n # get any attributes specified in the field's definition\n attrs = dict(field.field.widget.attrs)\n\n # these will be converted to CSS strings\n classes = []\n label_classes = []\n group_classes = []\n\n # this will be converted to an attribute string\n described_by = []\n\n if \"class\" in 
attrs:\n classes.append(attrs.pop(\"class\"))\n\n # parse context for field attributes and classes\n # ---\n # here we loop through all items in the context dictionary\n # (this is the context which was being used to render the\n # outer template in which this {% input_with_errors %} appeared!)\n # and look for \"magic\" keys -- these are used to modify the\n # appearance and behavior of the final HTML\n for key, value in context.items():\n if key.startswith(\"attr_\"):\n attr_name = re.sub(\"_\", \"-\", key[5:])\n attrs[attr_name] = value\n elif key.startswith(\"add_error_attr_\") and field.errors:\n attr_name = re.sub(\"_\", \"-\", key[15:])\n attrs[attr_name] = value\n\n elif key == \"add_class\":\n classes.append(value)\n elif key == \"add_required_class\" and field.required:\n classes.append(value)\n elif key == \"add_error_class\" and field.errors:\n classes.append(value)\n\n elif key == \"add_label_class\":\n label_classes.append(value)\n\n elif key == \"add_group_class\":\n group_classes.append(value)\n\n attrs[\"id\"] = field.auto_id\n\n # do some work for various edge cases\n\n if \"maxlength\" in attrs:\n # associate the field programmatically with its hint text\n described_by.append(f\"{attrs['id']}__message\")\n\n if field.use_fieldset:\n context[\"label_tag\"] = \"legend\"\n else:\n context[\"label_tag\"] = \"label\"\n\n if field.use_fieldset:\n label_classes.append(\"usa-legend\")\n\n if field.widget_type == \"checkbox\":\n label_classes.append(\"usa-checkbox__label\")\n elif not field.use_fieldset:\n label_classes.append(\"usa-label\")\n\n if field.errors:\n # associate the field programmatically with its error message\n message_div_id = f\"{attrs['id']}__error-message\"\n described_by.append(message_div_id)\n\n # set the field invalid\n # due to weirdness, this must be a string, not a boolean\n attrs[\"aria-invalid\"] = \"true\"\n\n # style the invalid field\n classes.append(\"usa-input--error\")\n label_classes.append(\"usa-label--error\")\n group_classes.append(\"usa-form-group--error\")\n\n # convert lists into strings\n\n if classes:\n context[\"classes\"] = \" \".join(classes)\n\n if label_classes:\n context[\"label_classes\"] = \" \".join(label_classes)\n\n if group_classes:\n context[\"group_classes\"] = \" \".join(group_classes)\n\n if described_by:\n # ensure we don't overwrite existing attribute value\n if \"aria-describedby\" in attrs:\n described_by.append(attrs[\"aria-describedby\"])\n attrs[\"aria-describedby\"] = \" \".join(described_by)\n\n # ask Django to give us the widget dict\n # see Widget.get_context() on\n # https://docs.djangoproject.com/en/4.1/ref/forms/widgets\n widget = field.field.widget.get_context(\n field.html_name, field.value() or field.initial, field.build_widget_attrs(attrs)\n ) # -> {\"widget\": {\"name\": ...}}\n\n context[\"widget\"] = widget[\"widget\"]\n\n return context\n", "path": "src/registrar/templatetags/field_helpers.py"}]} | 2,095 | 384 |
gh_patches_debug_56800 | rasdani/github-patches | git_diff | wright-group__WrightTools-522 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
hide fit functionality
</issue>
<code>
[start of WrightTools/__init__.py]
1 """WrightTools init."""
2 # flake8: noqa
3
4
5 # --- import --------------------------------------------------------------------------------------
6
7
8 import sys as _sys
9
10 from .__version__ import *
11 from . import artists
12 from . import collection
13 from . import data
14 from . import diagrams
15 from . import fit
16 from . import kit
17 from . import units
18 from . import exceptions
19
20 from ._open import *
21 from .collection._collection import *
22 from .data._data import *
23
24
25 # --- rcparams ------------------------------------------------------------------------------------
26
27
28 if int(_sys.version.split('.')[0]) > 2:
29 artists.apply_rcparams('fast')
30
[end of WrightTools/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/WrightTools/__init__.py b/WrightTools/__init__.py
--- a/WrightTools/__init__.py
+++ b/WrightTools/__init__.py
@@ -12,7 +12,6 @@
from . import collection
from . import data
from . import diagrams
-from . import fit
from . import kit
from . import units
from . import exceptions
| {"golden_diff": "diff --git a/WrightTools/__init__.py b/WrightTools/__init__.py\n--- a/WrightTools/__init__.py\n+++ b/WrightTools/__init__.py\n@@ -12,7 +12,6 @@\n from . import collection\n from . import data\n from . import diagrams\n-from . import fit\n from . import kit\n from . import units\n from . import exceptions\n", "issue": "hide fit functionality\n\n", "before_files": [{"content": "\"\"\"WrightTools init.\"\"\"\n# flake8: noqa\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport sys as _sys\n\nfrom .__version__ import *\nfrom . import artists\nfrom . import collection\nfrom . import data\nfrom . import diagrams\nfrom . import fit\nfrom . import kit\nfrom . import units\nfrom . import exceptions\n\nfrom ._open import *\nfrom .collection._collection import *\nfrom .data._data import *\n\n\n# --- rcparams ------------------------------------------------------------------------------------\n\n\nif int(_sys.version.split('.')[0]) > 2:\n artists.apply_rcparams('fast')\n", "path": "WrightTools/__init__.py"}]} | 719 | 87 |
gh_patches_debug_7839 | rasdani/github-patches | git_diff | getsentry__sentry-25 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Limitations on SENTRY_KEY not documented
I assumed that SENTRY_KEY was just any random string that should be unique to a deployment and kept secret (ie, serving a similar purpose to django's SECRET_KEY) so I generated a random string to use there.
It turns out that it's used in a URL for the feeds:
```
url(r'^feeds/%s/messages.xml$' % KEY, MessageFeed(), name='sentry-feed-messages'),
```
(in sentry.urls)
Mine happened to have an '[' in it which caused a very confusing error about an unterminated regex.
It should be documented that SENTRY_KEY will make it into a URL (and further that django will try to then parse it as a regexp when trying to do reverse lookups) and thus can't have most punctuation.
</issue>
<code>
[start of sentry/urls.py]
1 import os
2
3 from django.conf.urls.defaults import *
4
5 from sentry.conf import KEY
6 from sentry import views
7 from sentry.feeds import MessageFeed, SummaryFeed
8
9 SENTRY_ROOT = os.path.dirname(__file__)
10
11 urlpatterns = patterns('',
12 url(r'^_media/(?P<path>.*)$', 'django.views.static.serve',
13 {'document_root': os.path.join(SENTRY_ROOT, 'media')}, name='sentry-media'),
14
15 # Feeds
16
17 url(r'^feeds/%s/messages.xml$' % KEY, MessageFeed(), name='sentry-feed-messages'),
18 url(r'^feeds/%s/summaries.xml$' % KEY, SummaryFeed(), name='sentry-feed-summaries'),
19
20 # JS and API
21
22 url(r'^jsapi/$', views.ajax_handler, name='sentry-ajax'),
23 url(r'^store/$', views.store, name='sentry-store'),
24
25 # Normal views
26
27 url(r'^login$', views.login, name='sentry-login'),
28 url(r'^logout$', views.logout, name='sentry-logout'),
29 url(r'^group/(\d+)$', views.group, name='sentry-group'),
30 url(r'^group/(\d+)/messages$', views.group_message_list, name='sentry-group-messages'),
31 url(r'^group/(\d+)/messages/(\d+)$', views.group_message_details, name='sentry-group-message'),
32 url(r'^group/(\d+)/actions/([\w_-]+)', views.group_plugin_action, name='sentry-group-plugin-action'),
33
34 url(r'^$', views.index, name='sentry'),
35 )
36
[end of sentry/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry/urls.py b/sentry/urls.py
--- a/sentry/urls.py
+++ b/sentry/urls.py
@@ -1,4 +1,5 @@
import os
+import re
from django.conf.urls.defaults import *
@@ -14,8 +15,8 @@
# Feeds
- url(r'^feeds/%s/messages.xml$' % KEY, MessageFeed(), name='sentry-feed-messages'),
- url(r'^feeds/%s/summaries.xml$' % KEY, SummaryFeed(), name='sentry-feed-summaries'),
+ url(r'^feeds/%s/messages.xml$' % re.escape(KEY), MessageFeed(), name='sentry-feed-messages'),
+ url(r'^feeds/%s/summaries.xml$' % re.escape(KEY), SummaryFeed(), name='sentry-feed-summaries'),
# JS and API
| {"golden_diff": "diff --git a/sentry/urls.py b/sentry/urls.py\n--- a/sentry/urls.py\n+++ b/sentry/urls.py\n@@ -1,4 +1,5 @@\n import os\n+import re\n \n from django.conf.urls.defaults import *\n \n@@ -14,8 +15,8 @@\n \n # Feeds\n \n- url(r'^feeds/%s/messages.xml$' % KEY, MessageFeed(), name='sentry-feed-messages'),\n- url(r'^feeds/%s/summaries.xml$' % KEY, SummaryFeed(), name='sentry-feed-summaries'),\n+ url(r'^feeds/%s/messages.xml$' % re.escape(KEY), MessageFeed(), name='sentry-feed-messages'),\n+ url(r'^feeds/%s/summaries.xml$' % re.escape(KEY), SummaryFeed(), name='sentry-feed-summaries'),\n \n # JS and API\n", "issue": "Limitations on SENTRY_KEY not documented\nI assumed that SENTRY_KEY was just any random string that should be unique to a deployment and kept secret (ie, serving a similar purpose to django's SECRET_KEY) so I generated a random string to use there. \n\nIt turns out that it's used in a URL for the feeds:\n\n```\nurl(r'^feeds/%s/messages.xml$' % KEY, MessageFeed(), name='sentry-feed-messages'),\n```\n\n(in sentry.urls)\n\nMine happened to have an '[' in it which caused a very confusing error about an unterminated regex. \n\nIt should be documented that SENTRY_KEY will make it into a URL (and further that django will try to then parse it as a regexp when trying to do reverse lookups) and thus can't have most punctuation. \n\n", "before_files": [{"content": "import os\n\nfrom django.conf.urls.defaults import *\n\nfrom sentry.conf import KEY\nfrom sentry import views\nfrom sentry.feeds import MessageFeed, SummaryFeed\n\nSENTRY_ROOT = os.path.dirname(__file__) \n\nurlpatterns = patterns('',\n url(r'^_media/(?P<path>.*)$', 'django.views.static.serve',\n {'document_root': os.path.join(SENTRY_ROOT, 'media')}, name='sentry-media'),\n\n # Feeds\n\n url(r'^feeds/%s/messages.xml$' % KEY, MessageFeed(), name='sentry-feed-messages'),\n url(r'^feeds/%s/summaries.xml$' % KEY, SummaryFeed(), name='sentry-feed-summaries'),\n\n # JS and API\n\n url(r'^jsapi/$', views.ajax_handler, name='sentry-ajax'),\n url(r'^store/$', views.store, name='sentry-store'),\n \n # Normal views\n\n url(r'^login$', views.login, name='sentry-login'),\n url(r'^logout$', views.logout, name='sentry-logout'),\n url(r'^group/(\\d+)$', views.group, name='sentry-group'),\n url(r'^group/(\\d+)/messages$', views.group_message_list, name='sentry-group-messages'),\n url(r'^group/(\\d+)/messages/(\\d+)$', views.group_message_details, name='sentry-group-message'),\n url(r'^group/(\\d+)/actions/([\\w_-]+)', views.group_plugin_action, name='sentry-group-plugin-action'),\n\n url(r'^$', views.index, name='sentry'),\n)\n", "path": "sentry/urls.py"}]} | 1,112 | 194 |
gh_patches_debug_21879 | rasdani/github-patches | git_diff | ansible__ansible-17489 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delegation of service action fails when using different init systems
Hello ansible devs,
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`service` action, as determined by `lib/ansible/plugins/action/service.py`
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 1c33b5a9f0) last updated 2016/08/30 09:23:53 (GMT +200)
lib/ansible/modules/core: (detached HEAD 5310bab12f) last updated 2016/08/30 09:23:59 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 2ef4a34eee) last updated 2016/08/30 09:23:59 (GMT +200)
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nothing specific to this bug
##### OS / ENVIRONMENT
Debian stretch/testing on amd64, clients are wheezy systems running sysvinit and jessie systems running systemd.
##### SUMMARY
Depending on the gathered host systems facts, ansible decides which `service_mgr` to use. On systemd systems, this is going to be `systemd`, on sysvinit systems, it's `service` and so on. The problem is, when delegating a service action from a system running systemd to a system running something else, ansible still uses the systemd service_mgr, which obviously fails.
A workaround is forcing the action to `service` with the un(der)documented `use` option as shown in the third play.
##### STEPS TO REPRODUCE
consider the following playbook. on some_sysvinit_host, the icinga service needs to be restarted. This fails when that action is delegated and no special care is taken to work around the bug:
```
- hosts: monitoring_running_sysvinit
name: Verify that direct service restart works
tasks:
- service: name=icinga state=restarted
- hosts: client_with_systemd
name: Check whether it falls back to generic action w/o facts
gather_facts: false
tasks:
- service: name=icinga state=restarted
delegate_to: monitoring_running_sysvinit
- hosts: client_with_systemd
name: Delegated to host running sysvinit, force 'service' handler
tasks:
- service: name=icinga state=restarted use=service
delegate_to: monitoring_running_sysvinit
- hosts: client_with_systemd
name: Delegated to host running sysvinit
tasks:
- service: name=icinga state=restarted
delegate_to: monitoring_running_sysvinit
```
##### EXPECTED RESULTS
All four actions should restart the icinga service on monitoring_running_sysvinit successfully.
##### ACTUAL RESULTS
The first action, running directly on the monitoring_running_sysvinit host restarts correctly.
Delegation works when we're working around the bug by either not gathering facts (and so, forcing client_with_systemd's service_mgr to the fallback `service`) or by overwriting the helper action to `service` by setting the `use` action for the delegated service action.
When just delegating from the systemd host to the sysvinit (or any other combination), the incompatible action fails:
```
$ ansible-playbook test-delegation-service.yml --sudo --verbose --diff
PLAY [Verify that direct service restart works] ********************************
TASK [setup] *******************************************************************
ok: [monitoring_running_sysvinit]
TASK [service] *****************************************************************
changed: [monitoring_running_sysvinit] => {"changed": true, "name": "icinga", "state": "started"}
PLAY [Check whether it falls back to generic action w/o facts] *****************
TASK [service] *****************************************************************
changed: [client_with_systemd -> monitoring_running_sysvinit] => {"changed": true, "name": "icinga", "state": "started"}
PLAY [Delegated to host running sysvinit, force 'service' handler] *************
TASK [setup] *******************************************************************
ok: [client_with_systemd]
TASK [service] *****************************************************************
changed: [client_with_systemd -> monitoring_running_sysvinit] => {"changed": true, "name": "icinga", "state": "started"}
PLAY [Delegated to host running sysvinit] **************************************
TASK [setup] *******************************************************************
ok: [client_with_systemd]
TASK [service] *****************************************************************
fatal: [client_with_systemd -> monitoring_running_sysvinit]: FAILED! => {"changed": false, "cmd": "None show icinga", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
to retry, use: --limit @test-delegation-service.retry
PLAY RECAP *********************************************************************
client_with_systemd : ok=4 changed=2 unreachable=0 failed=1
monitoring_running_sysvinit : ok=2 changed=1 unreachable=0 failed=0
```
</issue>
<code>
[start of lib/ansible/plugins/action/service.py]
1 # (c) 2015, Ansible Inc,
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 from __future__ import (absolute_import, division, print_function)
18 __metaclass__ = type
19
20
21 from ansible.plugins.action import ActionBase
22
23
24 class ActionModule(ActionBase):
25
26 TRANSFERS_FILES = False
27
28 UNUSED_PARAMS = {
29 'systemd': ['pattern', 'runlevels', 'sleep', 'arguments'],
30 }
31
32 def run(self, tmp=None, task_vars=None):
33 ''' handler for package operations '''
34 if task_vars is None:
35 task_vars = dict()
36
37 result = super(ActionModule, self).run(tmp, task_vars)
38
39 module = self._task.args.get('use', 'auto').lower()
40
41 if module == 'auto':
42 try:
43 module = self._templar.template('{{ansible_service_mgr}}')
44 except:
45 pass # could not get it from template!
46
47 if module == 'auto':
48 facts = self._execute_module(module_name='setup', module_args=dict(gather_subset='!all', filter='ansible_service_mgr'), task_vars=task_vars)
49 self._display.debug("Facts %s" % facts)
50 if 'ansible_facts' in facts and 'ansible_service_mgr' in facts['ansible_facts']:
51 module = facts['ansible_facts']['ansible_service_mgr']
52
53 if not module or module == 'auto' or module not in self._shared_loader_obj.module_loader:
54 module = 'service'
55
56 if module != 'auto':
57 # run the 'service' module
58 new_module_args = self._task.args.copy()
59 if 'use' in new_module_args:
60 del new_module_args['use']
61
62 # for backwards compatibility
63 if 'state' in new_module_args and new_module_args['state'] == 'running':
64 new_module_args['state'] = 'started'
65
66 if module in self.UNUSED_PARAMS:
67 for unused in self.UNUSED_PARAMS[module]:
68 if unused in new_module_args:
69 del new_module_args[unused]
70 self._display.warning('Ignoring "%s" as it is not used in "%s"' % (unused, module))
71
72 self._display.vvvv("Running %s" % module)
73 result.update(self._execute_module(module_name=module, module_args=new_module_args, task_vars=task_vars))
74 else:
75 result['failed'] = True
76 result['msg'] = 'Could not detect which service manager to use. Try gathering facts or setting the "use" option.'
77
78 return result
79
[end of lib/ansible/plugins/action/service.py]
[start of lib/ansible/plugins/action/package.py]
1 # (c) 2015, Ansible Inc,
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 from __future__ import (absolute_import, division, print_function)
18 __metaclass__ = type
19
20 from ansible.plugins.action import ActionBase
21
22 try:
23 from __main__ import display
24 except ImportError:
25 from ansible.utils.display import Display
26 display = Display()
27
28
29 class ActionModule(ActionBase):
30
31 TRANSFERS_FILES = False
32
33 def run(self, tmp=None, task_vars=None):
34 ''' handler for package operations '''
35 if task_vars is None:
36 task_vars = dict()
37
38 result = super(ActionModule, self).run(tmp, task_vars)
39
40 module = self._task.args.get('use', 'auto')
41
42 if module == 'auto':
43 try:
44 module = self._templar.template('{{ansible_pkg_mgr}}')
45 except:
46 pass # could not get it from template!
47
48 if module == 'auto':
49 facts = self._execute_module(module_name='setup', module_args=dict(filter='ansible_pkg_mgr', gather_subset='!all'), task_vars=task_vars)
50 display.debug("Facts %s" % facts)
51 if 'ansible_facts' in facts and 'ansible_pkg_mgr' in facts['ansible_facts']:
52 module = getattr(facts['ansible_facts'], 'ansible_pkg_mgr', 'auto')
53
54 if module != 'auto':
55
56 if module not in self._shared_loader_obj.module_loader:
57 result['failed'] = True
58 result['msg'] = 'Could not find a module for %s.' % module
59 return result
60
61 # run the 'package' module
62 new_module_args = self._task.args.copy()
63 if 'use' in new_module_args:
64 del new_module_args['use']
65
66 display.vvvv("Running %s" % module)
67 result.update(self._execute_module(module_name=module, module_args=new_module_args, task_vars=task_vars))
68 return result
69 else:
70 result['failed'] = True
71 result['msg'] = 'Could not detect which package manager to use. Try gathering facts or setting the "use" option.'
72 return result
73
[end of lib/ansible/plugins/action/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/plugins/action/package.py b/lib/ansible/plugins/action/package.py
--- a/lib/ansible/plugins/action/package.py
+++ b/lib/ansible/plugins/action/package.py
@@ -41,7 +41,10 @@
if module == 'auto':
try:
- module = self._templar.template('{{ansible_pkg_mgr}}')
+ if self._task.delegate_to: # if we delegate, we should use delegated host's facts
+ module = self._templar.template("{{hostvars['%s']['ansible_pkg_mgr']}}" % self._task.delegate_to)
+ else:
+ module = self._templar.template('{{ansible_pkg_mgr}}')
except:
pass # could not get it from template!
diff --git a/lib/ansible/plugins/action/service.py b/lib/ansible/plugins/action/service.py
--- a/lib/ansible/plugins/action/service.py
+++ b/lib/ansible/plugins/action/service.py
@@ -40,7 +40,10 @@
if module == 'auto':
try:
- module = self._templar.template('{{ansible_service_mgr}}')
+ if self._task.delegate_to: # if we delegate, we should use delegated host's facts
+ module = self._templar.template("{{hostvars['%s']['ansible_service_mgr']}}" % self._task.delegate_to)
+ else:
+ module = self._templar.template('{{ansible_service_mgr}}')
except:
pass # could not get it from template!
| {"golden_diff": "diff --git a/lib/ansible/plugins/action/package.py b/lib/ansible/plugins/action/package.py\n--- a/lib/ansible/plugins/action/package.py\n+++ b/lib/ansible/plugins/action/package.py\n@@ -41,7 +41,10 @@\n \n if module == 'auto':\n try:\n- module = self._templar.template('{{ansible_pkg_mgr}}')\n+ if self._task.delegate_to: # if we delegate, we should use delegated host's facts\n+ module = self._templar.template(\"{{hostvars['%s']['ansible_pkg_mgr']}}\" % self._task.delegate_to)\n+ else:\n+ module = self._templar.template('{{ansible_pkg_mgr}}')\n except:\n pass # could not get it from template!\n \ndiff --git a/lib/ansible/plugins/action/service.py b/lib/ansible/plugins/action/service.py\n--- a/lib/ansible/plugins/action/service.py\n+++ b/lib/ansible/plugins/action/service.py\n@@ -40,7 +40,10 @@\n \n if module == 'auto':\n try:\n- module = self._templar.template('{{ansible_service_mgr}}')\n+ if self._task.delegate_to: # if we delegate, we should use delegated host's facts\n+ module = self._templar.template(\"{{hostvars['%s']['ansible_service_mgr']}}\" % self._task.delegate_to)\n+ else:\n+ module = self._templar.template('{{ansible_service_mgr}}')\n except:\n pass # could not get it from template!\n", "issue": "Delegation of service action fails when using different init systems\nHello ansible devs,\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\n`service` action, as determined by `lib/ansible/plugins/action/service.py`\n##### ANSIBLE VERSION\n\n```\nansible 2.2.0 (devel 1c33b5a9f0) last updated 2016/08/30 09:23:53 (GMT +200)\n lib/ansible/modules/core: (detached HEAD 5310bab12f) last updated 2016/08/30 09:23:59 (GMT +200)\n lib/ansible/modules/extras: (detached HEAD 2ef4a34eee) last updated 2016/08/30 09:23:59 (GMT +200)\n config file = \n configured module search path = Default w/o overrides\n```\n##### CONFIGURATION\n\nnothing specific to this bug\n##### OS / ENVIRONMENT\n\nDebian stretch/testing on amd64, clients are wheezy systems running sysvinit and jessie systems running systemd.\n##### SUMMARY\n\nDepending on the gathered host systems facts, ansible decides which `service_mgr` to use. On systemd systems, this is going to be `systemd`, on sysvinit systems, it's `service` and so on. The problem is, when delegating a service action from a system running systemd to a system running something else, ansible still uses the systemd service_mgr, which obviously fails.\n\nA workaround is forcing the action to `service` with the un(der)documented `use` option as shown in the third play.\n##### STEPS TO REPRODUCE\n\nconsider the following playbook. on some_sysvinit_host, the icinga service needs to be restarted. 
This fails when that action is delegated and no special care is taken to work around the bug:\n\n```\n- hosts: monitoring_running_sysvinit\n name: Verify that direct service restart works\n tasks:\n - service: name=icinga state=restarted\n\n- hosts: client_with_systemd\n name: Check whether it falls back to generic action w/o facts\n gather_facts: false\n tasks:\n - service: name=icinga state=restarted\n delegate_to: monitoring_running_sysvinit\n\n- hosts: client_with_systemd\n name: Delegated to host running sysvinit, force 'service' handler\n tasks:\n - service: name=icinga state=restarted use=service\n delegate_to: monitoring_running_sysvinit\n\n- hosts: client_with_systemd\n name: Delegated to host running sysvinit\n tasks:\n - service: name=icinga state=restarted\n delegate_to: monitoring_running_sysvinit\n```\n##### EXPECTED RESULTS\n\nAll four actions should restart the icinga service on monitoring_running_sysvinit successfully.\n##### ACTUAL RESULTS\n\nThe first action, running directly on the monitoring_running_sysvinit host restarts correctly.\n\nDelegation works when we're working around the bug by either not gathering facts (and so, forcing client_with_systemd's service_mgr to the fallback `service`) or by overwriting the helper action to `service` by setting the `use` action for the delegated service action.\n\nWhen just delegating from the systemd host to the sysvinit (or any other combination), the incompatible action fails:\n\n```\n$ ansible-playbook test-delegation-service.yml --sudo --verbose --diff\nPLAY [Verify that direct service restart works] ********************************\n\nTASK [setup] *******************************************************************\nok: [monitoring_running_sysvinit]\n\nTASK [service] *****************************************************************\nchanged: [monitoring_running_sysvinit] => {\"changed\": true, \"name\": \"icinga\", \"state\": \"started\"}\n\nPLAY [Check whether it falls back to generic action w/o facts] *****************\n\nTASK [service] *****************************************************************\nchanged: [client_with_systemd -> monitoring_running_sysvinit] => {\"changed\": true, \"name\": \"icinga\", \"state\": \"started\"}\n\nPLAY [Delegated to host running sysvinit, force 'service' handler] *************\n\nTASK [setup] *******************************************************************\nok: [client_with_systemd]\n\nTASK [service] *****************************************************************\nchanged: [client_with_systemd -> monitoring_running_sysvinit] => {\"changed\": true, \"name\": \"icinga\", \"state\": \"started\"}\n\nPLAY [Delegated to host running sysvinit] **************************************\n\nTASK [setup] *******************************************************************\nok: [client_with_systemd]\n\nTASK [service] *****************************************************************\nfatal: [client_with_systemd -> monitoring_running_sysvinit]: FAILED! 
=> {\"changed\": false, \"cmd\": \"None show icinga\", \"failed\": true, \"msg\": \"[Errno 2] No such file or directory\", \"rc\": 2}\n to retry, use: --limit @test-delegation-service.retry\n\nPLAY RECAP *********************************************************************\nclient_with_systemd : ok=4 changed=2 unreachable=0 failed=1 \nmonitoring_running_sysvinit : ok=2 changed=1 unreachable=0 failed=0 \n\n```\n\n", "before_files": [{"content": "# (c) 2015, Ansible Inc,\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\n\nfrom ansible.plugins.action import ActionBase\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = False\n\n UNUSED_PARAMS = {\n 'systemd': ['pattern', 'runlevels', 'sleep', 'arguments'],\n }\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for package operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n\n module = self._task.args.get('use', 'auto').lower()\n\n if module == 'auto':\n try:\n module = self._templar.template('{{ansible_service_mgr}}')\n except:\n pass # could not get it from template!\n\n if module == 'auto':\n facts = self._execute_module(module_name='setup', module_args=dict(gather_subset='!all', filter='ansible_service_mgr'), task_vars=task_vars)\n self._display.debug(\"Facts %s\" % facts)\n if 'ansible_facts' in facts and 'ansible_service_mgr' in facts['ansible_facts']:\n module = facts['ansible_facts']['ansible_service_mgr']\n\n if not module or module == 'auto' or module not in self._shared_loader_obj.module_loader:\n module = 'service'\n\n if module != 'auto':\n # run the 'service' module\n new_module_args = self._task.args.copy()\n if 'use' in new_module_args:\n del new_module_args['use']\n\n # for backwards compatibility\n if 'state' in new_module_args and new_module_args['state'] == 'running':\n new_module_args['state'] = 'started'\n\n if module in self.UNUSED_PARAMS:\n for unused in self.UNUSED_PARAMS[module]:\n if unused in new_module_args:\n del new_module_args[unused]\n self._display.warning('Ignoring \"%s\" as it is not used in \"%s\"' % (unused, module))\n\n self._display.vvvv(\"Running %s\" % module)\n result.update(self._execute_module(module_name=module, module_args=new_module_args, task_vars=task_vars))\n else:\n result['failed'] = True\n result['msg'] = 'Could not detect which service manager to use. 
Try gathering facts or setting the \"use\" option.'\n\n return result\n", "path": "lib/ansible/plugins/action/service.py"}, {"content": "# (c) 2015, Ansible Inc,\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom ansible.plugins.action import ActionBase\n\ntry:\n from __main__ import display\nexcept ImportError:\n from ansible.utils.display import Display\n display = Display()\n\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = False\n\n def run(self, tmp=None, task_vars=None):\n ''' handler for package operations '''\n if task_vars is None:\n task_vars = dict()\n\n result = super(ActionModule, self).run(tmp, task_vars)\n\n module = self._task.args.get('use', 'auto')\n\n if module == 'auto':\n try:\n module = self._templar.template('{{ansible_pkg_mgr}}')\n except:\n pass # could not get it from template!\n\n if module == 'auto':\n facts = self._execute_module(module_name='setup', module_args=dict(filter='ansible_pkg_mgr', gather_subset='!all'), task_vars=task_vars)\n display.debug(\"Facts %s\" % facts)\n if 'ansible_facts' in facts and 'ansible_pkg_mgr' in facts['ansible_facts']:\n module = getattr(facts['ansible_facts'], 'ansible_pkg_mgr', 'auto')\n\n if module != 'auto':\n\n if module not in self._shared_loader_obj.module_loader:\n result['failed'] = True\n result['msg'] = 'Could not find a module for %s.' % module\n return result\n\n # run the 'package' module\n new_module_args = self._task.args.copy()\n if 'use' in new_module_args:\n del new_module_args['use']\n\n display.vvvv(\"Running %s\" % module)\n result.update(self._execute_module(module_name=module, module_args=new_module_args, task_vars=task_vars))\n return result\n else:\n result['failed'] = True\n result['msg'] = 'Could not detect which package manager to use. Try gathering facts or setting the \"use\" option.'\n return result\n", "path": "lib/ansible/plugins/action/package.py"}]} | 3,265 | 325 |